Bayesian Indirect Inference using a Parametric Auxiliary Model


1 Bayesian Indirect Inference using a Parametric Auxiliary Model. Dr Chris Drovandi, Queensland University of Technology, Australia. c.drovandi@qut.edu.au. Collaborators: Tony Pettitt and Anthony Lee. July 25, 2014

2 Bayesian Indirect Likelihood Bayesian inference is based on the posterior distribution p(θ | y) ∝ p(y | θ) p(θ). Thus Bayesian inference requires the likelihood function p(y | θ). Many models have an intractable likelihood function. Replace it with some alternative tractable likelihood p_A(y | θ), giving p_A(θ | y) ∝ p_A(y | θ) p(θ). This is referred to as Bayesian Indirect Likelihood (BIL).

3 BIL Methods Composite likelihood (e.g. Pauli et al 2011). Emulation (e.g. Gaussian processes, Wilkinson 2014). Parametric BIL (uses the likelihood of an alternative parametric model, e.g. Gallant and McCulloch 2009). Non-parametric BIL (the traditional ABC method is recovered, e.g. Beaumont et al 2002).

4 pbii Methods pbii - Bayesian Indirect Inference methods that use a parametric auxiliary model in some way. Requires a parametric model with parameter φ and tractable likelihood, p_A(y | φ). There does not have to be an obvious connection between θ and φ. Assume throughout that dim(φ) ≥ dim(θ). Two classes of pbii methods: use the auxiliary model to form summary statistics for ABC (ABC II), or use the auxiliary model to form a replacement likelihood (pbil).

5 ABC II (Intro to ABC) ABC assumes that simulation from the model is straightforward, x ∼ p(· | θ). Compares observed and simulated data on the basis of summary statistics, s(·). Based on the following target distribution: p_ε(θ | y) ∝ p(θ) p_ε(y | θ), where ε is the ABC tolerance and p_ε(y | θ) = ∫ p(x | θ) K_ε(‖s(y) − s(x)‖) dx, where K is a kernel weighting function. This can be estimated unbiasedly, and this is enough (Andrieu and Roberts 2009). If s(·) is sufficient and ε → 0 then p_ε(θ | y) → p(θ | y) (does not happen in practice). Thus there are two sources of error.
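To make the target above concrete, here is a minimal rejection-ABC sketch; `prior_sample`, `simulate` and `s` are hypothetical user-supplied callables, and a uniform kernel K_ε is assumed:

```python
import numpy as np

def rejection_abc(y_obs, prior_sample, simulate, s, eps, num_draws=10_000):
    """Minimal rejection ABC with a uniform kernel K_eps.

    prior_sample() draws theta from the prior, simulate(theta) draws a
    dataset x ~ p(.|theta), and s(.) maps a dataset to its summary
    statistics. Draws whose simulated summaries land within eps of s(y)
    are kept, approximating p_eps(theta | y).
    """
    s_obs = s(y_obs)
    accepted = []
    for _ in range(num_draws):
        theta = prior_sample()
        x = simulate(theta)
        if np.linalg.norm(s(x) - s_obs) <= eps:  # uniform (indicator) kernel
            accepted.append(theta)
    return np.array(accepted)
```

For example, with simulate = lambda th: np.random.normal(th, 1, 100) and s = lambda x: np.array([x.mean()]), the accepted draws approximate p_ε(θ | y).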

6 ABC II (Intro to ABC cont...) ABC has connections with kernel density estimation (Blum 2009). The choice of summary statistics involves a trade-off between dimensionality and information loss: the non-parametric aspect wants the summary to be as low-dimensional as possible, but decreasing the dimension means losing information. The choice of summary statistics is the most crucial aspect of the ABC approximation.

7 ABC II Methods In some applications an alternative parametric auxiliary model can be proposed. Use the auxiliary model to form summary statistics (ABC II): either the auxiliary parameter estimate as the summary statistic (ABC IP and ABC IL), or the score of the auxiliary model as the summary statistic (ABC IS).

8 ABC IP (Drovandi et al 2011) Parameter estimates of the auxiliary model are the summary statistics. Simulate data from the generative model, x ∼ p(· | θ). Estimate the auxiliary parameter φ(θ, x) = argmax_φ p_A(x | φ). Compare with φ(y) via ρ(s(x), s(y)) = (φ(x) − φ(y))^T J(φ(y)) (φ(x) − φ(y)). Efficient weighting of the summary statistics using the observed Fisher information J(φ(y)).
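A minimal sketch of the ABC IP ingredients, assuming a simple normal auxiliary model N(µ, σ²) whose MLE and observed information are closed-form (the helper names are ours, not from the talk):

```python
import numpy as np

def aux_mle(x):
    """MLE of a normal auxiliary model N(mu, sigma2); closed form."""
    return np.array([x.mean(), x.var()])

def aux_obs_info(phi, n):
    """Observed information of N(mu, sigma2), evaluated at the MLE."""
    mu, s2 = phi
    return np.diag([n / s2, n / (2 * s2**2)])

def abc_ip_discrepancy(x_sim, phi_obs, J_obs):
    """ABC IP: quadratic form between auxiliary MLEs, weighted by J(phi(y))."""
    d = aux_mle(x_sim) - phi_obs
    return float(d @ J_obs @ d)
```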

9 ABC IL (Gleim and Pigorsch 2013) Same as ABC IP, but uses the likelihood in the discrepancy function. Parameter estimates of the auxiliary model are the summary statistics. Simulate data from the generative model, x ∼ p(· | θ). Estimate the auxiliary parameter φ(θ, x) = argmax_φ p_A(x | φ). Compare with φ(y) via ρ(s(x), s(y)) = log p_A(y | φ(y)) − log p_A(y | φ(x)).

10 ABC IS (Gleim and Pigorsch 2013) Uses the scores of the auxiliary model as summary statistics: S(y, φ) = (∂ log p_A(y | φ)/∂φ_1, …, ∂ log p_A(y | φ)/∂φ_{dim(φ)})^T. Simulate data from the generative model, x ∼ p(· | θ). Evaluate the scores of the auxiliary model at the simulated data and φ(y): S(x, φ(y)). Note that S(y, φ(y)) = 0. Uses the discrepancy function ρ(s(x), s(y)) = S(x, φ(y))^T J(φ(y))^{−1} S(x, φ(y)). Very fast when the scores are analytic (no fitting of the auxiliary model).
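Under the same assumed normal auxiliary model as above, the ABC IS discrepancy needs only one score evaluation per simulated dataset (a sketch; the names are ours):

```python
import numpy as np

def aux_score(x, phi):
    """Score of the N(mu, sigma2) auxiliary log-likelihood at phi."""
    mu, s2 = phi
    d_mu = np.sum(x - mu) / s2
    d_s2 = -len(x) / (2 * s2) + np.sum((x - mu) ** 2) / (2 * s2**2)
    return np.array([d_mu, d_s2])

def abc_is_discrepancy(x_sim, phi_obs, J_obs):
    """ABC IS: quadratic form in the simulated-data score at phi(y).

    No per-simulation auxiliary fit is needed, only a score evaluation,
    which is why ABC IS is typically the cheapest ABC II method.
    """
    S = aux_score(x_sim, phi_obs)
    return float(S @ np.linalg.solve(J_obs, S))
```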

11 ABC II Assumptions
Assumption (ABC IP): The estimator of the auxiliary parameter, φ(θ, x), is unique for all θ with positive prior support.
Assumption (ABC IL): The auxiliary likelihood evaluated at the auxiliary estimate, p_A(y | φ(θ, x)), is unique for all θ with positive prior support.
Assumption (ABC IS): The MLE of the auxiliary model fitted to the observed data, φ(y), is an interior point of the parameter space of φ, and J(φ(y)) is positive definite. The log-likelihood of the auxiliary model, log p_A(· | φ), is differentiable, and the score, S_A(x, φ(y)), is unique for any x that may be drawn according to any θ that has positive prior support.

12 ABC II vs Traditional ABC Summary Statistics Advantages: can check whether the auxiliary model fits the data well; natural ABC discrepancy functions; the dimensionality of the summary statistics is controlled via a parsimonious auxiliary model (e.g. compare auxiliary models via AIC). Disadvantages: ABC II can be expensive; restricted to applications where an auxiliary model can be proposed.

13 pdbil (Gallant and McCulloch 2009, Reeves and Pettitt 2005) Replaces the true likelihood with the auxiliary likelihood (at the full data level). Simulate n independent datasets from the generative model, x_{1:n} ∼iid p(· | θ). Estimate the auxiliary parameter φ(θ, x_{1:n}) = argmax_φ p_A(x_{1:n} | φ); this provides an estimate of the mapping (binding function) φ(θ). Evaluate the auxiliary likelihood p_A(y | φ(θ, x_{1:n})). Not ABC: no summary statistics or ABC tolerance. Theoretically behaves very differently to ABC II.

14 pdbil (Cont...) Ultimate target of the pdbil method: p_A(θ | y) ∝ p_A(y | φ(θ)) p(θ). Unfortunately the binding function is unknown; estimate it via n iid simulations from the generative model, φ_{θ,n} = φ(θ, x_{1:n}). The target distribution of the method becomes p_{A,n}(θ | y) ∝ p_{A,n}(y | θ) p(θ), where p_{A,n}(y | θ) = ∫ p_A(y | φ_{θ,n}) p(x_{1:n} | θ) dx_{1:n}. p_{A,n}(y | θ) can be estimated unbiasedly, which is enough (Andrieu and Roberts 2009).
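A minimal sketch of one noisy evaluation of the pdbil likelihood, assuming a normal auxiliary model and a hypothetical simulator `simulate`; plugging this into an MCMC acceptance ratio targets p_{A,n}(θ | y):

```python
import numpy as np
from scipy.stats import norm

def pdbil_loglike(theta, y_obs, simulate, n, rng):
    """One noisy evaluation of the pdbil auxiliary log-likelihood.

    Pool n iid simulated datasets, fit the auxiliary model by MLE
    (for an iid normal auxiliary, pooling gives the joint MLE), then
    evaluate log p_A(y | phi_{theta,n}) at the observed full data.
    """
    x = np.concatenate([simulate(theta, rng) for _ in range(n)])
    mu_hat, sd_hat = x.mean(), x.std()  # normal auxiliary MLE
    return norm.logpdf(y_obs, loc=mu_hat, scale=sd_hat).sum()
```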

15 pdbil (Cont...) Comparing p_{A,n}(θ | y) with p_A(θ | y): Under suitable conditions, p_{A,n}(θ | y) → p_A(θ | y) as n → ∞. In reality we must choose a finite n. Unfortunately E[p_A(y | φ_{θ,n})] ≠ p_A(y | φ(θ)), thus p_{A,n}(θ | y) ≠ p_A(θ | y) for finite n. Empirical evidence suggests a loss of precision as n decreases. Can anticipate better approximations for pbil when n is large.

16 pdbil (Cont...) Comparing p_A(θ | y) with p(θ | y): Under suitable conditions, pdbil is exact as n → ∞ if the generative model is a special case of the auxiliary model. Suggests that in this context the auxiliary model needs to be flexible enough to mimic the behaviour of the generative model. Empirical evidence suggests that for a good approximation the auxiliary likelihood must do well in non-negligible posterior regions.

17 Synthetic Likelihood as a psbil Method Reduces the data to a set of summary statistics. Target distribution: p(θ | s(y)) ∝ p(s(y) | θ) p(θ). But p(s(y) | θ) is unknown. The synthetic likelihood approach of Wood (2010) assumes a parametric model, p_A(s(y) | φ(θ)): thus a pbil method. Wood takes p_A to be multivariate normal, N(µ(θ), Σ(θ)). For some θ, simulate n iid replicates of the summary statistic, s(x_1), …, s(x_n), an iid sample from p(s(y) | θ), and fit the auxiliary model to this dataset:
µ(x_{1:n}, θ) = (1/n) Σ_{i=1}^n s(x_i),
Σ(x_{1:n}, θ) = (1/n) Σ_{i=1}^n (s(x_i) − µ(x_{1:n}, θ)) (s(x_i) − µ(x_{1:n}, θ))^T.
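A minimal synthetic-likelihood sketch under the assumptions above (`simulate` and `s` are hypothetical callables; note the slide uses the 1/n covariance estimator while np.cov defaults to 1/(n−1)):

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglike(theta, s_obs, simulate, s, n, rng):
    """Wood-style synthetic likelihood: fit a multivariate normal to n
    simulated summary vectors, then evaluate it at the observed summary.
    Assumes s(.) returns a vector with at least two components."""
    S = np.array([s(simulate(theta, rng)) for _ in range(n)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)  # 1/(n-1) denominator; slide uses 1/n
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)
```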

18 ABC as BIL with a Non-Parametric Auxiliary Model (npbil) ABC is recovered as a BIL method by selecting a non-parametric auxiliary likelihood. Full data (npdbil): choose φ(θ, x_{1:n}) = x_{1:n} and p_A(y | φ(θ, x_{1:n})) = p_A(y | x_{1:n}) = (1/n) Σ_{i=1}^n K_ε(ρ(y, x_i)). Data reduced to summary statistics (npsbil): (1/n) Σ_{i=1}^n K_ε(ρ(s(y), s(x_i))). ABC II is a special npsbil method where the summary statistics come from a parametric auxiliary model.

19 pbii Methods [Figure: overview of the pbii methods]

20 [Figure: the BIL framework. BIL splits into pbil (pdbil, psbil) and npbil = ABC (npdbil, npsbil); ABC II (ABC IP, ABC IL, ABC IS) sits within npsbil.]
Key:
BIL - Bayesian Indirect Likelihood
pbil - parametric BIL
pdbil - pbil based on full data
psbil - pbil based on summary statistics
ABC - approximate Bayesian computation
npbil - non-parametric BIL (equivalent to ABC)
npdbil - npbil based on full data
npsbil - npbil based on summary statistics
ABC II - ABC Indirect Inference
ABC IP - ABC Indirect Parameter
ABC IL - ABC Indirect Likelihood
ABC IS - ABC Indirect Score

21 Comparison of pdbil and ABC II
Theoretical comparison: pdbil is exact when the true model is contained within the auxiliary model (as n → ∞); this suggests choosing a flexible auxiliary model. Even when the true model is a special case, the ABC II statistics are not sufficient in general. When ABC II has sufficient statistics, pdbil is not exact in general, even as n → ∞. Some toy examples later...
Choice of auxiliary model: ABC II requires a good summarisation of the data (independent of the specification of the generative model). The auxiliary model for pdbil does not necessarily have to fit the data well (if the generative model is mis-specified); it requires flexibility with respect to the chosen generative model. If the generative model is (approximately) correct and fits the data well, then the same auxiliary model may be chosen.

22 Sampling from the ABC II Target - MCMC ABC
Algorithm 1 MCMC ABC algorithm of Marjoram et al 2003.
1: Set θ_0
2: for i = 1 to T do
3:   Draw θ* ∼ q(· | θ_{i−1})
4:   Simulate x* ∼ p(· | θ*)
5:   Compute r = [p(θ*) q(θ_{i−1} | θ*)] / [p(θ_{i−1}) q(θ* | θ_{i−1})] · 1(ρ(s(x*), s(y)) ≤ ε)
6:   if uniform(0,1) < r then
7:     θ_i = θ*
8:   else
9:     θ_i = θ_{i−1}
10:  end if
11: end for
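A Python sketch of the listing above; all callables (`simulate`, `propose`, `log_q`, ...) are assumed user-supplied, with log_q(a, b) = log q(a | b):

```python
import numpy as np

def mcmc_abc(y, s, simulate, log_prior, propose, log_q, theta0, eps, T, rng):
    """MCMC ABC in the style of Marjoram et al (2003).

    A move is accepted only if the simulated summaries land within eps
    of the observed ones AND the prior/proposal ratio test passes; this
    is equivalent to multiplying r by the indicator as in the listing.
    """
    s_obs = s(y)
    theta, chain = theta0, [theta0]
    for _ in range(T):
        theta_star = propose(theta, rng)
        x_star = simulate(theta_star, rng)
        log_r = (log_prior(theta_star) + log_q(theta, theta_star)
                 - log_prior(theta) - log_q(theta_star, theta))
        if (np.linalg.norm(s(x_star) - s_obs) <= eps
                and np.log(rng.uniform()) < log_r):
            theta = theta_star
        chain.append(theta)
    return np.array(chain)
```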

23 Sampling from the pdbil Target - MCMC pdbil
We find an increase in the acceptance probability with increasing n.
Algorithm 2 MCMC pdbil algorithm (see also Gallant and McCulloch 2009).
1: Set θ_0
2: Simulate x_{1:n} ∼ p(· | θ_0)
3: Compute φ_0 = argmax_φ p_A(x_{1:n} | φ)
4: for i = 1 to T do
5:   Draw θ* ∼ q(· | θ_{i−1})
6:   Simulate x*_{1:n} ∼ p(· | θ*)
7:   Compute φ̂(x*_{1:n}) = argmax_φ p_A(x*_{1:n} | φ)
8:   Compute r = [p_A(y | φ̂(x*_{1:n})) π(θ*) q(θ_{i−1} | θ*)] / [p_A(y | φ_{i−1}) π(θ_{i−1}) q(θ* | θ_{i−1})]
9:   if uniform(0,1) < r then
10:    θ_i = θ*
11:    φ_i = φ̂(x*_{1:n})
12:  else
13:    θ_i = θ_{i−1}
14:    φ_i = φ_{i−1}
15:  end if
16: end for
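The corresponding pdbil sampler as a sketch, with hypothetical `aux_fit` (auxiliary MLE) and `aux_loglike` callables; as in the listing, the current φ is carried along with θ:

```python
import numpy as np

def mcmc_pdbil(y, aux_loglike, aux_fit, simulate_n, log_prior,
               propose, log_q, theta0, T, rng):
    """MCMC pdbil: the auxiliary likelihood of the observed data, at the
    auxiliary MLE fitted to n fresh simulated datasets, replaces p(y|theta)."""
    theta = theta0
    phi = aux_fit(simulate_n(theta, rng))   # phi_hat(x_{1:n})
    ll = aux_loglike(y, phi)                # log p_A(y | phi_hat)
    chain = [theta0]
    for _ in range(T):
        theta_star = propose(theta, rng)
        phi_star = aux_fit(simulate_n(theta_star, rng))
        ll_star = aux_loglike(y, phi_star)
        log_r = (ll_star + log_prior(theta_star) + log_q(theta, theta_star)
                 - ll - log_prior(theta) - log_q(theta_star, theta))
        if np.log(rng.uniform()) < log_r:
            theta, phi, ll = theta_star, phi_star, ll_star
        chain.append(theta)
    return np.array(chain)
```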

24 Toy Example 1 (Drovandi and Pettitt 2013) True model Poisson(λ) and auxiliary model Normal(µ, τ). ABC II gets lucky, as µ̂ = ȳ is sufficient for λ. pdbil as n → ∞ approximates the Poisson(λ) likelihood with a Normal(λ, λ) likelihood: not exact.
[Figure: pdbil posteriors for increasing n against the true posterior (left); true vs auxiliary log-likelihood over λ (right)]
Acceptance probabilities: 46%, 67%, 72% and 73% for increasing n (n = 1, 10, 100, 1000).
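A quick numerical check of the limiting-approximation claim, with an arbitrary choice of λ = 5 and 200 observations (our choices, not the talk's):

```python
import numpy as np
from scipy.stats import poisson, norm

rng = np.random.default_rng(0)
y = rng.poisson(lam=5.0, size=200)  # hypothetical Poisson data

lam_grid = np.linspace(3.0, 7.0, 200)
ll_true = np.array([poisson.logpmf(y, lam).sum() for lam in lam_grid])
ll_aux = np.array([norm.logpdf(y, loc=lam, scale=np.sqrt(lam)).sum()
                   for lam in lam_grid])
# The two log-likelihood curves are close but not identical near the mode,
# so the limiting pdbil posterior is a good, though inexact, approximation.
```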

25 Toy Example 1 Results for pdbil when the auxiliary model is mis-specified: Normal(µ, 9) (underdispersed) and Normal(µ, 49) (overdispersed). ABC II will still work, by chance.
[Figure: posteriors (left) and log-likelihoods (right) over λ for τ unspecified and the two fixed-τ auxiliary models, against the truth]

26 Toy Example 2 True model t-distribution(µ, σ, ν = 1) and auxiliary model t-distribution(µ, σ, ν). Here pdbil is exact as n → ∞, since the true model is a special case of the auxiliary model. ABC II does not produce sufficient statistics (the full set of order statistics is minimal sufficient).
[Figure: pdbil posteriors for increasing n against the true posterior]

27 Toy Example 2 (cont.)
[Figure: ABC IP, ABC IL and ABC IS posteriors against the true posterior]

28 Quantile Distribution Example Model of interest: the g-and-k quantile distribution, defined in terms of its quantile function:
Q(z(p); θ) = a + b (1 + c (1 − exp(−g z(p))) / (1 + exp(−g z(p)))) (1 + z(p)²)^k z(p).   (1)
Here p is the quantile, z(p) the standard normal quantile, θ = (a, b, g, k) and c = 0.8 (see Rayner and MacGillivray 2002). Numerical likelihood evaluation is possible. Simulation is easier, via the inversion method. Data consist of 10,000 independent draws with a = 3, b = 1, g = 2 and k = 0.5.
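Simulation by inversion is one line given the quantile function; a sketch (parameter values from the slide, sample size as reconstructed above):

```python
import numpy as np

def simulate_gk(n, a, b, g, k, c=0.8, rng=None):
    """Draw from the g-and-k distribution by inversion: push standard
    normal draws z(p) through the quantile function (1)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n)
    e = np.exp(-g * z)
    return a + b * (1 + c * (1 - e) / (1 + e)) * (1 + z**2) ** k * z

y = simulate_gk(10_000, a=3.0, b=1.0, g=2.0, k=0.5)
```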

29 Quantile Distribution Example (Cont...) The auxiliary model is a 3-component normal mixture model: flexible, and fits the data well. But it breaks the assumption of ABC IP (the parameter estimates are not unique, due to label switching of the mixture components).
[Figure: estimated data density and fitted 3-component mixture density]
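A sketch of the label-switching issue, fitting a 3-component mixture with scikit-learn to stand-in data (the real example would use the g-and-k draws above):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
y = rng.standard_normal(10_000)  # stand-in for the observed data

gm = GaussianMixture(n_components=3, random_state=0).fit(y.reshape(-1, 1))
phi_hat = np.concatenate([gm.weights_, gm.means_.ravel(),
                          gm.covariances_.ravel()])
# Permuting the three components changes phi_hat but not the fitted density,
# so the auxiliary MLE is not unique: the ABC IP assumption is violated.
```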

30 pdbil Results
[Figure: pdbil marginal posteriors for (a) a, (b) b, (c) g, (d) k for three increasing values of n]
Acceptance probabilities: 2.8% for n = 1, then 13.1% and 20.8% for the two larger values of n.

31 ABC II Results (no regression adjustment)
[Figure: ABC IP, ABC IL and ABC IS marginal posteriors for (e) a, (f) b, (g) g, (h) k]

32 Comparison of All Methods (note: ABC has regression adjustment)
[Figure: ABC, ABC IS, ABC IL, ABC IP and pdbil (largest n) marginal posteriors for (i) a, (j) b, (k) g, (l) k]

33 4-Component Auxiliary Mixture Model
[Figure: as above but with a 4-component auxiliary mixture: ABC, ABC IS, ABC IL, ABC IP and pdbil marginal posteriors for (m) a, (n) b, (o) g, (p) k]

34 Macroparasite Immunity Example Estimate the parameters of a Markov process model explaining macroparasite population development with host immunity. 212 hosts (cats), i = 1, …, 212. Each cat is injected with l_i juvenile Brugia pahangi larvae (approximately 100 or 200). At time t_i the host is sacrificed and the number of mature parasites is recorded. Hosts are assumed to develop an immunity. Three-variable problem: M(t) matures, L(t) juveniles, I(t) immunity. Only L(0) and M(t_i) are observed for each host. No tractable likelihood.

35 [Figure: proportion of mature parasites against time]

36 Trivariate Markov Process of Riley et al (2003)
[Figure: model diagram.] Transitions: juveniles L(t) mature at rate γL(t); mature parasites M(t) die naturally at rate µ_M M(t); juveniles die naturally at rate µ_L L(t) and die due to immunity at rate βI(t)L(t); immunity I(t) is gained at rate νL(t) and lost at rate µ_I I(t).

37 Auxiliary Beta-Binomial Model The data show too much variation for a Binomial model; a Beta-Binomial model has an extra parameter to capture the dispersion:
p(m_i | α_i, β_i) = C(l_i, m_i) B(m_i + α_i, l_i − m_i + β_i) / B(α_i, β_i),
where C(·,·) is the binomial coefficient and B(·,·) the beta function. A useful reparameterisation is p_i = α_i/(α_i + β_i) and θ_i = 1/(α_i + β_i). Relate the proportion and overdispersion parameters to the time, t_i, and initial larvae, l_i, covariates:
logit(p_i) = β_0 + β_1 log(t_i) + β_2 (log(t_i))²,
log(θ_i) = η_1 if l_i ≈ 100, and η_2 if l_i ≈ 200.
Five parameters: φ = (β_0, β_1, β_2, η_1, η_2).
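A sketch of the auxiliary log-likelihood; the threshold of 150 larvae used to separate the ≈100- and ≈200-larvae groups is our assumption:

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabin_logpmf(m, l, alpha, beta):
    """log Beta-Binomial pmf: C(l, m) B(m+alpha, l-m+beta) / B(alpha, beta)."""
    log_comb = gammaln(l + 1) - gammaln(m + 1) - gammaln(l - m + 1)
    return log_comb + betaln(m + alpha, l - m + beta) - betaln(alpha, beta)

def aux_loglike(phi, m, l, t):
    """Auxiliary log-likelihood with the regression structure on the slide.
    phi = (b0, b1, b2, eta1, eta2); m, l, t are per-host data vectors."""
    b0, b1, b2, eta1, eta2 = phi
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(t) + b2 * np.log(t) ** 2)))
    theta = np.exp(np.where(l <= 150, eta1, eta2))  # assumed group split
    alpha, beta = p / theta, (1.0 - p) / theta
    return betabin_logpmf(m, l, alpha, beta).sum()
```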

38 pdbil Results
[Figure: pdbil marginal posteriors of (q) ν, (r) µ_I, (s) µ_L, (t) β for increasing n]

39 pbii Results (ABC II, no regression adjustment)
[Figure: ABC IS, ABC IP, ABC IL and pdbil marginal posteriors of (u) ν, (v) µ_I, (w) µ_L, (x) β]

40 pbii Results (ABC with regression adjustment)
[Figure: regression-adjusted ABC IS, ABC IP and ABC IL, and pdbil, marginal posteriors of (y) ν and (z) µ_L]

41 Discussion pdbil is very different to ABC II theoretically. pdbil needs a good auxiliary model. ABC II might be useful if the auxiliary model is mis-specified. ABC II is more flexible; it can incorporate other summary statistics.

42 Key References
Drovandi et al. (2011). Approximate Bayesian Computation using Indirect Inference. JRSS C.
Drovandi, Pettitt and Lee (2014). Bayesian Indirect Inference using a Parametric Auxiliary Model.
Gleim and Pigorsch (2013). Approximate Bayesian Computation with Indirect Summary Statistics.
Gallant and McCulloch (2009). On the determination of general scientific models with application to asset pricing. JASA.
Reeves and Pettitt (2005). A theoretical framework for approximate Bayesian computation. 20th IWSM.
