Bayesian Indirect Inference using a Parametric Auxiliary Model


1 Bayesian Indirect Inference using a Parametric Auxiliary Model
Dr Chris Drovandi
Queensland University of Technology, Australia
c.drovandi@qut.edu.au
Collaborators: Tony Pettitt and Anthony Lee
February 12, 2014

2 Intro to ABC (intractable likelihood)
Bayesian inference is based on the posterior distribution p(θ|y) ∝ p(y|θ)p(θ); thus Bayesian inference requires the likelihood function p(y|θ).
Many models have an intractable likelihood function.
One possible avenue is approximate Bayesian computation (ABC).

3 Intro to ABC (ABC target)
ABC assumes that simulation from the model is straightforward, x ~ p(y|θ).
Observed and simulated data are compared on the basis of summary statistics, s(·).
ABC is based on the following target distribution:
  p_ε(θ|y) ∝ p(θ) ∫ p(x|θ) K_ε(‖s(y) − s(x)‖) dx,
where ε is the ABC tolerance and K_ε is a kernel weighting function.
If s(·) is sufficient and ε → 0 then p_ε(θ|y) → p(θ|y) (does not happen in practice). Thus there are two sources of error.
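The target above can be sampled crudely by rejection. A minimal sketch, on a toy model of my own choosing (not from the talk): normal data, sample-mean summary, uniform (indicator) kernel K_ε.

```python
import numpy as np

# Toy instance: data are N(theta, 1), summary is the sample mean,
# prior is N(0, 10^2), K_eps is the uniform (indicator) kernel.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=100)
s_obs = y.mean()

def rejection_abc(n_draws=20000, eps=0.05):
    """Keep prior draws whose simulated summary lands within eps of s(y)."""
    theta = rng.normal(0.0, 10.0, size=n_draws)               # theta ~ p(theta)
    x = rng.normal(theta[:, None], 1.0, size=(n_draws, 100))  # x ~ p(.|theta)
    s_sim = x.mean(axis=1)                                    # s(x)
    return theta[np.abs(s_sim - s_obs) <= eps]                # K_eps indicator

post = rejection_abc()
```

Shrinking eps reduces one source of error at the cost of fewer accepted draws; the insufficiency of s(·) is the other source.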

4 Intro to ABC (choosing summary statistics)
ABC has connections with kernel density estimation (Blum, 2009).
The choice of summary statistics involves a trade-off between dimensionality and information loss: the non-parametric aspect wants the summary as low-dimensional as possible, but decreasing the dimension means losing information.
The choice of summary statistics is the most crucial ingredient of the ABC approximation.

5 Intro to ABC (MCMC ABC)
A number of algorithms exist to sample from the ABC target: rejection sampling (Beaumont et al. 2002), MCMC (Marjoram et al. 2003) and SMC (e.g. Sisson et al. 2007; Drovandi and Pettitt 2011).
In this work MCMC ABC is used. It involves the following steps, repeated:
Propose a new parameter θ* ~ q(·|θ).
Simulate a dataset x* ~ p(y|θ*).
Accept with probability
  α = min{1, [p(θ*) K_ε(‖s(y) − s(x*)‖) q(θ|θ*)] / [p(θ) K_ε(‖s(y) − s(x)‖) q(θ*|θ)]}.

6 Introduction
Approximate Bayesian Computation
ABC is becoming standard for dealing with an intractable (generative) model p(y|θ).
Choosing the summary statistics is the most difficult aspect.
Bayesian Indirect Inference (BII)
Propose another (tractable) auxiliary model p_A(y|φ).
Use the auxiliary model to build summary statistics or to replace the likelihood.
Aim: review and compare.

7 BII Methods
[Figure: BII Methods]

8 ABC II Methods
The ABC target distribution is
  p_{ε,n}(θ|y) ∝ p(θ) ∫ p(x_{1:n}|θ) 1(ρ(s(x_{1:n}), s(y)) ≤ ε) dx_{1:n},
where y is the observed data (assume N observations), x_{1:n} is the simulated data (of size nN), θ is the parameter, p(θ) is the prior, p(x_{1:n}|θ) is the likelihood, s(·) is the summary statistic and ρ(·,·) is a discrepancy function.
A common approach in II is to take n > 1.
It can be shown that ABC can be over-precise when n > 1; therefore take n = 1 for ABC II.

9 ABC IP (Drovandi et al. 2011)
The parameter estimates of the auxiliary model are the summary statistics.
Simulate data from the generative model, x ~ p(·|θ).
Estimate the auxiliary parameter
  φ̂(θ, x) = argmax_φ p_A(x|φ).
Compare with φ̂(y):
  ρ(s(x), s(y)) = (φ̂(x) − φ̂(y))^T I(φ̂(y)) (φ̂(x) − φ̂(y)).
Efficient weighting of the summary statistics using the observed Fisher information I(φ̂(y)).
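A sketch of the ABC IP summary and discrepancy on a toy pairing of my own choosing: Poisson data with a Normal(µ, τ²) auxiliary model, for which the auxiliary MLE and the observed Fisher information have closed forms.

```python
import numpy as np

# Toy instance (mine, not the talk's code): true model Poisson(lambda),
# auxiliary model Normal(mu, tau^2).
rng = np.random.default_rng(1)
y = rng.poisson(5.0, size=200)

def aux_mle(data):
    # Normal MLE: sample mean and (biased) sample variance
    return np.array([data.mean(), data.var()])

def obs_info(data):
    # Observed Fisher information of the normal model at its MLE:
    # I = diag(n / tau^2, n / (2 tau^4))
    n, tau2 = len(data), data.var()
    return np.diag([n / tau2, n / (2.0 * tau2**2)])

phi_obs = aux_mle(y)
I_obs = obs_info(y)

def rho_ip(x):
    """Weighted discrepancy (phi(x) - phi(y))^T I(phi(y)) (phi(x) - phi(y))."""
    d = aux_mle(x) - phi_obs
    return float(d @ I_obs @ d)
```

Simulated data from the right parameter region give a small discrepancy; data from a distant parameter give a much larger one.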

10 ABC IL (Gleim and Pigorsch 2013)
Same as ABC IP but uses the likelihood in the discrepancy function.
The parameter estimates of the auxiliary model are the summary statistics.
Simulate data from the generative model, x ~ p(·|θ).
Estimate the auxiliary parameter
  φ̂(θ, x) = argmax_φ p_A(x|φ).
Compare with φ̂(y):
  ρ(s(x), s(y)) = log p_A(y|φ̂(y)) − log p_A(y|φ̂(x)).

11 ABC IS (Gleim and Pigorsch 2013)
Uses the scores of the auxiliary model as summary statistics,
  S(y, φ) = (∂ log p_A(y|φ)/∂φ_1, …, ∂ log p_A(y|φ)/∂φ_{dim(φ)})^T.
Simulate data from the generative model, x ~ p(·|θ).
Evaluate the scores of the auxiliary model based on the simulated data and φ̂(y), i.e. S(x, φ̂(y)).
Note that S(y, φ̂(y)) = 0.
Uses the following discrepancy function:
  ρ(s(x), s(y)) = S(x, φ̂(y))^T I(φ̂(y))^{-1} S(x, φ̂(y)).
Very fast when the scores are analytic (no fitting of the auxiliary model).
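The score discrepancy can be sketched with a Normal(µ, τ²) auxiliary model, whose scores are analytic (a toy instance of my own; the score formulas below are the standard normal-model ones).

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=500)
mu_hat, tau2_hat = y.mean(), y.var()      # auxiliary MLE phi(y)
n = len(y)
# inverse observed information: diag(tau^2/n, 2 tau^4/n)
I_inv = np.diag([tau2_hat / n, 2.0 * tau2_hat**2 / n])

def score(x):
    """Score of the normal auxiliary log-likelihood, evaluated at phi(y)."""
    s_mu = np.sum(x - mu_hat) / tau2_hat
    s_tau2 = -len(x) / (2 * tau2_hat) + np.sum((x - mu_hat) ** 2) / (2 * tau2_hat**2)
    return np.array([s_mu, s_tau2])

def rho_is(x):
    """S(x, phi(y))^T I(phi(y))^{-1} S(x, phi(y))."""
    s = score(x)
    return float(s @ I_inv @ s)
```

As the slide notes, S(y, φ̂(y)) = 0 by construction, so ρ measures how far the simulated data pull the score away from zero.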

12 ABC II Assumptions
Assumption (ABC IP): The estimator of the auxiliary parameter, φ̂(θ, x), is unique for all θ with positive prior support.
Assumption (ABC IL): The auxiliary likelihood evaluated at the auxiliary estimate, p_A(y|φ̂(x, θ)), is unique for all θ with positive prior support.
Assumption (ABC IS): The MLE of the auxiliary model fitted to the observed data, φ̂(y), is an interior point of the parameter space of φ. The log-likelihood of the auxiliary model, log p_A(·|φ), is differentiable and the score, S(x, φ̂(y)), is unique for any x that may be drawn from any θ with positive prior support.

13 BIL (Gallant and McCulloch 2009; Reeves and Pettitt 2005)
Replaces the true likelihood with the auxiliary likelihood.
Simulate data from the generative model, x_{1:n} ~ p(·|θ).
Estimate the auxiliary parameter
  φ̂(θ, x_{1:n}) = argmax_φ p_A(x_{1:n}|φ).
Evaluate the auxiliary likelihood p_A(y|φ̂(θ, x_{1:n})).
Not ABC: there are no summary statistics and no ABC tolerance.
Has the following target distribution:
  p_n(θ|y) ∝ p(θ) ∫ p_A(y|φ̂(θ, x_{1:n})) p(x_{1:n}|θ) dx_{1:n},
which depends on n. Theoretically it behaves very differently to ABC II.

14 BIL Theory
Assumption (BIL): As n → ∞, p_A(y|φ̂(x_{1:n}, θ)) converges in probability to p_A(y|φ(θ)) for all θ with positive prior support.
Result (BIL target as n → ∞): The target distribution of BIL for n → ∞ is
  p_∞(θ|y) ∝ p_A(y|φ(θ)) p(θ).
Thus BIL targets the correct posterior distribution provided that p_A(y|φ(θ)) ∝ p(y|θ) as a function of θ.

15 BIL Theory
What about the target for finite n?
p_n and p_∞ have the same target provided that E[p_A(y|φ̂(θ, x_{1:n}))] = p_A(y|φ(θ)) (Andrieu and Roberts 2009).
In general p_A(y|φ̂(θ, x_{1:n})) is a biased, noisy estimate of p_A(y|φ(θ)).
We can therefore anticipate better approximations for BIL when n is large.
This contrasts with ABC II, which needs n = 1.

16 Synthetic Likelihood as a BIL Method
Reduces the data to a set of summary statistics. Target distribution: p(θ|s(y)) ∝ p(s(y)|θ) p(θ). But p(s(y)|θ) is unknown.
The synthetic likelihood approach of Wood (2010) assumes a parametric model, p_A(s(y)|φ(θ)) — thus a BIL method. Wood takes p_A to be multivariate normal, N(µ(θ), Σ(θ)).
For some θ, simulate n replicates of size N, x_{1:n} = (x^{(1)}, …, x^{(n)}), an iid sample from p(s(y)|θ). Fit the auxiliary model to this dataset:
  µ(x_{1:n}, θ) = (1/n) Σ_{i=1}^n s(x^{(i)}),
  Σ(x_{1:n}, θ) = (1/n) Σ_{i=1}^n (s(x^{(i)}) − µ(x_{1:n}, θ))(s(x^{(i)}) − µ(x_{1:n}, θ))^T.
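The fit-and-evaluate step above can be sketched for a single θ; the model and summaries below are my own toy choices (sample mean and log sample variance of normal data).

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

def summary(x):
    """Toy summary statistic vector s(x)."""
    return np.array([x.mean(), np.log(x.var())])

def synthetic_loglik(theta, s_obs, n_reps=200, N=100):
    """Fit N(mu(theta), Sigma(theta)) to n_reps simulated summaries,
    then evaluate its log-density at the observed summary."""
    sims = np.array([summary(rng.normal(theta, 1.0, size=N)) for _ in range(n_reps)])
    mu = sims.mean(axis=0)
    Sigma = np.cov(sims, rowvar=False)
    return multivariate_normal(mu, Sigma).logpdf(s_obs)

y = rng.normal(1.0, 1.0, size=100)
s_obs = summary(y)
```

A θ near the truth gives a much higher synthetic log-likelihood than a distant one, which is what an MCMC sampler over θ would exploit.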

17 ABC as BIL with a Non-Parametric Auxiliary Model
ABC is recovered as a BIL method by selecting a non-parametric auxiliary likelihood.
Full data (dABC): simulate n datasets x_{1:n} = (x^{(1)}, …, x^{(n)}) and estimate the likelihood by
  (1/n) Σ_{i=1}^n K_ε(ρ(y, x^{(i)})).
Data reduced to a summary statistic (sABC):
  (1/n) Σ_{i=1}^n K_ε(ρ(s(y), s(x^{(i)}))).
ABC II is a special sABC method where the summary statistics come from a parametric auxiliary model.
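The sABC estimator above is just an average of kernel weights over n simulated summaries. A sketch with a Gaussian K_ε and a toy normal model of my own choosing (summary = sample mean):

```python
import numpy as np

rng = np.random.default_rng(7)

def sabc_lik_hat(theta, s_obs, n=200, N=50, eps=0.1):
    """(1/n) sum_i K_eps(s(x^(i)) - s(y)) with a Gaussian kernel."""
    s_sim = rng.normal(theta, 1.0, size=(n, N)).mean(axis=1)   # s(x^(i))
    weights = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
    return weights.mean()

y = rng.normal(0.0, 1.0, size=50)
s_obs = y.mean()
```

This estimator is noisy but non-negative and unbiased for the smoothed ABC likelihood, which is why plugging it into MCMC yields a valid (pseudo-marginal) sampler.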

18 Theoretical comparison of BIL and ABC II
BIL is exact when the true model is contained within the auxiliary model (as n → ∞). This suggests choosing a flexible auxiliary model.
Even when the true model is a special case, the ABC II statistics are not sufficient in general.
When ABC II does have sufficient statistics, BIL is not exact in general, even as n → ∞.
Some toy examples later...

19 Sampling from the ABC II Target - MCMC ABC
Algorithm 1 MCMC ABC algorithm of Marjoram et al. (2003)
1: Set θ_0
2: for i = 1 to T do
3:   Draw θ* ~ q(·|θ_{i−1})
4:   Simulate x* ~ p(·|θ*)
5:   Compute r = [p(θ*) q(θ_{i−1}|θ*)] / [p(θ_{i−1}) q(θ*|θ_{i−1})] 1(ρ(s(x*), s(y)) ≤ ε)
6:   if uniform(0,1) < r then
7:     θ_i = θ*
8:   else
9:     θ_i = θ_{i−1}
10:  end if
11: end for
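The algorithm above transcribes directly to Python. The model, prior, proposal and ε below are my own toy choices (normal-mean model, sample-mean summary); the chain is started at the observed summary for convenience.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=100)
s_obs = y.mean()
log_prior = lambda th: -0.5 * th**2 / 100.0       # N(0, 10^2) prior, up to a constant
eps, T = 0.05, 2000

theta = s_obs                                     # step 1: set theta_0
chain = []
for i in range(T):
    prop = theta + rng.normal(0.0, 0.5)           # step 3 (symmetric q, ratio cancels)
    x = rng.normal(prop, 1.0, size=100)           # step 4: x* ~ p(.|theta*)
    # step 5: prior ratio times the ABC indicator
    r = np.exp(log_prior(prop) - log_prior(theta)) * (abs(x.mean() - s_obs) <= eps)
    if rng.uniform() < r:                         # steps 6-10
        theta = prop
    chain.append(theta)
```

Rejected simulations simply repeat the current state, exactly as in lines 8-9 of the pseudocode.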

20 Sampling from the BIL Target - MCMC BIL
We find an increase in the acceptance probability with an increase in n.
Algorithm 2 MCMC BIL algorithm (see also Gallant and McCulloch 2009)
1: Set θ_0
2: Simulate x_{1:n} ~ p(·|θ_0)
3: Compute φ_0 = argmax_φ p_A(x_{1:n}|φ)
4: for i = 1 to T do
5:   Draw θ* ~ q(·|θ_{i−1})
6:   Simulate x*_{1:n} ~ p(·|θ*)
7:   Compute φ̂(x*_{1:n}) = argmax_φ p_A(x*_{1:n}|φ)
8:   Compute r = [p_A(y|φ̂(x*_{1:n})) π(θ*) q(θ_{i−1}|θ*)] / [p_A(y|φ_{i−1}) π(θ_{i−1}) q(θ*|θ_{i−1})]
9:   if uniform(0,1) < r then
10:    θ_i = θ*
11:    φ_i = φ̂(x*_{1:n})
12:  else
13:    θ_i = θ_{i−1}
14:    φ_i = φ_{i−1}
15:  end if
16: end for
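A sketch of this sampler on a Poisson/normal pairing of my own choosing (Poisson data, Normal(µ, τ²) auxiliary model, whose MLE is closed form); the flat prior, proposal and tuning constants are also my choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.poisson(5.0, size=200)
n_rep = 50                                      # n simulated datasets per theta

def aux_mle(lam):
    """Simulate x_{1:n} ~ p(.|lambda) and fit the auxiliary model."""
    x = rng.poisson(lam, size=(n_rep, 200)).ravel()
    return x.mean(), x.var()                    # closed-form normal MLE

def aux_loglik(data, phi):
    mu, tau2 = phi
    return norm(mu, np.sqrt(tau2)).logpdf(data).sum()

lam, phi = 5.0, aux_mle(5.0)
ll = aux_loglik(y, phi)
chain = []
for _ in range(1000):
    prop = abs(lam + rng.normal(0.0, 0.3))      # reflect at 0 (symmetric proposal)
    phi_prop = aux_mle(prop)
    ll_prop = aux_loglik(y, phi_prop)
    # flat prior, symmetric proposal: r is the auxiliary likelihood ratio
    if np.log(rng.uniform()) < ll_prop - ll:
        lam, phi, ll = prop, phi_prop, ll_prop
    chain.append(lam)
```

Note that, as in lines 11 and 14 of the pseudocode, the fitted auxiliary parameter is carried along with the state rather than refitted for the current θ at every iteration.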

21 Toy Example 1 (Drovandi and Pettitt 2013)
True model Poisson(λ); auxiliary model Normal(µ, τ).
ABC II gets lucky, as µ̂ = ȳ is sufficient for λ.
BIL as n → ∞ approximates the Poisson(λ) likelihood with a Normal(λ, λ) likelihood. Not exact.
[Figure: BIL posteriors for increasing n compared with the true posterior]
Acceptance probabilities for increasing n: 46% (n = 1), 67% (n = 10), 72% (n = 100) and 73% (n = 1000).

22 Toy Example 1
Results for BIL when the auxiliary model is mis-specified:
Normal(µ, 9) (underdispersed); Normal(µ, 49) (overdispersed).
ABC II will still work, by chance.
[Figure: BIL posteriors under the mis-specified auxiliary variances compared with the true posterior]

23 Toy Example 2
True model t-distribution(µ, σ, ν = 1); auxiliary model t-distribution(µ, σ, ν).
Here BIL is exact as n → ∞ since the true model is a special case of the auxiliary model.
ABC II does not produce sufficient statistics (the full set of order statistics is minimal sufficient).
[Figure: posteriors for increasing n compared with the true posterior]

24 Toy Example 3 (cont.)
[Figure: posteriors for ABC IP, ABC IL and ABC IS compared with the true posterior]

25 Quantile Distribution Example
Model of interest: the g-and-k quantile distribution, defined in terms of its quantile function:
  Q(z(p); θ) = a + b (1 + c (1 − exp(−g z(p))) / (1 + exp(−g z(p)))) (1 + z(p)²)^k z(p).  (1)
Here p is the quantile, z(p) is the standard normal quantile, θ = (a, b, g, k) and c = 0.8 (see Rayner and MacGillivray 2002).
Numerical likelihood evaluation is possible, but simulation is easier via the inversion method.
Data consist of independent draws with a = 3, b = 1, g = 2 and k = 0.5.
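The inversion method mentioned above amounts to pushing standard normal draws through Q. A sketch of equation (1) and the simulator (parameter values from the slide; sample size my choice):

```python
import numpy as np

def gk_quantile(z, a, b, g, k, c=0.8):
    """g-and-k quantile function Q(z; theta) from equation (1)."""
    return a + b * (1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))) \
               * (1 + z**2) ** k * z

def simulate_gk(n, a, b, g, k, rng):
    """Inversion method: draw z ~ N(0,1) and evaluate Q(z; theta)."""
    return gk_quantile(rng.standard_normal(n), a, b, g, k)

rng = np.random.default_rng(6)
x = simulate_gk(10000, a=3.0, b=1.0, g=2.0, k=0.5, rng=rng)
```

Since z(0.5) = 0, the median of the simulated data should sit at a; this gives a quick sanity check on the simulator.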

26 Quantile Distribution Example (cont.)
The auxiliary model is a 3-component normal mixture model.
Flexible and fits the data well, but breaks the assumption of ABC IP (parameter estimates not unique).
[Figure: estimated data density and fitted 3-component mixture density]

27 Quantile Distribution Example (Results)
[Figure: ABC posteriors for a, b, g and k for increasing n]
Acceptance probabilities: 1.5% for n = 1, 2.7% for n = 2 and 3.8% for n = 4.

28 Quantile Distribution Example (Results)
[Figure: posteriors for a, b, g and k]
MCMC acceptance probabilities: 6.5% for n = 1 and 8.4% for n = 2.

29 Quantile Distribution Example (Results)
[Figure: posteriors for a, b, g and k under ABC IS, ABC IS REG, ABC IL and BIL]

30 Macroparasite Immunity Example
Estimate the parameters of a Markov process model explaining macroparasite population development with host immunity.
212 hosts (cats), i = 1, …, 212. Each cat is injected with l_i juvenile Brugia pahangi larvae (approximately 100 or 200).
At time t_i the host is sacrificed and the number of mature parasites is recorded.
The host is assumed to develop an immunity.
Three-variable problem: M(t) matures, L(t) juveniles, I(t) immunity. Only L(0) and M(t_i) are observed for each host.
No tractable likelihood.

31 [Figure: proportion of mature parasites against time for each host]

32 Trivariate Markov Process of Riley et al. (2003)
Transitions:
Maturation of juveniles: rate γ L(t)
Natural death of matures: rate µ_M M(t)
Natural death of juveniles: rate µ_L L(t)
Death of juveniles due to immunity: rate β I(t) L(t)
Gain of immunity: rate ν L(t)
Loss of immunity: rate µ_I I(t)
(The juvenile and immunity compartments are unobserved after time 0.)

33 Auxiliary Beta-Binomial Model
The data show too much variation for a Binomial; a Beta-Binomial model has an extra parameter to capture dispersion:
  p(m_i|α_i, β_i) = C(l_i, m_i) B(m_i + α_i, l_i − m_i + β_i) / B(α_i, β_i),
where C(·,·) is the binomial coefficient and B(·,·) is the beta function.
Useful reparameterisation: p_i = α_i/(α_i + β_i) and θ_i = 1/(α_i + β_i).
Relate the proportion and overdispersion parameters to the time, t_i, and initial larvae, l_i, covariates:
  logit(p_i) = β_0 + β_1 log(t_i) + β_2 (log(t_i))²,
  log(θ_i) = η_1 if l_i ≈ 100, η_2 if l_i ≈ 200.
Five parameters: φ = (β_0, β_1, β_2, η_1, η_2).
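The auxiliary likelihood for one host can be sketched as follows; the dose split on l_i and the example covariate values are illustrative choices of mine, not the fitted values.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_betabinom(m, l, alpha, beta):
    """log p(m|alpha, beta) = log C(l, m) + log B(m+alpha, l-m+beta) - log B(alpha, beta)."""
    log_comb = gammaln(l + 1) - gammaln(m + 1) - gammaln(l - m + 1)
    return log_comb + betaln(m + alpha, l - m + beta) - betaln(alpha, beta)

def host_loglik(m_i, l_i, t_i, phi):
    """Beta-binomial log-likelihood for one host under the (p_i, theta_i)
    reparameterisation, with the covariate model from the slide."""
    b0, b1, b2, eta1, eta2 = phi
    logit_p = b0 + b1 * np.log(t_i) + b2 * np.log(t_i) ** 2
    p = 1.0 / (1.0 + np.exp(-logit_p))
    theta = np.exp(eta1 if l_i <= 100 else eta2)   # low- vs high-dose dispersion
    # invert p = alpha/(alpha+beta), theta = 1/(alpha+beta)
    alpha, beta = p / theta, (1.0 - p) / theta
    return log_betabinom(m_i, l_i, alpha, beta)
```

Summing host_loglik over the 212 hosts gives the auxiliary log-likelihood log p_A(y|φ) used by the BII methods.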

34 Macroparasite Immunity Results - Compare BII
[Figure: posteriors for ν, µ_I, µ_L and β under ABC IS, ABC IP, ABC IL and BIL]

35 Macroparasite Immunity Results - Regression Adjustment
[Figure: posteriors for ν and µ_L under ABC IS REG, ABC IP REG, ABC IL REG and BIL]

36 Macroparasite Immunity Results - Increase n
[Figure: BIL posteriors for ν, µ_I, µ_L and β for increasing n]

37 Discussion
BIL is very different to ABC II theoretically.
BIL needs a good auxiliary model; ABC II might be more useful if the auxiliary model is mis-specified.
ABC II is more flexible: it can incorporate other summary statistics.

38 Key References
Drovandi et al. (2011). Approximate Bayesian computation using indirect inference. JRSS C.
Drovandi and Pettitt (2013). Bayesian indirect inference.
Gleim and Pigorsch (2013). Approximate Bayesian computation with indirect summary statistics.
Gallant and McCulloch (2009). On the determination of general scientific models with application to asset pricing. JASA.
Reeves and Pettitt (2005). A theoretical framework for approximate Bayesian computation. 20th IWSM.


More information

Bayes Factors, posterior predictives, short intro to RJMCMC. Thermodynamic Integration

Bayes Factors, posterior predictives, short intro to RJMCMC. Thermodynamic Integration Bayes Factors, posterior predictives, short intro to RJMCMC Thermodynamic Integration Dave Campbell 2016 Bayesian Statistical Inference P(θ Y ) P(Y θ)π(θ) Once you have posterior samples you can compute

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee and Andrew O. Finley 2 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

arxiv: v1 [stat.co] 17 Dec 2015

arxiv: v1 [stat.co] 17 Dec 2015 arxiv:1512.05633v1 [stat.co] 17 Dec 2015 Summary Statistics in Approximate Bayesian Computation Dennis Prangle Abstract This document is due to appear as a chapter of the forthcoming Handbook of Approximate

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

Monte Carlo in Bayesian Statistics

Monte Carlo in Bayesian Statistics Monte Carlo in Bayesian Statistics Matthew Thomas SAMBa - University of Bath m.l.thomas@bath.ac.uk December 4, 2014 Matthew Thomas (SAMBa) Monte Carlo in Bayesian Statistics December 4, 2014 1 / 16 Overview

More information

CSC 2541: Bayesian Methods for Machine Learning

CSC 2541: Bayesian Methods for Machine Learning CSC 2541: Bayesian Methods for Machine Learning Radford M. Neal, University of Toronto, 2011 Lecture 10 Alternatives to Monte Carlo Computation Since about 1990, Markov chain Monte Carlo has been the dominant

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

ICML Scalable Bayesian Inference on Point processes. with Gaussian Processes. Yves-Laurent Kom Samo & Stephen Roberts

ICML Scalable Bayesian Inference on Point processes. with Gaussian Processes. Yves-Laurent Kom Samo & Stephen Roberts ICML 2015 Scalable Nonparametric Bayesian Inference on Point Processes with Gaussian Processes Machine Learning Research Group and Oxford-Man Institute University of Oxford July 8, 2015 Point Processes

More information

Approximate Bayesian computation (ABC) and the challenge of big simulation

Approximate Bayesian computation (ABC) and the challenge of big simulation Approximate Bayesian computation (ABC) and the challenge of big simulation Richard Wilkinson School of Mathematical Sciences University of Nottingham September 3, 2014 Computer experiments Rohrlich (1991):

More information

Introduction to Bayesian Inference

Introduction to Bayesian Inference University of Pennsylvania EABCN Training School May 10, 2016 Bayesian Inference Ingredients of Bayesian Analysis: Likelihood function p(y φ) Prior density p(φ) Marginal data density p(y ) = p(y φ)p(φ)dφ

More information

Bayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference

Bayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference Bayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference Osnat Stramer 1 and Matthew Bognar 1 Department of Statistics and Actuarial Science, University of

More information

Generalized Linear Models Introduction

Generalized Linear Models Introduction Generalized Linear Models Introduction Statistics 135 Autumn 2005 Copyright c 2005 by Mark E. Irwin Generalized Linear Models For many problems, standard linear regression approaches don t work. Sometimes,

More information

Bayesian inference for multivariate skew-normal and skew-t distributions

Bayesian inference for multivariate skew-normal and skew-t distributions Bayesian inference for multivariate skew-normal and skew-t distributions Brunero Liseo Sapienza Università di Roma Banff, May 2013 Outline Joint research with Antonio Parisi (Roma Tor Vergata) 1. Inferential

More information

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007

Sequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007 Sequential Monte Carlo and Particle Filtering Frank Wood Gatsby, November 2007 Importance Sampling Recall: Let s say that we want to compute some expectation (integral) E p [f] = p(x)f(x)dx and we remember

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can

More information

Machine Learning CSE546 Carlos Guestrin University of Washington. September 30, What about continuous variables?

Machine Learning CSE546 Carlos Guestrin University of Washington. September 30, What about continuous variables? Linear Regression Machine Learning CSE546 Carlos Guestrin University of Washington September 30, 2014 1 What about continuous variables? n Billionaire says: If I am measuring a continuous variable, what

More information

GAUSSIAN PROCESS REGRESSION

GAUSSIAN PROCESS REGRESSION GAUSSIAN PROCESS REGRESSION CSE 515T Spring 2015 1. BACKGROUND The kernel trick again... The Kernel Trick Consider again the linear regression model: y(x) = φ(x) w + ε, with prior p(w) = N (w; 0, Σ). The

More information

A Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness

A Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness A Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness A. Linero and M. Daniels UF, UT-Austin SRC 2014, Galveston, TX 1 Background 2 Working model

More information

Approximate Bayesian Computation for the Stellar Initial Mass Function

Approximate Bayesian Computation for the Stellar Initial Mass Function Approximate Bayesian Computation for the Stellar Initial Mass Function Jessi Cisewski Department of Statistics Yale University SCMA6 Collaborators: Grant Weller (Savvysherpa), Chad Schafer (Carnegie Mellon),

More information

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1.

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1. Problem 1 (21 points) An economist runs the regression y i = β 0 + x 1i β 1 + x 2i β 2 + x 3i β 3 + ε i (1) The results are summarized in the following table: Equation 1. Variable Coefficient Std. Error

More information

MCMC Sampling for Bayesian Inference using L1-type Priors

MCMC Sampling for Bayesian Inference using L1-type Priors MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling

More information

Chapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1

Chapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1 Chapter 4 HOMEWORK ASSIGNMENTS These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in

More information

Petr Volf. Model for Difference of Two Series of Poisson-like Count Data

Petr Volf. Model for Difference of Two Series of Poisson-like Count Data Petr Volf Institute of Information Theory and Automation Academy of Sciences of the Czech Republic Pod vodárenskou věží 4, 182 8 Praha 8 e-mail: volf@utia.cas.cz Model for Difference of Two Series of Poisson-like

More information

Approximate Bayesian computation for the parameters of PRISM programs

Approximate Bayesian computation for the parameters of PRISM programs Approximate Bayesian computation for the parameters of PRISM programs James Cussens Department of Computer Science & York Centre for Complex Systems Analysis University of York Heslington, York, YO10 5DD,

More information

Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models

Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models Auxiliary Likelihood-Based Approximate Bayesian Computation in State Space Models Gael M. Martin, Brendan P.M. McCabe, Worapree Maneesoonthorn, David Frazier and Christian P. Robert December 2, 2015 Abstract

More information

MCMC 2: Lecture 3 SIR models - more topics. Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham

MCMC 2: Lecture 3 SIR models - more topics. Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham MCMC 2: Lecture 3 SIR models - more topics Phil O Neill Theo Kypraios School of Mathematical Sciences University of Nottingham Contents 1. What can be estimated? 2. Reparameterisation 3. Marginalisation

More information

Predictive Distributions

Predictive Distributions Predictive Distributions October 6, 2010 Hoff Chapter 4 5 October 5, 2010 Prior Predictive Distribution Before we observe the data, what do we expect the distribution of observations to be? p(y i ) = p(y

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

arxiv: v1 [stat.co] 1 Jan 2014

arxiv: v1 [stat.co] 1 Jan 2014 Approximate Bayesian Computation for a Class of Time Series Models BY AJAY JASRA Department of Statistics & Applied Probability, National University of Singapore, Singapore, 117546, SG. E-Mail: staja@nus.edu.sg

More information

Time Series and Dynamic Models

Time Series and Dynamic Models Time Series and Dynamic Models Section 1 Intro to Bayesian Inference Carlos M. Carvalho The University of Texas at Austin 1 Outline 1 1. Foundations of Bayesian Statistics 2. Bayesian Estimation 3. The

More information

Probability and Estimation. Alan Moses

Probability and Estimation. Alan Moses Probability and Estimation Alan Moses Random variables and probability A random variable is like a variable in algebra (e.g., y=e x ), but where at least part of the variability is taken to be stochastic.

More information

ABC random forest for parameter estimation. Jean-Michel Marin

ABC random forest for parameter estimation. Jean-Michel Marin ABC random forest for parameter estimation Jean-Michel Marin Université de Montpellier Institut Montpelliérain Alexander Grothendieck (IMAG) Institut de Biologie Computationnelle (IBC) Labex Numev! joint

More information

Approximate Bayesian computation (ABC) NIPS Tutorial

Approximate Bayesian computation (ABC) NIPS Tutorial Approximate Bayesian computation (ABC) NIPS Tutorial Richard Wilkinson r.d.wilkinson@nottingham.ac.uk School of Mathematical Sciences University of Nottingham December 5 2013 Computer experiments Rohrlich

More information

Generative Models and Stochastic Algorithms for Population Average Estimation and Image Analysis

Generative Models and Stochastic Algorithms for Population Average Estimation and Image Analysis Generative Models and Stochastic Algorithms for Population Average Estimation and Image Analysis Stéphanie Allassonnière CIS, JHU July, 15th 28 Context : Computational Anatomy Context and motivations :

More information

1. Fisher Information

1. Fisher Information 1. Fisher Information Let f(x θ) be a density function with the property that log f(x θ) is differentiable in θ throughout the open p-dimensional parameter set Θ R p ; then the score statistic (or score

More information