Sequential Monte Carlo Methods


1 University of Pennsylvania, Econ 722, Part 1. February 13, 2019

2 Introduction
Posterior expectations can be approximated by Monte Carlo averages. If we have draws {θ^s}_{s=1}^N from p(θ|Y), then (under some regularity conditions)
(1/N) Σ_{s=1}^N h(θ^s) → E[h(θ)|Y] almost surely.
Standard approach in the DSGE model literature (Schorfheide, 2000; Otrok, 2001): use Markov chain Monte Carlo (MCMC) methods to generate a sequence of serially correlated draws {θ^s}_{s=1}^N.
Unfortunately, standard MCMC can be quite inaccurate, especially in medium- and large-scale DSGE models, e.g., when disentangling the importance of internal versus external propagation mechanisms, or when determining the relative importance of shocks.

3 What Do We Mean by Inaccurate?
News about the future may be important for business cycle fluctuations. Use a DSGE model to construct a histogram (reflecting posterior parameter uncertainty) of the relative importance of news shocks for fluctuations in hours worked. Repeat the posterior computations 20 times; black is "inaccurate," red is "accurate," blue is the prior.
[Figure: histograms of the news-shock share of fluctuations in hours.]

4 Introduction
Previously: modify MCMC algorithms to overcome weaknesses: blocking of parameters; tailoring of (mixture) proposal densities. See Kohn et al. (2010); Chib and Ramamurthy (2011); Curdia and Reis (2011); Herbst (2012).
Now, we use sequential Monte Carlo (SMC) instead (more precisely, sequential importance sampling): better suited to handle the irregular and multimodal posteriors associated with large DSGE models; the algorithms can be easily parallelized. SMC = importance sampling on steroids.
We build on theoretical work by Chopin (2004) and Del Moral, Doucet, and Jasra (2006), and applied work by Creal (2007) and Durham and Geweke (2011, 2012).

5 Review: Importance Sampling
π(θ) = p(θ|Y) = f(θ)/Z is the posterior, with normalization constant Z = ∫ f(θ) dθ. We can approximate E_π[h(θ)] using draws from some other distribution g(·):
E_π[h(θ)] = ∫ h(θ) π(θ) dθ = (1/Z) ∫ h(θ) [f(θ)/g(θ)] g(θ) dθ.
Since Z = ∫ [f(θ)/g(θ)] g(θ) dθ as well, if the θ^i are draws from g(·), then
E_π[h] ≈ [ (1/N) Σ_{i=1}^N h(θ^i) w(θ^i) ] / [ (1/N) Σ_{i=1}^N w(θ^i) ],  w(θ) = f(θ)/g(θ).
References. General: Kloek and van Dijk (1978), Geweke (1989); DSGE model application: DeJong, Ingram, and Whiteman (2000).
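A minimal numerical sketch of this self-normalized estimator (not from the slides): the bimodal target kernel, the N(0, 9) proposal, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized target f(theta): a sum of two Gaussian kernels.
def log_f(theta):
    return np.logaddexp(-0.5 * (theta - 1.0)**2, -0.5 * ((theta + 1.0) / 0.5)**2)

# Proposal g: a wide normal covering both modes.
N = 100_000
theta = rng.normal(0.0, 3.0, size=N)
log_g = -0.5 * (theta / 3.0)**2 - np.log(3.0 * np.sqrt(2 * np.pi))

# Self-normalized importance sampling: w = f/g, E_pi[h] ~ sum(h*w) / sum(w).
log_w = log_f(theta) - log_g
w = np.exp(log_w - log_w.max())      # stabilize before normalizing
post_mean = np.sum(theta * w) / np.sum(w)
ess = np.sum(w)**2 / np.sum(w**2)    # effective sample size diagnostic
print(post_mean, ess)
```

If the proposal misses one of the "peaks and valleys" discussed on the next slide, the ESS collapses, which is exactly the failure mode that tempering is meant to avoid.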

6 From Importance Sampling to Sequential Importance Sampling
In general, it's hard to construct a good proposal density g(θ), especially if the posterior has several peaks and valleys.
Idea, part 1: it might be easier to find a proposal density for the tempered posterior
π_n(θ) = [p(Y|θ)]^{φ_n} p(θ) / ∫ [p(Y|θ)]^{φ_n} p(θ) dθ = f_n(θ)/Z_n,
at least if φ_n is close to zero.
Idea, part 2: we can try to turn a proposal density for π_n into a proposal density for π_{n+1} and iterate, letting φ_n increase until φ_{N_φ} = 1.

7 Illustration
Our state-space model:
y_t = [1 1] s_t,
s_t = [ θ_1^2, 0 ; (1 - θ_1^2) - θ_1θ_2, (1 - θ_1^2) ] s_{t-1} + [1 ; 0] ε_t.
Innovation: ε_t ~ iid N(0, 1).
Prior: uniform on the square 0 ≤ θ_1 ≤ 1 and 0 ≤ θ_2 ≤ 1.
Simulate T = 200 observations given θ = [0.45, 0.45]', which is observationally equivalent to θ = [0.89, 0.22]'.
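A quick way to see the observational equivalence: with φ_1 = θ_1^2, φ_2 = 1 - θ_1^2, and φ_3 = φ_2 - θ_1θ_2, the model implies y_t = [1 + (φ_3 - φ_2)L] / [(1 - φ_1L)(1 - φ_2L)] ε_t, so the distribution of {y_t} depends on (φ_1 + φ_2, φ_1φ_2, φ_3 - φ_2) only. The sketch below checks this numerically, assuming the transition matrix as reconstructed above; 0.89 and 0.22 are the rounded values of the exact twin.

```python
import numpy as np

def reduced_form(theta1, theta2):
    # phi_1 = theta1^2, phi_2 = 1 - theta1^2, phi_3 = phi_2 - theta1*theta2.
    phi1, phi2 = theta1**2, 1.0 - theta1**2
    phi3 = phi2 - theta1 * theta2
    # y_t = [1 + (phi3 - phi2) L] / [(1 - phi1 L)(1 - phi2 L)] eps_t, so the
    # law of {y_t} is pinned down by (phi1 + phi2, phi1*phi2, phi3 - phi2).
    return phi1 + phi2, phi1 * phi2, phi3 - phi2

print(reduced_form(0.45, 0.45))
# The exact twin of (0.45, 0.45); printing matches the line above.
print(reduced_form(np.sqrt(1 - 0.45**2), 0.45 * 0.45 / np.sqrt(1 - 0.45**2)))
```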

8 Illustration: Tempered Posteriors of θ
[Figure: tempered posterior densities of θ_1 for a sequence of stages n.]

9 Illustration: Posterior Draws
[Figure: posterior draws in the (θ_1, θ_2) plane.]

10 SMC Algorithm: A Graphical Illustration
[Diagram: the algorithm cycles through C, S, and M steps as the tempering parameter moves through φ_0, φ_1, φ_2, φ_3, ...]
π_n(θ) is represented by a swarm of particles {θ_n^i, W_n^i}_{i=1}^N:
\bar{h}_{n,N} = (1/N) Σ_{i=1}^N W_n^i h(θ_n^i) → E_{π_n}[h(θ)] almost surely.
C is Correction; S is Selection; and M is Mutation.

11 SMC Algorithm
1. Initialization (φ_0 = 0). Draw the initial particles from the prior: θ_1^i ~ iid p(θ) and W_1^i = 1, i = 1, ..., N.
2. Recursion. For n = 1, ..., N_φ:
   1. Correction. Reweight the particles from stage n-1 by defining the incremental weights
      \tilde{w}_n^i = [p(Y|θ_{n-1}^i)]^{φ_n - φ_{n-1}}   (1)
      and the normalized weights
      \tilde{W}_n^i = \tilde{w}_n^i W_{n-1}^i / [ (1/N) Σ_{i=1}^N \tilde{w}_n^i W_{n-1}^i ],  i = 1, ..., N.   (2)
      An approximation of E_{π_n}[h(θ)] is given by
      \tilde{h}_{n,N} = (1/N) Σ_{i=1}^N \tilde{W}_n^i h(θ_{n-1}^i).   (3)
   2. Selection. (Continued on the next slide.)

12 SMC Algorithm
1. Initialization.
2. Recursion. For n = 1, ..., N_φ:
   1. Correction.
   2. Selection (optional resampling). Let {\hat{θ}_n^i}_{i=1}^N denote N iid draws from a multinomial distribution characterized by support points and weights {θ_{n-1}^i, \tilde{W}_n^i}_{i=1}^N, and set W_n^i = 1. An approximation of E_{π_n}[h(θ)] is given by
      \hat{h}_{n,N} = (1/N) Σ_{i=1}^N W_n^i h(\hat{θ}_n^i).   (4)
   3. Mutation. Propagate the particles {\hat{θ}_n^i, W_n^i} via N_MH steps of a Metropolis-Hastings (MH) algorithm with transition density θ_n^i ~ K_n(θ_n | \hat{θ}_n^i; ζ_n) and stationary distribution π_n(θ). An approximation of E_{π_n}[h(θ)] is given by
      \bar{h}_{n,N} = (1/N) Σ_{i=1}^N h(θ_n^i) W_n^i.   (5)
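To make the C-S-M recursion concrete, here is a minimal, self-contained sketch for a toy scalar model (normal prior, normal likelihood), not a DSGE model. The schedule φ_n = (n/N_φ)^λ, the resample-every-stage choice, and the scale adaptation in the last line are simplifications of the tuning discussed on later slides; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy posterior: theta ~ N(0, 1) prior, y_t ~ N(theta, 1) likelihood.
y = rng.normal(0.5, 1.0, size=50)

def log_lik(theta):                  # vectorized over the particle array
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)

def log_prior(theta):
    return -0.5 * theta ** 2

N, N_phi, lam = 2000, 50, 2.0
phi = (np.arange(N_phi + 1) / N_phi) ** lam   # tempering schedule, phi_0 = 0
theta = rng.normal(size=N)                    # initialization: prior draws
W = np.ones(N)                                # normalized weights (mean one)
log_mdd, c = 0.0, 0.5

for n in range(1, N_phi + 1):
    # Correction: incremental weights wtilde = [p(Y|theta)]^(phi_n - phi_{n-1}).
    log_w = (phi[n] - phi[n - 1]) * log_lik(theta) + np.log(W)
    m = log_w.max()
    log_mdd += m + np.log(np.mean(np.exp(log_w - m)))  # ln of stage average
    W = np.exp(log_w - m)
    W /= W.mean()

    # Selection: multinomial resampling (done every stage here; in practice
    # often triggered only when the effective sample size falls too low).
    idx = rng.choice(N, size=N, p=W / W.sum())
    theta, W = theta[idx], np.ones(N)

    # Mutation: one random-walk MH step with stationary distribution pi_n.
    prop = theta + c * rng.normal(size=N)
    log_alpha = (phi[n] * log_lik(prop) + log_prior(prop)
                 - phi[n] * log_lik(theta) - log_prior(theta))
    accept = np.log(rng.uniform(size=N)) < log_alpha
    theta = np.where(accept, prop, theta)
    # Adapt the proposal scale toward a 0.25 acceptance rate (cf. slide 19).
    c *= 0.95 + 0.10 / (1.0 + np.exp(-16.0 * (accept.mean() - 0.25)))

print("posterior mean:", theta.mean(), " log MDD estimate:", log_mdd)
```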

13 Remarks
Correction step: reweight particles from iteration n-1 to create an importance sampling approximation of E_{π_n}[h(θ)].
Selection step: resampling the particles (good) equalizes the particle weights and thereby increases the accuracy of subsequent importance sampling approximations; (not good) adds a bit of noise to the MC approximation.
Mutation step: adapts particles to the posterior π_n(θ). Imagine we didn't do it: then we would be using draws from the prior p(θ) to approximate the posterior π(θ), which can't be good!

14 Theoretical Properties
Goal: a strong law of large numbers (SLLN) and a central limit theorem (CLT) as N → ∞ for every iteration n = 1, ..., N_φ.
Regularity conditions: proper prior; bounded likelihood function; 2 + δ posterior moments of h(θ).
Idea of proof (Chopin, 2004): the SLLN and CLT can be proved recursively. For step n, assume that the n-1 approximation (with normalized weights) yields
√N ( (1/N) Σ_{i=1}^N h(θ_{n-1}^i) W_{n-1}^i - E_{π_{n-1}}[h(θ)] ) ⇒ N(0, Ω_{n-1}(h)).
Initialization: SLLN and CLT for iid random variables, because we sample from the prior.

15 Theoretical Properties: Correction Step
Suppose that the n-1 approximation (with normalized weights) yields
√N ( (1/N) Σ_{i=1}^N h(θ_{n-1}^i) W_{n-1}^i - E_{π_{n-1}}[h(θ)] ) ⇒ N(0, Ω_{n-1}(h)).
Then
√N ( [ (1/N) Σ_{i=1}^N h(θ_{n-1}^i) [p(Y|θ_{n-1}^i)]^{φ_n-φ_{n-1}} W_{n-1}^i ] / [ (1/N) Σ_{i=1}^N [p(Y|θ_{n-1}^i)]^{φ_n-φ_{n-1}} W_{n-1}^i ] - E_{π_n}[h(θ)] ) ⇒ N(0, \tilde{Ω}_n(h)),
where
\tilde{Ω}_n(h) = Ω_{n-1}( v_{n-1}(θ)(h - E_{π_n}[h]) ),  v_{n-1}(θ) = [p(Y|θ)]^{φ_n-φ_{n-1}} Z_{n-1}/Z_n.
This step relies on likelihood evaluations from iteration n-1 that are already stored in memory.

16 Theoretical Properties: Selection / Resampling
After resampling by drawing from an iid multinomial distribution we obtain
√N ( (1/N) Σ_{i=1}^N h(\hat{θ}_n^i) W_n^i - E_{π_n}[h] ) ⇒ N(0, \hat{Ω}_n(h)),
where \hat{Ω}_n(h) = \tilde{Ω}_n(h) + V_{π_n}[h].
Disadvantage of resampling: it adds noise.
Advantage of resampling: it equalizes the particle weights, reducing the variance of v_n(θ) in \tilde{Ω}_{n+1}(h) = Ω_n( v_n(θ)(h - E_{π_{n+1}}[h]) ).

17 Theoretical Properties: Mutation
We are using the Markov transition kernel K_n(θ|\hat{θ}) to transform draws \hat{θ}_n^i into draws θ_n^i. To preserve the distribution of the \hat{θ}_n^i's, it has to be the case that
π_n(θ) = ∫ K_n(θ|\hat{θ}) π_n(\hat{θ}) d\hat{θ}.
It can be shown that the overall asymptotic variance after the mutation is the sum of
(i) the variance of the approximation of the conditional mean E_{K_n(·|θ_{n-1})}[h(θ)], which is given by \hat{Ω}_n( E_{K_n(·|θ_{n-1})}[h(θ)] );
(ii) a weighted average of the conditional variance V_{K_n(·|θ_{n-1})}[h(θ)]:
∫ W_{n-1}(θ_{n-1}) v_{n-1}(θ_{n-1}) V_{K_n(·|θ_{n-1})}[h(θ)] π_{n-1}(θ_{n-1}) dθ_{n-1}.
This step is embarrassingly parallelizable and well designed for single instruction, multiple data (SIMD) processing.

18 More on the Transition Kernel in the Mutation Step
Transition kernel K_n(θ|\hat{θ}_{n-1}; ζ_n): generated by running M steps of a Metropolis-Hastings algorithm.
Lessons from DSGE model MCMC: blocking of parameters can reduce the persistence of the Markov chain; a mixture proposal density avoids getting stuck.
Blocking: partition the parameter vector θ_n into N_blocks equally sized blocks, denoted by θ_{n,b}, b = 1, ..., N_blocks. (We generate the blocks for n = 1, ..., N_φ randomly prior to running the SMC algorithm.)
Example: mixture proposal density (a sketch of a draw from it follows below):
ϑ_b | (θ_{n,b,m-1}^i, θ_{n,-b,m}^i, θ*_{n,b}, Σ*_{n,b}) ~
α N( θ_{n,b,m-1}^i, c_n^2 Σ*_{n,b} )
+ (1-α)/2 N( θ*_{n,b}, c_n^2 Σ*_{n,b} )
+ (1-α)/2 N( θ_{n,b,m-1}^i, c_n^2 diag(Σ*_{n,b}) ).
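A minimal sketch of one draw from a proposal of this form; the helper name, the example block, and α = 0.9 are illustrative, not the slides' settings. Note that the MH acceptance ratio must evaluate the full mixture density, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)

def mixture_proposal(theta_b, theta_star_b, Sigma_b, c, alpha=0.9):
    """Draw a proposal for one parameter block (hypothetical helper).

    With prob. alpha: random walk with full covariance c^2 * Sigma_b;
    with prob. (1 - alpha)/2: independence draw centered at theta_star_b;
    with prob. (1 - alpha)/2: random walk with diagonalized covariance.
    """
    u = rng.uniform()
    if u < alpha:
        mean, cov = theta_b, c**2 * Sigma_b
    elif u < alpha + (1 - alpha) / 2:
        mean, cov = theta_star_b, c**2 * Sigma_b
    else:
        mean, cov = theta_b, c**2 * np.diag(np.diag(Sigma_b))
    return rng.multivariate_normal(mean, cov)

# Example: one block of two parameters.
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
print(mixture_proposal(np.array([0.5, 0.5]), np.array([0.4, 0.6]), Sigma, c=1.0))
```

The independence component centered at θ*_{n,b} is what lets a stuck particle jump between distant posterior modes.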

19 Adaptive Choice of ζ_n = (θ*_n, Σ*_n, c_n)
Infeasible adaption: let θ*_n = E_{π_n}[θ] and Σ*_n = V_{π_n}[θ]. Adjust the scaling factor according to
c_n = c_{n-1} f( 1 - R_{n-1}(ζ_{n-1}) ),
where R_{n-1}(·) is the population rejection rate from iteration n-1 and
f(x) = 0.95 + 0.10 e^{16(x-0.25)} / (1 + e^{16(x-0.25)}).
Feasible adaption: use output from stage n-1 to replace ζ_n by \hat{ζ}_n:
use particle approximations of E_{π_n}[θ] and V_{π_n}[θ] based on {θ_{n-1}^i, \tilde{W}_n^i}_{i=1}^N;
use the actual rejection rate from stage n-1 to calculate \hat{c}_n = \hat{c}_{n-1} f( 1 - \hat{R}_{n-1}(\hat{ζ}_{n-1}) ).
Result: under suitable regularity conditions, replacing ζ_n by \hat{ζ}_n, where √N(\hat{ζ}_n - ζ_n) = O_p(1), does not affect the asymptotic variance of the MC approximation.
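The update is a one-liner in code. The snippet below (with illustrative rejection rates) shows how f nudges the scale toward the 0.25 target acceptance rate noted on the next slide, since f(0.25) = 1.

```python
import numpy as np

def f(x):
    """Scaling adjustment from the slide: f(0.25) = 1 and f is increasing,
    so acceptance above (below) 0.25 scales the proposal up (down)."""
    return 0.95 + 0.10 * np.exp(16 * (x - 0.25)) / (1 + np.exp(16 * (x - 0.25)))

# Feasible adaption: update c from the observed rejection rate R_hat of the
# previous stage; the argument of f is the acceptance rate 1 - R_hat.
c = 0.5
for R_hat in [0.90, 0.80, 0.72, 0.74, 0.75]:   # hypothetical rejection rates
    c = c * f(1 - R_hat)
    print(round(c, 4))
```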

20 Adaption of the SMC Algorithm for the Stylized State-Space Model
[Figure: acceptance rate, scale parameter c, and effective sample size as functions of the stage n.]
Notes: The dashed line in the top panel indicates the target acceptance rate of 0.25.

21 Convergence of the SMC Approximation for the Stylized State-Space Model
[Figure: N·V[\bar{θ}_1] and N·V[\bar{θ}_2] as functions of the number of particles N.]
Notes: The figure shows N·V[\bar{θ}_j] for each parameter as a function of the number of particles N. V[\bar{θ}_j] is computed based on N_run = 1,000 runs of the SMC algorithm with N_φ = 100. The width of the bands is (2 × 1.96) √(3/N_run) (N·V[\bar{θ}_j]).

22 More on Resampling
So far, we have used multinomial resampling. It's fairly intuitive, and it is straightforward to obtain a CLT. But multinomial resampling is not particularly efficient.
The book contains a section on alternative resampling schemes (stratified resampling, residual resampling, ...). These alternative techniques are designed to achieve a variance reduction.
Most resampling algorithms are not parallelizable because they rely on the normalized particle weights.
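As one example of the variance-reducing alternatives mentioned above, here is a common sketch of systematic resampling, a close cousin of stratified resampling: it replaces N independent uniforms with a single uniform and N evenly spaced pointers.

```python
import numpy as np

def systematic_resample(W, rng):
    """Systematic resampling: one uniform draw, N evenly spaced pointers.

    A lower-variance alternative to multinomial resampling; a sketch of one
    of the schemes alluded to on the slide (see the book for details)."""
    N = len(W)
    positions = (rng.uniform() + np.arange(N)) / N   # evenly spaced pointers
    cumsum = np.cumsum(W / W.sum())
    return np.searchsorted(cumsum, positions)        # surviving particle indices

rng = np.random.default_rng(3)
W = np.array([0.1, 0.4, 0.2, 0.3])
print(systematic_resample(W, rng))   # e.g. array([1, 1, 2, 3])
```

Each particle is copied either floor(N·W^i) or ceil(N·W^i) times, which is where the variance reduction relative to multinomial sampling comes from.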

23 Application 1: Small-Scale New Keynesian Model
We will take a look at the effect of various tuning choices on accuracy:
- Tempering schedule λ: λ = 1 is linear, λ > 1 is convex (see the schedule sketch below).
- Number of stages N_φ versus number of particles N.
- Number of blocks in the mutation step versus number of particles.
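The slides do not write out the schedule's functional form; a standard parameterization consistent with "λ = 1 is linear, λ > 1 is convex" is φ_n = (n/N_φ)^λ. A minimal sketch (the function name is ours):

```python
import numpy as np

def tempering_schedule(N_phi, lam):
    """phi_n = (n / N_phi)**lam: lam = 1 is linear; lam > 1 bends the
    schedule so that early stages add likelihood information slowly."""
    n = np.arange(N_phi + 1)
    return (n / N_phi) ** lam

print(tempering_schedule(10, 1.0))   # linear
print(tempering_schedule(10, 2.0))   # convex: smaller early increments
```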

24 Effect of λ on Inefficiency Factors
[Figure: InEff_N[\bar{θ}] as a function of λ.]
Notes: The figure depicts hairs of InEff_N[\bar{θ}] as functions of λ. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. Each hair corresponds to a DSGE model parameter.

25 Number of Stages N_φ versus Number of Particles N
[Figure: inefficiency factors by parameter (τ, κ, ψ_1, ψ_2, ρ_r, ρ_g, ρ_z, r^{(A)}, π^{(A)}, γ^{(Q)}, σ_r, σ_g, σ_z) for the configurations N_φ = 400, N = 250; N_φ = 200, N = 500; N_φ = 100, N = 1000; N_φ = 50, N = 2000; N_φ = 25, N = 4000.]
Notes: Plot of V[\bar{θ}]/V_π[θ] for specific configurations of the SMC algorithm. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. N_blocks = 1, λ = 2, N_MH = 1.

26 Number of Blocks N_blocks in the Mutation Step versus Number of Particles N
[Figure: inefficiency factors by parameter (τ, κ, ψ_1, ψ_2, ρ_r, ρ_g, ρ_z, r^{(A)}, π^{(A)}, γ^{(Q)}, σ_r, σ_g, σ_z) for the configurations N_blocks = 4, N = 250; N_blocks = 2, N = 500; N_blocks = 1, N = 1000.]
Notes: Plot of V[\bar{θ}]/V_π[θ] for specific configurations of the SMC algorithm. The inefficiency factors are computed based on N_run = 50 runs of the SMC algorithm. N_φ = 100, λ = 2, N_MH = 1.

27 Marginal Likelihood Approximation
Recall \tilde{w}_n^i = [p(Y|θ_{n-1}^i)]^{φ_n - φ_{n-1}}. Then
(1/N) Σ_{i=1}^N \tilde{w}_n^i W_{n-1}^i ≈ ∫ [p(Y|θ)]^{φ_n - φ_{n-1}} · [ [p(Y|θ)]^{φ_{n-1}} p(θ) / ∫ [p(Y|θ)]^{φ_{n-1}} p(θ) dθ ] dθ = ∫ [p(Y|θ)]^{φ_n} p(θ) dθ / ∫ [p(Y|θ)]^{φ_{n-1}} p(θ) dθ.
Thus,
Π_{n=1}^{N_φ} [ (1/N) Σ_{i=1}^N \tilde{w}_n^i W_{n-1}^i ] ≈ ∫ p(Y|θ) p(θ) dθ.
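In logs the telescoping product becomes a running sum, which is how the log_mdd accumulator in the sketch after slide 12 works. A stand-alone version (the helper name is ours):

```python
import numpy as np

def update_log_mdd(log_mdd, log_incr_w, W_prev):
    """One stage of ln p_hat(Y):
    log_mdd += ln( (1/N) * sum_i wtilde_n^i * W_{n-1}^i ),
    computed in logs for numerical stability."""
    log_terms = log_incr_w + np.log(W_prev)
    m = log_terms.max()
    return log_mdd + m + np.log(np.mean(np.exp(log_terms - m)))
```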

28 SMC Marginal Data Density Estimates

                 N_φ = 100                          N_φ = 400
N        Mean(ln \hat{p}(Y))  SD(ln \hat{p}(Y))  Mean(ln \hat{p}(Y))  SD(ln \hat{p}(Y))
...             ...               (3.18)              ...               (0.20)
1,000           ...               (1.98)              ...               (0.14)
2,000           ...               (1.65)              ...               (0.12)
4,000           ...               (0.92)              ...               (0.07)

Notes: The table shows the mean and standard deviation of log marginal data density estimates as a function of the number of particles N, computed over N_run = 50 runs of the SMC sampler with N_blocks = 4, λ = 2, and N_MH = 1.

29 Application 2: Estimation of the Smets and Wouters (2007) Model
Benchmark macro model that has been estimated many (many) times; the core of many larger-scale models. 36 estimated parameters.
RWMH: 10 million draws (5 million discarded); SMC: 500 stages with 12,000 particles. We run the RWMH (using a particular version of parallelized MCMC) and the SMC algorithm on 24 processors for the same amount of time.
We estimate the SW model twenty times using RWMH and SMC and get essentially identical results.

30 Application 2: Estimation of the Smets and Wouters (2007) Model
A more interesting question: how does the quality of posterior simulators change as one makes the priors more diffuse? Replace Beta by Uniform distributions; increase the variances of parameters with Gamma and Normal priors by a factor of 3.

31 SW Model with DIFFUSE Prior: Estimation Stability, RWMH (black) versus SMC (red)
[Figure: posterior density estimates across repeated runs for \bar{l}, ι_w, μ_p, μ_w, ρ_w, ξ_w, r_π.]

32 A Measure of the Effective Number of Draws
Suppose we could generate N_eff iid draws from the posterior; then
\hat{E}_π[θ] ≈ N( E_π[θ], (1/N_eff) V_π[θ] ).
We can measure the variance of \hat{E}_π[θ] by running the SMC and RWMH algorithms repeatedly. Then
N_eff ≈ V_π[θ] / V[ \hat{E}_π[θ] ].
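A small sketch of this calculation; the run means and posterior variance below are made-up stand-ins for repeated-sampler output.

```python
import numpy as np

def effective_draws(run_means, posterior_var):
    """N_eff ~ V_pi[theta] / V[E_hat_pi[theta]], estimated from the
    posterior means produced by repeated runs of a sampler."""
    return posterior_var / np.var(run_means, ddof=1)

# Hypothetical example: 20 runs whose means scatter around the truth.
rng = np.random.default_rng(4)
means = 1.0 + 0.05 * rng.normal(size=20)   # stand-in for repeated-run output
print(effective_draws(means, posterior_var=0.25))   # roughly 100 here
```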

33 Effective Number of Draws
[Table: posterior Mean, STD(Mean), and N_eff under SMC and RWMH for the parameters σ_l, \bar{l}, ι_p, h, Φ, r_π, ρ_b, φ, σ_p, ξ_p, ι_w, μ_p, ρ_w, μ_w, ξ_w.]

34 A Closer Look at the Posterior: Two Modes
[Table: values of ξ_w, ι_w, ρ_w, μ_w and the log posterior at Mode 1 and Mode 2.]
Mode 1 implies that wage persistence is driven by extremely persistent exogenous wage markup shocks. Mode 2 implies that wage persistence is driven by endogenous amplification of shocks through the wage Calvo and indexation parameters. SMC is able to capture the two modes.

35 A Closer Look at the Posterior: Internal (ξ_w) versus External (ρ_w) Propagation
[Figure: joint posterior draws of ρ_w and ξ_w.]

36 Stability of Posterior Computations: RWMH (black) versus SMC (red)
[Figure: repeated-run estimates of P(ξ_w > ρ_w), P(ρ_w > μ_w), P(ξ_w > μ_w), P(ξ_p > ρ_p), P(ρ_p > μ_p), P(ξ_p > μ_p).]

37 Marginal Data Density
Bayesian model evaluation is based on the marginal data density p_n(Y) = ∫ [p(Y|θ)]^{φ_n} p(θ) dθ. Recall that the marginal data density is a by-product of the importance sampler:
\hat{p}_n(Y) = \hat{Z}_n = Π_{j=1}^n [ (1/N) Σ_{i=1}^N \tilde{w}_j^i W_{j-1}^i ].
[Table: MDD estimate and standard deviation for SMC (particle estimate) and RWMH (harmonic mean).]

38 Application 3: News Shock Model of Schmitt-Grohe and Uribe (2012)
Real business cycle model with investment adjustment costs, variable capacity utilization, and Jaimovich-Rebelo (2009) preferences; permanent and stationary neutral and investment-specific technology shocks, and government spending, wage markup, and preference shocks.
Data: real GDP, consumption, investment, government spending, hours, TFP, and price of investment growth rates, 1955:Q2-2006:Q4.

39 Key Model Features
News shocks: each of the 7 exogenous processes has two anticipated components:
z_t = ρ z_{t-1} + ε^0_t (unanticipated) + ε^4_{t-4} (anticipated 4 quarters ahead) + ε^8_{t-8} (anticipated 8 quarters ahead).
Jaimovich-Rebelo preferences: households have CRRA preferences over the consumption bundle V_t:
V_t = C_t - bC_{t-1} - ψ H_t^θ S_t,
where S_t is a geometric average of current and past habit-adjusted consumption levels:
S_t = (C_t - bC_{t-1})^γ (S_{t-1})^{1-γ}.
γ → 0: GHH preferences; the wealth effect on labor is small. γ → 1: standard preferences.
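An illustrative simulation of the timing in one such process (ρ and the unit shock scales are arbitrary here): the 4- and 8-quarter news components are drawn early but only move z_t with a delay.

```python
import numpy as np

rng = np.random.default_rng(5)

# z_t = rho*z_{t-1} + eps0_t + eps4_{t-4} + eps8_{t-8}: agents observe the
# anticipated components 4 and 8 quarters before they affect z_t.
T, rho = 200, 0.9
eps0 = rng.normal(0.0, 1.0, T)
eps4 = rng.normal(0.0, 1.0, T)
eps8 = rng.normal(0.0, 1.0, T)
z = np.zeros(T)
for t in range(1, T):
    news4 = eps4[t - 4] if t >= 4 else 0.0
    news8 = eps8[t - 8] if t >= 8 else 0.0
    z[t] = rho * z[t - 1] + eps0[t] + news4 + news8
```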

40 Effective Number of Draws
[Table: posterior Mean, STD(Mean), and N_eff under SMC and RWMH for selected parameters, including θ and the standard deviations of unanticipated and anticipated shocks (e.g. σ^0_{z^i}, σ^4_{z^i}, σ^8_{z^i}, σ^4_g, σ^8_g, σ^0_μ, σ^4_μ, σ^8_μ, σ^4_ζ, σ^8_ζ).]

41 Histogram of Wage Markup Process Parameters: RWMH (black) versus SMC (red)
[Figure: histograms of ρ_μ, σ^0_μ, σ^4_μ, σ^8_μ.]

42 Histogram of Preference Parameter γ: RWMH (black) versus SMC (red)
[Figure: histograms of γ.]
γ ≈ 0.6 implies that the importance of anticipated shocks for hours is substantially diminished.

43 An Alternative Prior for γ
SGU use a uniform prior for γ. We now change the prior to Beta(2, 1), whose density, 2γ, is a straight line on [0, 1] that puts more mass on larger values of γ.

44 Histogram of Preference Parameter γ, Alternative Prior: RWMH (black) versus SMC (red)
[Figure: histograms of γ.]

45 Anticipated Shocks' Variance Shares for Hours: RWMH (black) versus SMC (red)
[Figure: histograms of the anticipated-shock variance share for hours under the SGU prior and under the Beta(2, 1) prior.]
