Sequential Monte Carlo Methods (for DSGE Models)
Slide 1: Sequential Monte Carlo Methods (for DSGE Models)
Frank Schorfheide, University of Pennsylvania, PIER, CEPR, and NBER
October 23, 2017
Slide 2: Some References

These lectures use material from our joint work:
- Tempered Particle Filtering, 2016, PIER Working Paper
- Bayesian Estimation of DSGE Models, 2015, Princeton University Press
- Sequential Monte Carlo Sampling for DSGE Models, 2014, Journal of Econometrics
Slide 3: Sequential Monte Carlo (SMC) Methods

SMC can help to
- approximate the posterior of $\theta$ (Lecture 1): Chopin (2002), ..., Durham and Geweke (2013), Creal (2007), Herbst and Schorfheide (2014)
- approximate the likelihood function via particle filtering (Lecture 2): Gordon, Salmond, and Smith (1993), ..., Fernandez-Villaverde and Rubio-Ramirez (2007)
- or both, SMC$^2$: Chopin, Jacob, and Papaspiliopoulos (2012), ..., Herbst and Schorfheide (2015)
Slide 4: Lecture 2
Slide 5: Filtering, General Idea

State-space representation of a nonlinear DSGE model:
- Measurement equation: $y_t = \Psi(s_t, t; \theta) + u_t$, $u_t \sim F_u(\cdot; \theta)$
- State transition: $s_t = \Phi(s_{t-1}, \epsilon_t; \theta)$, $\epsilon_t \sim F_\epsilon(\cdot; \theta)$

Likelihood function:
$p(Y_{1:T} \mid \theta) = \prod_{t=1}^{T} p(y_t \mid Y_{1:t-1}, \theta)$

A filter generates a sequence of conditional distributions $s_t \mid Y_{1:t}$. Iterations:
1. Initialization at time $t-1$: $p(s_{t-1} \mid Y_{1:t-1}, \theta)$.
2. Forecasting $t$ given $t-1$:
   (a) Transition equation: $p(s_t \mid Y_{1:t-1}, \theta) = \int p(s_t \mid s_{t-1}, Y_{1:t-1}, \theta)\, p(s_{t-1} \mid Y_{1:t-1}, \theta)\, ds_{t-1}$
   (b) Measurement equation: $p(y_t \mid Y_{1:t-1}, \theta) = \int p(y_t \mid s_t, Y_{1:t-1}, \theta)\, p(s_t \mid Y_{1:t-1}, \theta)\, ds_t$
3. Updating with Bayes theorem, once $y_t$ becomes available:
   $p(s_t \mid Y_{1:t}, \theta) = p(s_t \mid y_t, Y_{1:t-1}, \theta) = \dfrac{p(y_t \mid s_t, Y_{1:t-1}, \theta)\, p(s_t \mid Y_{1:t-1}, \theta)}{p(y_t \mid Y_{1:t-1}, \theta)}$
Slide 6: Conditional Distributions for the Kalman Filter (Linear Gaussian State-Space Model)

Distributions:
- $s_{t-1} \mid (Y_{1:t-1}, \theta) \sim N(\bar{s}_{t-1|t-1}, P_{t-1|t-1})$
- $s_t \mid (Y_{1:t-1}, \theta) \sim N(\bar{s}_{t|t-1}, P_{t|t-1})$
- $y_t \mid (Y_{1:t-1}, \theta) \sim N(\bar{y}_{t|t-1}, F_{t|t-1})$
- $s_t \mid (Y_{1:t}, \theta) \sim N(\bar{s}_{t|t}, P_{t|t})$

Means and variances, given $(\bar{s}_{t-1|t-1}, P_{t-1|t-1})$ from iteration $t-1$:
$\bar{s}_{t|t-1} = \Phi_1 \bar{s}_{t-1|t-1}$
$P_{t|t-1} = \Phi_1 P_{t-1|t-1} \Phi_1' + \Phi_\epsilon \Sigma_\epsilon \Phi_\epsilon'$
$\bar{y}_{t|t-1} = \Psi_0 + \Psi_1 t + \Psi_2 \bar{s}_{t|t-1}$
$F_{t|t-1} = \Psi_2 P_{t|t-1} \Psi_2' + \Sigma_u$
$\bar{s}_{t|t} = \bar{s}_{t|t-1} + P_{t|t-1} \Psi_2' F_{t|t-1}^{-1} (y_t - \bar{y}_{t|t-1})$
$P_{t|t} = P_{t|t-1} - P_{t|t-1} \Psi_2' F_{t|t-1}^{-1} \Psi_2 P_{t|t-1}$
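The forecasting and updating recursions above translate directly into code. A minimal NumPy sketch of one Kalman iteration, in the slide's notation (the function name and argument layout are illustrative, not from any particular library):

```python
import numpy as np

def kalman_step(s_prev, P_prev, y, t, Phi1, Phie, Sig_e, Psi0, Psi1, Psi2, Sig_u):
    """One forecasting/updating iteration of the Kalman filter for the
    linear Gaussian state-space model on this slide."""
    # Forecast the state: s_{t|t-1}, P_{t|t-1}
    s_pred = Phi1 @ s_prev
    P_pred = Phi1 @ P_prev @ Phi1.T + Phie @ Sig_e @ Phie.T
    # Forecast the observation: y_{t|t-1}, F_{t|t-1}
    y_pred = Psi0 + Psi1 * t + Psi2 @ s_pred
    F_pred = Psi2 @ P_pred @ Psi2.T + Sig_u
    # Log predictive density ln p(y_t | Y_{1:t-1}) (Gaussian)
    n = y.shape[0]
    err = y - y_pred
    Finv = np.linalg.inv(F_pred)
    loglik = -0.5 * (n * np.log(2 * np.pi) + np.log(np.linalg.det(F_pred))
                     + err @ Finv @ err)
    # Update: s_{t|t}, P_{t|t}
    K = P_pred @ Psi2.T @ Finv          # Kalman gain
    s_filt = s_pred + K @ err
    P_filt = P_pred - K @ Psi2 @ P_pred
    return s_filt, P_filt, loglik
```

Running the step over $t = 1, \ldots, T$ and summing the `loglik` terms reproduces $\ln p(Y_{1:T} \mid \theta)$ for the linear Gaussian model.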
Slide 7: Approximating the Likelihood Function

DSGE models are inherently nonlinear. Sometimes linear approximations are sufficiently accurate, but in other applications nonlinearities may be important: asset pricing; borrowing constraints; the zero lower bound on nominal interest rates; etc. A nonlinear state-space representation requires a nonlinear filter:
$y_t = \Psi(s_t, t; \theta) + u_t$, $u_t \sim F_u(\cdot; \theta)$
$s_t = \Phi(s_{t-1}, \epsilon_t; \theta)$, $\epsilon_t \sim F_\epsilon(\cdot; \theta)$
Slide 8: Particle Filters

There are many particle filters. We will focus on three types:
- the bootstrap PF
- a generic PF
- a conditionally-optimal PF
Slide 9: Bootstrap Particle Filter

1. Initialization. Draw the initial particles from the distribution $s_0^j \overset{iid}{\sim} p(s_0)$ and set $W_0^j = 1$, $j = 1, \ldots, M$.
2. Recursion. For $t = 1, \ldots, T$:
   1. Forecasting $s_t$. Propagate the period $t-1$ particles $\{s_{t-1}^j, W_{t-1}^j\}$ by iterating the state-transition equation forward:
      $\tilde{s}_t^j = \Phi(s_{t-1}^j, \epsilon_t^j; \theta), \quad \epsilon_t^j \sim F_\epsilon(\cdot; \theta). \quad (1)$
      An approximation of $E[h(s_t) \mid Y_{1:t-1}, \theta]$ is given by
      $\hat{h}_{t,M} = \frac{1}{M} \sum_{j=1}^{M} h(\tilde{s}_t^j) W_{t-1}^j. \quad (2)$
Slide 10: Bootstrap Particle Filter

1. Initialization.
2. Recursion. For $t = 1, \ldots, T$:
   1. Forecasting $s_t$.
   2. Forecasting $y_t$. Define the incremental weights
      $\tilde{w}_t^j = p(y_t \mid \tilde{s}_t^j, \theta). \quad (3)$
      The predictive density $p(y_t \mid Y_{1:t-1}, \theta)$ can be approximated by
      $\hat{p}(y_t \mid Y_{1:t-1}, \theta) = \frac{1}{M} \sum_{j=1}^{M} \tilde{w}_t^j W_{t-1}^j. \quad (4)$
      If the measurement errors are $N(0, \Sigma_u)$, then the incremental weights take the form
      $\tilde{w}_t^j = (2\pi)^{-n/2} |\Sigma_u|^{-1/2} \exp\left\{ -\tfrac{1}{2} \big(y_t - \Psi(\tilde{s}_t^j, t; \theta)\big)' \Sigma_u^{-1} \big(y_t - \Psi(\tilde{s}_t^j, t; \theta)\big) \right\}, \quad (5)$
      where $n$ here denotes the dimension of $y_t$.
Slide 11: Bootstrap Particle Filter

1. Initialization.
2. Recursion. For $t = 1, \ldots, T$:
   1. Forecasting $s_t$.
   2. Forecasting $y_t$. Define the incremental weights
      $\tilde{w}_t^j = p(y_t \mid \tilde{s}_t^j, \theta). \quad (6)$
   3. Updating. Define the normalized weights
      $\tilde{W}_t^j = \frac{\tilde{w}_t^j W_{t-1}^j}{\frac{1}{M} \sum_{j=1}^{M} \tilde{w}_t^j W_{t-1}^j}. \quad (7)$
      An approximation of $E[h(s_t) \mid Y_{1:t}, \theta]$ is given by
      $\tilde{h}_{t,M} = \frac{1}{M} \sum_{j=1}^{M} h(\tilde{s}_t^j) \tilde{W}_t^j. \quad (8)$
Slide 12: Bootstrap Particle Filter

1. Initialization.
2. Recursion. For $t = 1, \ldots, T$:
   1. Forecasting $s_t$.
   2. Forecasting $y_t$.
   3. Updating.
   4. Selection (Optional). Resample the particles via multinomial resampling. Let $\{s_t^j\}_{j=1}^{M}$ denote $M$ iid draws from a multinomial distribution characterized by support points and weights $\{\tilde{s}_t^j, \tilde{W}_t^j\}$, and set $W_t^j = 1$ for $j = 1, \ldots, M$. An approximation of $E[h(s_t) \mid Y_{1:t}, \theta]$ is given by
      $\bar{h}_{t,M} = \frac{1}{M} \sum_{j=1}^{M} h(s_t^j) W_t^j. \quad (9)$
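The four recursion steps above fit in a few lines of code. A stylized scalar implementation as a sketch (function names and interface are illustrative, not the lecture's code):

```python
import numpy as np

def bootstrap_pf(y, Phi, Psi, sigma_e, Sigma_u, M, s0_draw, rng):
    """Bootstrap PF for a scalar model s_t = Phi(s_{t-1}) + sigma_e * eps_t,
    y_t = Psi(s_t) + u_t with u_t ~ N(0, Sigma_u). Returns ln p-hat(Y | theta)."""
    s = s0_draw(M, rng)                    # 1. Initialization: draws from p(s_0)
    W = np.ones(M)                         # weights normalized to average one
    loglik = 0.0
    for t in range(len(y)):
        # 2.1 Forecasting s_t: iterate the state transition forward
        s = Phi(s) + sigma_e * rng.standard_normal(M)
        # 2.2 Forecasting y_t: incremental weights w_t^j = p(y_t | s_t^j)
        w = np.exp(-0.5 * (y[t] - Psi(s)) ** 2 / Sigma_u) / np.sqrt(2 * np.pi * Sigma_u)
        incr = np.mean(w * W)              # hat-p(y_t | Y_{1:t-1}), eq. (4)
        loglik += np.log(incr)
        # 2.3 Updating: normalized weights, eq. (7)
        W = w * W / incr
        # 2.4 Selection: multinomial resampling, then reset weights to one
        idx = rng.choice(M, size=M, p=W / W.sum())
        s, W = s[idx], np.ones(M)
    return loglik
```

For a linear Gaussian choice of `Phi` and `Psi`, the returned value can be checked against the exact Kalman-filter log likelihood.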
Slide 13: Likelihood Approximation

The approximation of the log-likelihood function is given by
$\ln \hat{p}(Y_{1:T} \mid \theta) = \sum_{t=1}^{T} \ln \left( \frac{1}{M} \sum_{j=1}^{M} \tilde{w}_t^j W_{t-1}^j \right). \quad (10)$
One can show that the approximation of the likelihood function is unbiased. By Jensen's inequality, this implies that the approximation of the log-likelihood function is downward biased.
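The downward bias in logs follows from $E[\ln \hat{p}] \le \ln E[\hat{p}]$. A quick simulated illustration with artificial unbiased multiplicative noise standing in for particle-filter noise (all numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
# Pretend the true likelihood is 1.0 and the filter returns unbiased
# estimates with lognormal multiplicative noise: E[noise] = exp(-0.5 + 0.5) = 1.
noise = rng.lognormal(mean=-0.5, sigma=1.0, size=100_000)
p_hat = 1.0 * noise
log_of_mean = np.log(p_hat.mean())   # close to ln E[p_hat] = ln 1 = 0
mean_of_log = np.log(p_hat).mean()   # close to E[ln p_hat] = -0.5, i.e. biased down
```

Despite the level estimates being exactly unbiased, the average of their logs sits well below the log of the true likelihood.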
Slide 14: The Role of Measurement Errors

Measurement errors may not be intrinsic to the DSGE model. The bootstrap filter needs a non-degenerate $p(y_t \mid s_t, \theta)$ for the incremental weights to be well defined. Decreasing the measurement-error variance $\Sigma_u$, holding everything else fixed, increases the variance of the particle weights and reduces the accuracy of the Monte Carlo approximation.
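The effect of shrinking $\Sigma_u$ on weight inequality is easy to see numerically. A stylized one-period check (not a full filter; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
s_fore = rng.standard_normal(10_000)   # forecasted particles for s_t
y = 0.5                                # current observation, y_t = s_t + u_t

def weight_variance(sigma_u2):
    """Variance of normalized bootstrap-PF weights w^j proportional to
    p(y | s^j) when u_t ~ N(0, sigma_u2)."""
    w = np.exp(-0.5 * (y - s_fore) ** 2 / sigma_u2)
    w /= w.mean()                      # normalize weights to average one
    return w.var()
```

As the measurement-error variance falls, only particles very close to the observation receive weight, so the normalized weights become increasingly unequal.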
Slide 15: Generic Particle Filter

1. Initialization. Same as the bootstrap PF.
2. Recursion. For $t = 1, \ldots, T$:
   1. Forecasting $s_t$. Draw $\tilde{s}_t^j$ from density $g_t(\tilde{s}_t \mid s_{t-1}^j, \theta)$ and define
      $\omega_t^j = \frac{p(\tilde{s}_t^j \mid s_{t-1}^j, \theta)}{g_t(\tilde{s}_t^j \mid s_{t-1}^j, \theta)}. \quad (11)$
      An approximation of $E[h(s_t) \mid Y_{1:t-1}, \theta]$ is given by
      $\hat{h}_{t,M} = \frac{1}{M} \sum_{j=1}^{M} h(\tilde{s}_t^j) \omega_t^j W_{t-1}^j. \quad (12)$
   2. Forecasting $y_t$. Define the incremental weights
      $\tilde{w}_t^j = p(y_t \mid \tilde{s}_t^j, \theta)\, \omega_t^j. \quad (13)$
      The predictive density $p(y_t \mid Y_{1:t-1}, \theta)$ can be approximated by
      $\hat{p}(y_t \mid Y_{1:t-1}, \theta) = \frac{1}{M} \sum_{j=1}^{M} \tilde{w}_t^j W_{t-1}^j. \quad (14)$
   3. Updating. Same as the bootstrap PF.
   4. Selection. Same as the bootstrap PF.
3. Likelihood approximation. Same as the bootstrap PF.
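The correction $\omega_t^j = p/g$ in (11) is ordinary importance sampling: drawing from a proposal $g_t$ and reweighting by $p/g$ recovers expectations under the true transition density. A numerical check for a single period-$(t-1)$ particle and a toy AR(1) transition (densities hard-coded; everything here is illustrative):

```python
import numpy as np

def npdf(x, mu, var):
    """Univariate normal density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

rng = np.random.default_rng(3)
M = 200_000
s_prev = 1.0                           # a single period-(t-1) particle
# True transition p(s_t | s_{t-1}) = N(0.9 s_{t-1}, 1);
# proposal g_t(s_t | s_{t-1}) = N(0, 2), deliberately mis-centered.
s_prop = rng.normal(0.0, np.sqrt(2.0), size=M)
omega = npdf(s_prop, 0.9 * s_prev, 1.0) / npdf(s_prop, 0.0, 2.0)
# Importance-weighted mean recovers E[s_t | s_{t-1}] = 0.9 * s_prev = 0.9
est = np.mean(s_prop * omega)
```

The weights also average to one, reflecting that $p(\cdot \mid s_{t-1})$ integrates to one; in the filter this is what keeps the predictive-density estimate (14) unchanged by the choice of proposal.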
Slide 16: Asymptotics

The convergence results can be established recursively, starting from the assumption
$\tilde{h}_{t-1,M} \overset{a.s.}{\longrightarrow} E[h(s_{t-1}) \mid Y_{1:t-1}], \quad \sqrt{M}\big(\tilde{h}_{t-1,M} - E[h(s_{t-1}) \mid Y_{1:t-1}]\big) \Longrightarrow N\big(0, \tilde{\Omega}_{t-1}(h)\big).$
Forward iteration: draw $\tilde{s}_t$ from $g_t(s_t \mid s_{t-1}^j) = p(s_t \mid s_{t-1}^j)$. Decompose
$\hat{h}_{t,M} - E[h(s_t) \mid Y_{1:t-1}] = \frac{1}{M} \sum_{j=1}^{M} \big( h(\tilde{s}_t^j) - E_{p(\cdot \mid s_{t-1}^j)}[h] \big) W_{t-1}^j + \frac{1}{M} \sum_{j=1}^{M} \big( E_{p(\cdot \mid s_{t-1}^j)}[h]\, W_{t-1}^j - E[h(s_t) \mid Y_{1:t-1}] \big) = I + II. \quad (15)$
Both $I$ and $II$ converge to zero (and potentially satisfy a CLT).
Slide 17: Asymptotics

The updating step approximates
$E[h(s_t) \mid Y_{1:t}] = \frac{\int h(s_t)\, p(y_t \mid s_t)\, p(s_t \mid Y_{1:t-1})\, ds_t}{\int p(y_t \mid s_t)\, p(s_t \mid Y_{1:t-1})\, ds_t} \approx \frac{\frac{1}{M} \sum_{j=1}^{M} h(\tilde{s}_t^j)\, \tilde{w}_t^j W_{t-1}^j}{\frac{1}{M} \sum_{j=1}^{M} \tilde{w}_t^j W_{t-1}^j}. \quad (16)$
Define the normalized incremental weights as
$v_t(s_t) = \frac{p(y_t \mid s_t)}{\int p(y_t \mid s_t)\, p(s_t \mid Y_{1:t-1})\, ds_t}. \quad (17)$
Under suitable regularity conditions, the Monte Carlo approximation satisfies a CLT of the form
$\sqrt{M}\big(\tilde{h}_{t,M} - E[h(s_t) \mid Y_{1:t}]\big) \Longrightarrow N\big(0, \tilde{\Omega}_t(h)\big), \quad \tilde{\Omega}_t(h) = \hat{\Omega}_t\big( v_t(s_t)\,(h(s_t) - E[h(s_t) \mid Y_{1:t}]) \big). \quad (18)$
The distribution of the particle weights matters for accuracy! $\Longrightarrow$ Resampling!
Slide 18: Adapting the Generic PF

- Conditionally-optimal importance distribution: $g_t(\tilde{s}_t \mid s_{t-1}^j) = p(\tilde{s}_t \mid y_t, s_{t-1}^j)$. This is the posterior of $s_t$ given $s_{t-1}^j$. Typically infeasible, but a good benchmark.
- Approximately conditionally-optimal distributions: from a linearized version of the DSGE model or from approximate nonlinear filters.
- Conditionally-linear models: do Kalman-filter updating on a subvector of $s_t$. Example:
  $y_t = \Psi_0(m_t) + \Psi_1(m_t)\, t + \Psi_2(m_t)\, s_t + u_t, \quad u_t \sim N(0, \Sigma_u),$
  $s_t = \Phi_0(m_t) + \Phi_1(m_t)\, s_{t-1} + \Phi_\epsilon(m_t)\, \epsilon_t, \quad \epsilon_t \sim N(0, \Sigma_\epsilon),$
  where $m_t$ follows a discrete Markov-switching process.
Slide 19: Nonlinear and Partially Deterministic State Transitions

Example: $s_{1,t} = \Phi_1(s_{t-1}, \epsilon_t)$, $s_{2,t} = \Phi_2(s_{t-1})$, $\epsilon_t \sim N(0, 1)$.
- The generic filter requires evaluation of $p(s_t \mid s_{t-1})$.
- Define $\varsigma_t = [s_t', \epsilon_t']'$ and add the identity $\epsilon_t = \epsilon_t$ to the state transition.
- Factorize the density $p(\varsigma_t \mid \varsigma_{t-1})$ as
  $p(\varsigma_t \mid \varsigma_{t-1}) = p^\epsilon(\epsilon_t)\, p(s_{1,t} \mid s_{t-1}, \epsilon_t)\, p(s_{2,t} \mid s_{t-1}),$
  where $p(s_{1,t} \mid s_{t-1}, \epsilon_t)$ and $p(s_{2,t} \mid s_{t-1})$ are point masses.
- Sample the innovation $\epsilon_t$ from $g_t^\epsilon(\epsilon_t \mid s_{t-1})$. Then
  $\omega_t^j = \frac{p(\tilde{\varsigma}_t^j \mid \varsigma_{t-1}^j)}{g_t(\tilde{\varsigma}_t^j \mid \varsigma_{t-1}^j)} = \frac{p^\epsilon(\tilde{\epsilon}_t^j)\, p(\tilde{s}_{1,t}^j \mid s_{t-1}^j, \tilde{\epsilon}_t^j)\, p(\tilde{s}_{2,t}^j \mid s_{t-1}^j)}{g_t^\epsilon(\tilde{\epsilon}_t^j \mid s_{t-1}^j)\, p(\tilde{s}_{1,t}^j \mid s_{t-1}^j, \tilde{\epsilon}_t^j)\, p(\tilde{s}_{2,t}^j \mid s_{t-1}^j)} = \frac{p^\epsilon(\tilde{\epsilon}_t^j)}{g_t^\epsilon(\tilde{\epsilon}_t^j \mid s_{t-1}^j)}.$
Slide 20: Degenerate Measurement Error Distributions

Our discussion of the conditionally-optimal importance distribution suggests that in the absence of measurement errors, one has to solve the system of equations
$y_t = \Psi\big( \Phi(s_{t-1}^j, \epsilon_t^j) \big)$
to determine $\epsilon_t^j$ as a function of $s_{t-1}^j$ and the current observation $y_t$. Then define
$\omega_t^j = p^\epsilon(\epsilon_t^j), \quad s_t^j = \Phi(s_{t-1}^j, \epsilon_t^j).$
Difficulty: one has to find all solutions to a nonlinear system of equations. Moreover, while resampling duplicates particles, the duplicated particles do not mutate, which can lead to degeneracy.
Slide 21: Next Steps

We will now apply PFs to linearized DSGE models. This allows us to compare the Monte Carlo approximations to the truth:
- a small-scale New Keynesian DSGE model
- the Smets-Wouters model
Slide 22: Illustration 1: Small-Scale DSGE Model

[Table: parameter values $\theta^m$ and $\theta^l$ used for likelihood evaluation, covering $\tau$, $\kappa$, $\psi_1$, $\psi_2$, $\rho_r$, $\rho_g$, $\rho_z$, $r^{(A)}$, $\pi^{(A)}$, $\gamma^{(Q)}$, $\sigma_r$, $\sigma_g$, $\sigma_z$, and the implied $\ln p(Y \mid \theta)$.]
Slide 23: Likelihood Approximation

[Figure: $\ln \hat{p}(y_t \mid Y_{1:t-1}, \theta^m)$ vs. $\ln p(y_t \mid Y_{1:t-1}, \theta^m)$.]
Notes: The results depicted in the figure are based on a single run of the bootstrap PF (dashed, $M = 40{,}000$), the conditionally-optimal PF (dotted, $M = 400$), and the Kalman filter (solid).
Slide 24: Filtered State

[Figure: $\hat{E}[\hat{g}_t \mid Y_{1:t}, \theta^m]$ vs. $E[\hat{g}_t \mid Y_{1:t}, \theta^m]$.]
Notes: The results depicted in the figure are based on a single run of the bootstrap PF (dashed, $M = 40{,}000$), the conditionally-optimal PF (dotted, $M = 400$), and the Kalman filter (solid).
Slide 25: Distribution of Log-Likelihood Approximation Errors

[Figure: bootstrap PF, $\theta^m$ vs. $\theta^l$.]
Notes: Density estimate of $\hat{\Delta}_1 = \ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)$ based on $N_{run} = 100$ runs of the PF. The solid line is $\theta = \theta^m$; the dashed line is $\theta = \theta^l$ ($M = 40{,}000$).
Slide 26: Distribution of Log-Likelihood Approximation Errors

[Figure: $\theta^m$, bootstrap vs. conditionally-optimal PF.]
Notes: Density estimate of $\hat{\Delta}_1 = \ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)$ based on $N_{run} = 100$ runs of the PF. The solid line is the bootstrap particle filter ($M = 40{,}000$); the dotted line is the conditionally-optimal particle filter ($M = 400$).
Slide 27: Summary Statistics for Particle Filters

[Table: bias and standard deviation of $\hat{\Delta}_1$ and bias of $\hat{\Delta}_2$ for the bootstrap, conditionally-optimal, and auxiliary particle filters, both at high posterior density ($\theta = \theta^m$) and at low posterior density ($\theta = \theta^l$).]
Notes: $\hat{\Delta}_1 = \ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)$ and $\hat{\Delta}_2 = \exp[\ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)] - 1$. Results are based on $N_{run} = 100$ runs of the particle filters.
Slide 28: Great Recession and Beyond

[Figure: mean of log-likelihood increments $\ln \hat{p}(y_t \mid Y_{1:t-1}, \theta^m)$.]
Notes: Solid lines represent results from the Kalman filter. Dashed lines correspond to the bootstrap particle filter ($M = 40{,}000$) and dotted lines correspond to the conditionally-optimal particle filter ($M = 400$). Results are based on $N_{run} = 100$ runs of the filters.
Slide 29: Great Recession and Beyond

[Figure: mean of log-likelihood increments $\ln \hat{p}(y_t \mid Y_{1:t-1}, \theta^m)$, continued.]
Notes: Solid lines represent results from the Kalman filter. Dashed lines correspond to the bootstrap particle filter ($M = 40{,}000$) and dotted lines correspond to the conditionally-optimal particle filter ($M = 400$). Results are based on $N_{run} = 100$ runs of the filters.
Slide 30: Great Recession and Beyond

[Figure: log standard deviation of log-likelihood increments.]
Notes: Solid lines represent results from the Kalman filter. Dashed lines correspond to the bootstrap particle filter ($M = 40{,}000$) and dotted lines correspond to the conditionally-optimal particle filter ($M = 400$). Results are based on $N_{run} = 100$ runs of the filters.
Slide 31: SW Model: Distribution of Log-Likelihood Approximation Errors

[Figure: BS ($M = 40{,}000$) versus CO ($M = 4{,}000$).]
Notes: Density estimates of $\hat{\Delta}_1 = \ln \hat{p}(Y \mid \theta) - \ln p(Y \mid \theta)$ based on $N_{run} = 100$ runs. Solid densities summarize results for the bootstrap (BS) particle filter; dashed densities summarize results for the conditionally-optimal (CO) particle filter.
Slide 32: SW Model: Distribution of Log-Likelihood Approximation Errors

[Figure: BS ($M = 400{,}000$) versus CO ($M = 4{,}000$).]
Notes: Density estimates of $\hat{\Delta}_1 = \ln \hat{p}(Y \mid \theta) - \ln p(Y \mid \theta)$ based on $N_{run} = 100$ runs. Solid densities summarize results for the bootstrap (BS) particle filter; dashed densities summarize results for the conditionally-optimal (CO) particle filter.
Slide 33: SW Model: Summary Statistics for Particle Filters

[Table: bias and standard deviation of $\hat{\Delta}_1$ and bias of $\hat{\Delta}_2$ for the bootstrap and conditionally-optimal particle filters at several particle counts, both at high posterior density ($\theta = \theta^m$) and at low posterior density ($\theta = \theta^l$).]
Notes: $\hat{\Delta}_1 = \ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)$ and $\hat{\Delta}_2 = \exp[\ln \hat{p}(Y_{1:T} \mid \theta) - \ln p(Y_{1:T} \mid \theta)] - 1$. Results are based on $N_{run} = 100$ runs.
Slide 34: Embedding PF Likelihoods into Posterior Samplers

Likelihood functions for nonlinear DSGE models can be approximated by the PF. We will now embed the likelihood approximation into a posterior sampler: the PFMH algorithm (a special case of PMCMC). The book also discusses SMC$^2$.
Slide 35: Embedding PF Likelihoods into Posterior Samplers

Distinguish between:
- $\{p(Y \mid \theta), p(\theta \mid Y), p(Y)\}$, which are related according to
  $p(\theta \mid Y) = \frac{p(Y \mid \theta)\, p(\theta)}{p(Y)}, \quad p(Y) = \int p(Y \mid \theta)\, p(\theta)\, d\theta;$
- $\{\hat{p}(Y \mid \theta), \hat{p}(\theta \mid Y), \hat{p}(Y)\}$, which are related according to
  $\hat{p}(\theta \mid Y) = \frac{\hat{p}(Y \mid \theta)\, p(\theta)}{\hat{p}(Y)}, \quad \hat{p}(Y) = \int \hat{p}(Y \mid \theta)\, p(\theta)\, d\theta.$
Surprising result (Andrieu, Doucet, and Holenstein, 2010): under certain conditions we can replace $p(Y \mid \theta)$ by $\hat{p}(Y \mid \theta)$ and still obtain draws from $p(\theta \mid Y)$.
Slide 36: PFMH Algorithm

For $i = 1$ to $N$:
1. Draw $\vartheta$ from a density $q(\vartheta \mid \theta^{i-1})$.
2. Set $\theta^i = \vartheta$ with probability
   $\alpha(\vartheta \mid \theta^{i-1}) = \min\left\{ 1, \frac{\hat{p}(Y \mid \vartheta)\, p(\vartheta) / q(\vartheta \mid \theta^{i-1})}{\hat{p}(Y \mid \theta^{i-1})\, p(\theta^{i-1}) / q(\theta^{i-1} \mid \vartheta)} \right\}$
   and $\theta^i = \theta^{i-1}$ otherwise. The likelihood approximation $\hat{p}(Y \mid \vartheta)$ is computed using a particle filter.
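The algorithm above can be sketched compactly for a symmetric random-walk proposal. Here `loglik_hat` stands in for a particle-filter likelihood evaluation; all names are illustrative, and the one implementation detail that matters is that the noisy estimate for the incumbent draw is held fixed until a proposal is accepted:

```python
import numpy as np

def pfmh(loglik_hat, logprior, theta0, n_draws, step, rng):
    """Random-walk PFMH: loglik_hat(theta, rng) returns a *stochastic*
    log-likelihood estimate, e.g. from a particle filter."""
    draws = np.empty(n_draws)
    theta = theta0
    logpost = loglik_hat(theta, rng) + logprior(theta)   # estimate kept fixed
    for i in range(n_draws):
        prop = theta + step * rng.standard_normal()      # draw from q(.|theta)
        logpost_prop = loglik_hat(prop, rng) + logprior(prop)
        # alpha = min{1, p-hat(Y|prop)p(prop) / [p-hat(Y|theta)p(theta)]}
        # (the symmetric proposal densities cancel)
        if np.log(rng.uniform()) < logpost_prop - logpost:
            theta, logpost = prop, logpost_prop
        draws[i] = theta
    return draws
```

With a toy "likelihood" whose noise distribution does not depend on $\theta$, the chain still targets the correct posterior, which is the essence of the pseudo-marginal argument on the next slides.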
Slide 37: Why Does the PFMH Work?

At each iteration the filter generates draws $\tilde{s}_t^j$ from the proposal distribution $g_t(\cdot \mid s_{t-1}^j)$. Let $\tilde{S}_t = (\tilde{s}_t^1, \ldots, \tilde{s}_t^M)$ and denote the entire sequence of draws by $\tilde{S}_{1:T}^{1:M}$.
Selection step: define a random variable $A_t^j$ that contains the ancestry information. For instance, suppose that during the resampling particle $j = 1$ was assigned the value $\tilde{s}_t^{10}$; then $A_t^1 = 10$. Let $A_t = (A_t^1, \ldots, A_t^M)$ and use $A_{1:T}$ to denote the sequence of $A_t$'s.
The PFMH algorithm operates on an enlarged probability space: $\theta$, $\tilde{S}_{1:T}$, and $A_{1:T}$.
Slide 38: Why Does the PFMH Work?

Use $U_{1:T}$ to denote the random vectors underlying $\tilde{S}_{1:T}$ and $A_{1:T}$; $U_{1:T}$ is an array of iid uniform random numbers. The transformation of $U_{1:T}$ into $(\tilde{S}_{1:T}, A_{1:T})$ typically depends on $\theta$ and $Y_{1:T}$, because the proposal distribution $g_t(\tilde{s}_t \mid s_{t-1}^j)$ depends both on the current observation $y_t$ and on the parameter vector $\theta$. For example, implementation of the conditionally-optimal PF requires sampling from a $N(\bar{s}_{t|t}^j, P_{t|t})$ distribution for each particle $j$, which can be done using a probability integral transform of uniform random variables.
We can express the particle filter approximation of the likelihood function as
$\hat{p}(Y_{1:T} \mid \theta) = g(Y_{1:T} \mid \theta, U_{1:T}), \quad \text{where} \quad U_{1:T} \sim p(U_{1:T}) = \prod_{t=1}^{T} p(U_t).$
Slide 39: Why Does the PFMH Work?

Define the joint distribution
$p_g\big(Y_{1:T}, \theta, U_{1:T}\big) = g(Y_{1:T} \mid \theta, U_{1:T})\, p(U_{1:T})\, p(\theta).$
The PFMH algorithm samples from the joint posterior
$p_g\big(\theta, U_{1:T} \mid Y_{1:T}\big) \propto g(Y_{1:T} \mid \theta, U_{1:T})\, p(U_{1:T})\, p(\theta)$
and discards the draws of $U_{1:T}$. For this procedure to be valid, the PF approximation needs to be unbiased:
$E[\hat{p}(Y_{1:T} \mid \theta)] = \int g(Y_{1:T} \mid \theta, U_{1:T})\, p(U_{1:T})\, dU_{1:T} = p(Y_{1:T} \mid \theta).$
Slide 40: Why Does the PFMH Work?

We can express the acceptance probability directly in terms of $\hat{p}(Y_{1:T} \mid \theta)$. One needs to generate a proposed draw for both $\theta$ and $U_{1:T}$: $\vartheta$ and $U_{1:T}^*$. The proposal distribution for $(\vartheta, U_{1:T}^*)$ in the MH algorithm is given by $q(\vartheta \mid \theta^{(i-1)})\, p(U_{1:T}^*)$, so there is no need to keep track of the draws $U_{1:T}^*$. The MH acceptance probability is
$\alpha(\vartheta \mid \theta^{i-1}) = \min\left\{ 1, \frac{ g(Y \mid \vartheta, U^*)\, p(U^*)\, p(\vartheta) \big/ \big[ q(\vartheta \mid \theta^{(i-1)})\, p(U^*) \big] }{ g(Y \mid \theta^{(i-1)}, U^{(i-1)})\, p(U^{(i-1)})\, p(\theta^{(i-1)}) \big/ \big[ q(\theta^{(i-1)} \mid \vartheta)\, p(U^{(i-1)}) \big] } \right\} = \min\left\{ 1, \frac{\hat{p}(Y \mid \vartheta)\, p(\vartheta) / q(\vartheta \mid \theta^{(i-1)})}{\hat{p}(Y \mid \theta^{(i-1)})\, p(\theta^{(i-1)}) / q(\theta^{(i-1)} \mid \vartheta)} \right\}.$
Slide 41: Small-Scale DSGE: Accuracy of MH Approximations

Results are based on $N_{run} = 20$ runs of the PF-RWMH-V algorithm. Each run of the algorithm generates $N = 100{,}000$ draws, and the first $N_0 = 50{,}000$ are discarded. The likelihood function is computed with the Kalman filter (KF), the bootstrap particle filter (BS-PF, $M = 40{,}000$), or the conditionally-optimal particle filter (CO-PF, $M = 400$). "Pooled" means that we pool the draws from the $N_{run} = 20$ runs to compute posterior statistics.
Slide 42: Autocorrelation of PFMH Draws

[Figure: autocorrelation functions for $\tau$, $\kappa$, $\sigma_r$, $\rho_r$, $\psi_1$, $\psi_2$.]
Notes: The figure depicts autocorrelation functions computed from the output of the 1-Block RWMH-V algorithm based on the Kalman filter (solid), the conditionally-optimal particle filter (dashed), and the bootstrap particle filter (solid with dots).
Slide 43: Small-Scale DSGE: Accuracy of MH Approximations

[Table: pooled posterior means, inefficiency factors, and standard deviations of means under KF, CO-PF, and BS-PF for $\tau$, $\kappa$, $\psi_1$, $\psi_2$, $\rho_r$, $\rho_g$, $\rho_z$, $r^{(A)}$, $\pi^{(A)}$, $\gamma^{(Q)}$, $\sigma_r$, $\sigma_g$, $\sigma_z$, and $\ln \hat{p}(Y)$.]
Slide 44: SW Model: Accuracy of MH Approximations

Results are based on $N_{run} = 20$ runs of the PF-RWMH-V algorithm. Each run of the algorithm generates $N = 10{,}000$ draws. The likelihood function is computed with the Kalman filter (KF) or the conditionally-optimal particle filter (CO-PF). "Pooled" means that we pool the draws from the $N_{run} = 20$ runs to compute posterior statistics. The CO-PF uses $M = 40{,}000$ particles to compute the likelihood.
Slide 45: SW Model: Accuracy of MH Approximations

[Table: pooled posterior means, inefficiency factors, and standard deviations of means under KF and CO-PF for $100(\beta^{-1} - 1)$, $\bar{\pi}$, $\bar{l}$, $\alpha$, $\sigma_c$, $\Phi$, $\varphi$, $h$, $\xi_w$, $\sigma_l$, $\xi_p$, $\iota_w$, $\iota_p$, $\psi$, $r_\pi$, $\rho$, $r_y$, $r_{\Delta y}$.]
Slide 46: SW Model: Accuracy of MH Approximations

[Table: pooled posterior means, inefficiency factors, and standard deviations of means under KF and CO-PF for $\rho_a$, $\rho_b$, $\rho_g$, $\rho_i$, $\rho_r$, $\rho_p$, $\rho_w$, $\rho_{ga}$, $\mu_p$, $\mu_w$, $\sigma_a$, $\sigma_b$, $\sigma_g$, $\sigma_i$, $\sigma_r$, $\sigma_p$, $\sigma_w$, and $\ln \hat{p}(Y)$.]
Slide 47: Conclusion

A versatile set of tools:
- SMC can be used to compute Laplace-type estimators.
- Particle filters can be used for other nonlinear state-space models.
If you are interested in more: Tempered Particle Filtering presentation in the Econometrics Workshop on Wednesday.
Particle Filtering a brief introductory tutorial Frank Wood Gatsby, August 2007 Problem: Target Tracking A ballistic projectile has been launched in our direction and may or may not land near enough to
More informationWinter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling
Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 8: Importance Sampling 8.1 Importance Sampling Importance sampling
More informationEstimation of DSGE Models under Diffuse Priors and Data- Driven Identification Constraints. Markku Lanne and Jani Luoto
Estimation of DSGE Models under Diffuse Priors and Data- Driven Identification Constraints Markku Lanne and Jani Luoto CREATES Research Paper 2015-37 Department of Economics and Business Economics Aarhus
More informationMCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17
MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making
More informationA Note on Auxiliary Particle Filters
A Note on Auxiliary Particle Filters Adam M. Johansen a,, Arnaud Doucet b a Department of Mathematics, University of Bristol, UK b Departments of Statistics & Computer Science, University of British Columbia,
More informationSequential Bayesian Updating
BS2 Statistical Inference, Lectures 14 and 15, Hilary Term 2009 May 28, 2009 We consider data arriving sequentially X 1,..., X n,... and wish to update inference on an unknown parameter θ online. In a
More informationMonetary and Exchange Rate Policy Under Remittance Fluctuations. Technical Appendix and Additional Results
Monetary and Exchange Rate Policy Under Remittance Fluctuations Technical Appendix and Additional Results Federico Mandelman February In this appendix, I provide technical details on the Bayesian estimation.
More informationParticle Filters. Outline
Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical
More informationForecasting with DSGE Models: Theory and Practice
Forecasting with DSGE Models: Theory and Practice Marco Del Negro Federal Reserve Bank of New York Frank Schorfheide University of Pennsylvania, CEPR, NBER August, 2011 CIGS Conference on Macroeconomic
More informationPrinciples of Bayesian Inference
Principles of Bayesian Inference Sudipto Banerjee University of Minnesota July 20th, 2008 1 Bayesian Principles Classical statistics: model parameters are fixed and unknown. A Bayesian thinks of parameters
More informationDSGE Models in a Liquidity Trap and Japan s Lost Decade
DSGE Models in a Liquidity Trap and Japan s Lost Decade Koiti Yano Economic and Social Research Institute ESRI International Conference 2009 June 29, 2009 1 / 27 Definition of a Liquidity Trap Terminology
More informationMarkov Chain Monte Carlo Methods for Stochastic
Markov Chain Monte Carlo Methods for Stochastic Optimization i John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U Florida, Nov 2013
More informationBayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems
Bayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems John Bardsley, University of Montana Collaborators: H. Haario, J. Kaipio, M. Laine, Y. Marzouk, A. Seppänen, A. Solonen, Z.
More informationMonte Carlo Approximation of Monte Carlo Filters
Monte Carlo Approximation of Monte Carlo Filters Adam M. Johansen et al. Collaborators Include: Arnaud Doucet, Axel Finke, Anthony Lee, Nick Whiteley 7th January 2014 Context & Outline Filtering in State-Space
More informationEconometrics I, Estimation
Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the
More informationECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering
ECE276A: Sensing & Estimation in Robotics Lecture 10: Gaussian Mixture and Particle Filtering Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu Anwesan Pal:
More informationBayesian Model Comparison:
Bayesian Model Comparison: Modeling Petrobrás log-returns Hedibert Freitas Lopes February 2014 Log price: y t = log p t Time span: 12/29/2000-12/31/2013 (n = 3268 days) LOG PRICE 1 2 3 4 0 500 1000 1500
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing
More informationA Review of Pseudo-Marginal Markov Chain Monte Carlo
A Review of Pseudo-Marginal Markov Chain Monte Carlo Discussed by: Yizhe Zhang October 21, 2016 Outline 1 Overview 2 Paper review 3 experiment 4 conclusion Motivation & overview Notation: θ denotes the
More informationParticle Filtering Approaches for Dynamic Stochastic Optimization
Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,
More informationEstimation of moment-based models with latent variables
Estimation of moment-based models with latent variables work in progress Ra aella Giacomini and Giuseppe Ragusa UCL/Cemmap and UCI/Luiss UPenn, 5/4/2010 Giacomini and Ragusa (UCL/Cemmap and UCI/Luiss)Moments
More informationAn intro to particle methods for parameter inference in state-space models
An intro to particle methods for parameter inference in state-space models PhD course FMS020F NAMS002 Statistical inference for partially observed stochastic processes, Lund University http://goo.gl/sx8vu9
More informationLecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations
Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model
More informationFiltering and Likelihood Inference
Filtering and Likelihood Inference Jesús Fernández-Villaverde University of Pennsylvania July 10, 2011 Jesús Fernández-Villaverde (PENN) Filtering and Likelihood July 10, 2011 1 / 79 Motivation Introduction
More informationLecture 4: Dynamic models
linear s Lecture 4: s Hedibert Freitas Lopes The University of Chicago Booth School of Business 5807 South Woodlawn Avenue, Chicago, IL 60637 http://faculty.chicagobooth.edu/hedibert.lopes hlopes@chicagobooth.edu
More informationSUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )
Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND
More informationOn the conjugacy of off-line and on-line Sequential Monte Carlo Samplers. Working Paper Research. by Arnaud Dufays. September 2014 No 263
On the conjugacy of off-line and on-line Sequential Monte Carlo Samplers Working Paper Research by Arnaud Dufays September 2014 No 263 Editorial Director Jan Smets, Member of the Board of Directors of
More informationSequential Monte Carlo methods for system identification
Technical report arxiv:1503.06058v3 [stat.co] 10 Mar 2016 Sequential Monte Carlo methods for system identification Thomas B. Schön, Fredrik Lindsten, Johan Dahlin, Johan Wågberg, Christian A. Naesseth,
More informationAn Introduction to Particle Filtering
An Introduction to Particle Filtering Author: Lisa Turner Supervisor: Dr. Christopher Sherlock 10th May 2013 Abstract This report introduces the ideas behind particle filters, looking at the Kalman filter
More informationQuantile Filtering and Learning
Quantile Filtering and Learning Michael Johannes, Nicholas Polson and Seung M. Yae October 5, 2009 Abstract Quantile and least-absolute deviations (LAD) methods are popular robust statistical methods but
More informationOnline appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US
Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Gerdie Everaert 1, Lorenzo Pozzi 2, and Ruben Schoonackers 3 1 Ghent University & SHERPPA 2 Erasmus
More informationMarkov Chain Monte Carlo Methods for Stochastic Optimization
Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,
More informationNon-nested model selection. in unstable environments
Non-nested model selection in unstable environments Raffaella Giacomini UCLA (with Barbara Rossi, Duke) Motivation The problem: select between two competing models, based on how well they fit thedata Both
More informationInferring biological dynamics Iterated filtering (IF)
Inferring biological dynamics 101 3. Iterated filtering (IF) IF originated in 2006 [6]. For plug-and-play likelihood-based inference on POMP models, there are not many alternatives. Directly estimating
More informationDSGE MODELS WITH STUDENT-t ERRORS
Econometric Reviews, 33(1 4):152 171, 2014 Copyright Taylor & Francis Group, LLC ISSN: 0747-4938 print/1532-4168 online DOI: 10.1080/07474938.2013.807152 DSGE MODELS WITH STUDENT-t ERRORS Siddhartha Chib
More informationThe Kalman Filter ImPr Talk
The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman
More informationParticle Learning and Smoothing
Particle Learning and Smoothing Carlos Carvalho, Michael Johannes, Hedibert Lopes and Nicholas Polson This version: September 2009 First draft: December 2007 Abstract In this paper we develop particle
More informationAn Brief Overview of Particle Filtering
1 An Brief Overview of Particle Filtering Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ May 11th, 2010 Warwick University Centre for Systems
More informationPart 1: Expectation Propagation
Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 1: Expectation Propagation Tom Heskes Machine Learning Group, Institute for Computing and Information Sciences Radboud
More informationKernel Sequential Monte Carlo
Kernel Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) * equal contribution April 25, 2016 1 / 37 Section
More informationDivide-and-Conquer Sequential Monte Carlo
Divide-and-Conquer Joint work with: John Aston, Alexandre Bouchard-Côté, Brent Kirkpatrick, Fredrik Lindsten, Christian Næsseth, Thomas Schön University of Warwick a.m.johansen@warwick.ac.uk http://go.warwick.ac.uk/amjohansen/talks/
More informationPseudo-marginal MCMC methods for inference in latent variable models
Pseudo-marginal MCMC methods for inference in latent variable models Arnaud Doucet Department of Statistics, Oxford University Joint work with George Deligiannidis (Oxford) & Mike Pitt (Kings) MCQMC, 19/08/2016
More informationTutorial on ABC Algorithms
Tutorial on ABC Algorithms Dr Chris Drovandi Queensland University of Technology, Australia c.drovandi@qut.edu.au July 3, 2014 Notation Model parameter θ with prior π(θ) Likelihood is f(ý θ) with observed
More information