Bayesian Estimation of DSGE Models


1. Bayesian Estimation of DSGE Models

Stéphane Adjemian, Université du Maine, GAINS & CEPREMAP. June 28, 2011.

2. Bayesian paradigm (motivations)

Bayesian estimation of DSGE models with Dynare.
1. Data are not informative enough.
2. DSGE models are misspecified.
3. Model comparison.
Prior elicitation. Efficiency issues.

3. Bayesian paradigm (basics)

A model defines a parametrized joint probability distribution over a sample of variables, the likelihood:

$$p(\mathcal{Y}_T \mid \theta) \qquad (1)$$

We assume that our prior information about the parameters can be summarized by a joint probability density function; let the prior density be $p_0(\theta)$. The posterior distribution is then given by Bayes' theorem:

$$p_1(\theta \mid \mathcal{Y}_T) = \frac{p_0(\theta)\, p(\mathcal{Y}_T \mid \theta)}{p(\mathcal{Y}_T)} \qquad (2)$$

4. Bayesian paradigm (basics, cont'd)

The denominator is the marginal density of the sample:

$$p(\mathcal{Y}_T) = \int_\Theta p_0(\theta)\, p(\mathcal{Y}_T \mid \theta)\, d\theta \qquad (3)$$

a weighted mean of the sample's conditional densities over all possible values of the parameters. The posterior density is proportional to the product of the prior density and the density of the sample:

$$p_1(\theta \mid \mathcal{Y}_T) \propto p_0(\theta)\, p(\mathcal{Y}_T \mid \theta)$$

That is all we need for any inference about $\theta$! The prior density deforms the shape of the likelihood.
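To make this proportionality concrete, here is a minimal numerical sketch (my own illustration, not from the slides): it evaluates the unnormalized posterior kernel on a grid and divides by a numerical approximation of the marginal density (3). The sample, the N(0,4) prior, and the grid are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
y = 1.0 + rng.standard_normal(50)      # illustrative sample, true mu = 1

mu = np.linspace(-2.0, 4.0, 1001)      # grid over the parameter
dmu = mu[1] - mu[0]

log_prior = -0.5 * mu**2 / 4.0                          # assumed N(0, 4) prior
log_lik = np.array([-0.5 * np.sum((y - m)**2) for m in mu])

log_kernel = log_prior + log_lik                        # prior times likelihood, in logs
kernel = np.exp(log_kernel - log_kernel.max())          # guard against underflow
posterior = kernel / (kernel.sum() * dmu)               # divide by marginal density (3)

print("posterior mean:", (mu * posterior * dmu).sum().round(3))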

5. A simple example (I)

Data Generating Process: $y_t = \mu + \varepsilon_t$, where $\varepsilon_t \overset{iid}{\sim} \mathcal{N}(0,1)$ is a Gaussian white noise. Let $\mathcal{Y}_T \equiv (y_1,\dots,y_T)$. The likelihood is given by:

$$p(\mathcal{Y}_T \mid \mu) = (2\pi)^{-\frac{T}{2}}\, e^{-\frac{1}{2}\sum_{t=1}^T (y_t-\mu)^2}$$

and the ML estimator of $\mu$ is the sample mean:

$$\widehat{\mu}_{ML,T} = \frac{1}{T}\sum_{t=1}^T y_t \equiv \overline{y}$$

6. A simple example (II)

Note that the variance of this estimator is a simple function of the sample size: $\mathbb{V}[\widehat{\mu}_{ML,T}] = \frac{1}{T}$. Noting that

$$\sum_{t=1}^T (y_t-\mu)^2 = \nu s^2 + T(\mu - \widehat{\mu})^2$$

with $\nu = T-1$ and $s^2 = (T-1)^{-1}\sum_{t=1}^T (y_t - \widehat{\mu})^2$, the likelihood can be equivalently written as:

$$p(\mathcal{Y}_T \mid \mu) = (2\pi)^{-\frac{T}{2}}\, e^{-\frac{1}{2}\left(\nu s^2 + T(\mu-\widehat{\mu})^2\right)}$$

The two statistics $s^2$ and $\widehat{\mu}$ sum up the sample information.

7. A simple example (II, bis)

$$\begin{aligned}
\sum_{t=1}^T (y_t-\mu)^2 &= \sum_{t=1}^T \left([y_t-\widehat{\mu}] - [\mu-\widehat{\mu}]\right)^2 \\
&= \sum_{t=1}^T (y_t-\widehat{\mu})^2 + \sum_{t=1}^T (\mu-\widehat{\mu})^2 - 2\left(\sum_{t=1}^T y_t - T\widehat{\mu}\right)(\mu-\widehat{\mu}) \\
&= \nu s^2 + T(\mu-\widehat{\mu})^2
\end{aligned}$$

The last (cross) term cancels out by definition of the sample mean.

8. A simple example (III)

Let our prior be a Gaussian distribution with expectation $\mu_0$ and variance $\sigma_\mu^2$. The posterior density is defined, up to a constant, by:

$$p(\mu \mid \mathcal{Y}_T) \propto (2\pi\sigma_\mu^2)^{-\frac{1}{2}}\, e^{-\frac{1}{2}\frac{(\mu-\mu_0)^2}{\sigma_\mu^2}} \times (2\pi)^{-\frac{T}{2}}\, e^{-\frac{1}{2}\left(\nu s^2 + T(\mu-\widehat{\mu})^2\right)}$$

where the missing constant (the denominator) is the marginal density, which does not depend on $\mu$. Collecting the terms in $\mu$, we also have:

$$p(\mu \mid \mathcal{Y}_T) \propto \exp\left\{-\frac{1}{2}\left(T(\mu-\widehat{\mu})^2 + \frac{(\mu-\mu_0)^2}{\sigma_\mu^2}\right)\right\}$$

9. A simple example (IV)

Define $A(\mu) = T(\mu-\widehat{\mu})^2 + \frac{(\mu-\mu_0)^2}{\sigma_\mu^2}$ and complete the square:

$$\begin{aligned}
A(\mu) &= T\left(\mu^2 + \widehat{\mu}^2 - 2\mu\widehat{\mu}\right) + \frac{1}{\sigma_\mu^2}\left(\mu^2 + \mu_0^2 - 2\mu\mu_0\right) \\
&= \left(T + \frac{1}{\sigma_\mu^2}\right)\mu^2 - 2\mu\left(T\widehat{\mu} + \frac{1}{\sigma_\mu^2}\mu_0\right) + T\widehat{\mu}^2 + \frac{\mu_0^2}{\sigma_\mu^2} \\
&= \left(T + \frac{1}{\sigma_\mu^2}\right)\left(\mu - \frac{T\widehat{\mu} + \frac{1}{\sigma_\mu^2}\mu_0}{T + \frac{1}{\sigma_\mu^2}}\right)^2 + \text{terms independent of } \mu
\end{aligned}$$

10. A simple example (V)

Finally we have:

$$p(\mu \mid \mathcal{Y}_T) \propto \exp\left\{-\frac{1}{2}\left(T + \frac{1}{\sigma_\mu^2}\right)\left[\mu - \frac{T\widehat{\mu} + \frac{1}{\sigma_\mu^2}\mu_0}{T + \frac{1}{\sigma_\mu^2}}\right]^2\right\}$$

Up to a constant, this is a Gaussian density with (posterior) expectation

$$\mathbb{E}[\mu] = \frac{T\widehat{\mu} + \frac{1}{\sigma_\mu^2}\mu_0}{T + \frac{1}{\sigma_\mu^2}}$$

and (posterior) variance

$$\mathbb{V}[\mu] = \frac{1}{T + \frac{1}{\sigma_\mu^2}}$$

11. A simple example (VI, the bridge)

The posterior mean is a convex combination of the prior mean and the ML estimate:
If $\sigma_\mu^2 \rightarrow \infty$ (no prior information), then $\mathbb{E}[\mu] \rightarrow \widehat{\mu}$ (ML).
If $\sigma_\mu^2 \rightarrow 0$ (calibration), then $\mathbb{E}[\mu] \rightarrow \mu_0$.
If $\sigma_\mu^2 < \infty$, then the variance of the ML estimator is greater than the posterior variance.
Things are not so simple if the model is nonlinear in the estimated parameters... Two routes: an asymptotic (Gaussian) approximation, or a simulation based approach (MCMC, Metropolis-Hastings, ...).
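A minimal numerical sketch of this bridge (illustrative data and prior values, not from the slides): it computes the posterior mean and variance from the formulas above and shows the two limiting cases.

import numpy as np

def gaussian_posterior(y, mu0, sigma2_mu):
    """Posterior mean/variance for mu when y_t = mu + e_t, e_t ~ N(0,1)."""
    T, mu_hat = len(y), np.mean(y)
    precision = T + 1.0 / sigma2_mu
    post_mean = (T * mu_hat + mu0 / sigma2_mu) / precision
    return post_mean, 1.0 / precision

rng = np.random.default_rng(1)
y = 2.0 + rng.standard_normal(100)

for s2 in (1e6, 1.0, 1e-6):   # diffuse, informative, and dogmatic prior
    m, v = gaussian_posterior(y, mu0=0.0, sigma2_mu=s2)
    print(f"sigma2_mu={s2:8.0e}  E[mu]={m: .4f}  V[mu]={v: .2e}")

# With s2 -> infinity, E[mu] -> sample mean (ML); with s2 -> 0, E[mu] -> mu0.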

12. Bayesian paradigm (II, model comparison)

Comparison of the marginal densities of the (same) data across models: $p(\mathcal{Y}_T \mid \mathcal{I})$ measures the fit of model $\mathcal{I}$. Suppose we have a prior distribution over models $\mathcal{A}, \mathcal{B}, \dots$: $p(\mathcal{A}), p(\mathcal{B}), \dots$ Again using Bayes' theorem, we can compute the posterior distribution over models:

$$p(\mathcal{I} \mid \mathcal{Y}_T) = \frac{p(\mathcal{I})\, p(\mathcal{Y}_T \mid \mathcal{I})}{\sum_{\mathcal{I}} p(\mathcal{I})\, p(\mathcal{Y}_T \mid \mathcal{I})}$$
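A small sketch of this computation (the log marginal densities below are made up for illustration; in practice they come out of the estimation of each model). Working in logs avoids underflow, since marginal densities of a full sample are astronomically small numbers.

import numpy as np

# Hypothetical log marginal densities ln p(Y_T | I) for three models
log_mdd = np.array([-1320.4, -1318.9, -1325.1])
prior = np.array([1/3, 1/3, 1/3])              # uniform prior over models

# Posterior model probabilities, computed in logs for numerical stability
log_post = np.log(prior) + log_mdd
log_post -= log_post.max()                     # log-sum-exp trick
post = np.exp(log_post) / np.exp(log_post).sum()

print(dict(zip(["A", "B", "C"], post.round(3))))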

13. Estimation of DSGE models (I, reduced form)

Compute the steady state of the model (a system of nonlinear equations). Compute a linear approximation of the model around the steady state. Solve the linearized model:

$$\underset{n\times 1}{y_t - \bar{y}(\theta)} = \underset{n\times n}{T(\theta)}\left(y_{t-1} - \bar{y}(\theta)\right) + \underset{n\times q}{R(\theta)}\,\underset{q\times 1}{\varepsilon_t} \qquad (4)$$

where $n$ is the number of endogenous variables and $q$ is the number of structural innovations. The reduced-form model is nonlinear w.r.t. the deep parameters, and we do not observe all the endogenous variables.

14. Estimation of DSGE models (II, SSM)

Let $y_t^\star$ be a subset of $y_t$ gathering the $p$ observed variables. To bring the model to the data, we use a state-space representation:

$$y_t^\star = Z\left(\widehat{y}_t + \bar{y}(\theta)\right) + \eta_t \qquad (5a)$$
$$\widehat{y}_t = T(\theta)\,\widehat{y}_{t-1} + R(\theta)\,\varepsilon_t \qquad (5b)$$

where $\widehat{y}_t = y_t - \bar{y}(\theta)$. Equation (5b), the state equation, is the reduced form of the DSGE model. Equation (5a), the measurement equation, selects a subset of the endogenous variables: $Z$ is a $p\times n$ matrix filled with zeros and ones.
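To fix ideas, here is a small numpy sketch that simulates this state-space representation; all the matrices (a 2-state, 1-shock, 1-observable system) are made-up illustrative values, not a real DSGE reduced form.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative (made-up) reduced form: n=2 states, q=1 shock, p=1 observable
T_mat = np.array([[0.9, 0.1],
                  [0.0, 0.5]])          # T(theta), n x n
R_mat = np.array([[1.0],
                  [0.3]])               # R(theta), n x q
Z     = np.array([[1.0, 0.0]])          # selection matrix, p x n
ybar  = np.array([2.0, 1.0])            # steady state ybar(theta)
sig_eta = 0.1                           # std of the measurement error

# Simulate the state equation (5b) and the measurement equation (5a)
Tobs, yhat = 200, np.zeros(2)
ystar = np.empty(Tobs)
for t in range(Tobs):
    yhat = T_mat @ yhat + R_mat @ rng.standard_normal(1)
    ystar[t] = Z @ (yhat + ybar) + sig_eta * rng.standard_normal()

print("sample mean of the observable:", ystar.mean().round(3))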

15. Estimation of DSGE models (III, likelihood) a

Let $\mathcal{Y}_T^\star = \{y_1^\star, y_2^\star, \dots, y_T^\star\}$ be the sample. Let $\psi$ be the vector of parameters to be estimated ($\theta$ plus the covariance matrices of $\varepsilon$ and $\eta$). The likelihood, that is the density of $\mathcal{Y}_T^\star$ conditional on the parameters, is given by:

$$\mathcal{L}(\psi; \mathcal{Y}_T^\star) = p(\mathcal{Y}_T^\star \mid \psi) = p(y_0^\star \mid \psi)\prod_{t=1}^T p\left(y_t^\star \mid \mathcal{Y}_{t-1}^\star, \psi\right) \qquad (6)$$

To evaluate the likelihood we need to specify the marginal density $p(y_0^\star \mid \psi)$ (or $p(y_0 \mid \psi)$) and the conditional density $p\left(y_t^\star \mid \mathcal{Y}_{t-1}^\star, \psi\right)$.

16. Estimation of DSGE models (III, likelihood) b

The state-space model (5) describes the evolution of the distribution of the endogenous variables. The distribution of the initial condition $y_0$ is set equal to the ergodic distribution of the stochastic difference equation (so that the distribution of $y_t$ is time invariant). Because we consider a linear(ized) reduced-form model and the disturbances are assumed to be Gaussian (say $\varepsilon \sim \mathcal{N}(0,\Sigma)$), the initial (ergodic) distribution is also Gaussian:

$$y_0 \sim \mathcal{N}\left(\mathbb{E}[y_t], \mathbb{V}[y_t]\right)$$

Unit roots call for a diffuse Kalman filter.

17. Estimation (III, likelihood) c

Evaluating the density of $y_t^\star \mid \mathcal{Y}_{t-1}^\star$ is not trivial, because $y_t^\star$ also depends on unobserved endogenous variables. The following identity can be used:

$$p\left(y_t^\star \mid \mathcal{Y}_{t-1}^\star, \psi\right) = \int_\Lambda p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right) dy_t \qquad (7)$$

The density of $y_t^\star \mid \mathcal{Y}_{t-1}^\star$ is the mean of the density of $y_t^\star \mid y_t$ weighted by the density of $y_t \mid \mathcal{Y}_{t-1}^\star$. The first conditional density is given by the measurement equation (5a). A Kalman filter is used to evaluate the density of the latent variables ($y_t$) conditional on the sample up to time $t-1$ ($\mathcal{Y}_{t-1}^\star$) [the predictive density].

18. Estimation (III, likelihood) d

The Kalman filter can be seen as a Bayesian recursive estimation routine:

$$p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right) = \int_\Lambda p\left(y_t \mid y_{t-1}, \psi\right) p\left(y_{t-1} \mid \mathcal{Y}_{t-1}^\star, \psi\right) dy_{t-1} \qquad (8a)$$

$$p\left(y_t \mid \mathcal{Y}_t^\star, \psi\right) = \frac{p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right)}{\int_\Lambda p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right) dy_t} \qquad (8b)$$

Equation (8a) says that the predictive density of the latent variables is the mean of the density of $y_t \mid y_{t-1}$, given by the state equation (5b), weighted by the density of $y_{t-1}$ conditional on $\mathcal{Y}_{t-1}^\star$ (given by (8b)). The update equation (8b) is an application of Bayes' theorem: how to update our knowledge about the latent variables when new information (data) becomes available.

19. Estimation (III, likelihood) e

$$p\left(y_t \mid \mathcal{Y}_t^\star, \psi\right) = \frac{p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right)}{\int_\Lambda p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right) dy_t}$$

$p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right)$ is the a priori density of the latent variables at time $t$. $p\left(y_t^\star \mid y_t, \psi\right)$ is the density of the observation at time $t$ knowing the state and the parameters (obtained from the measurement equation (5a)): the likelihood associated with $y_t^\star$. $\int_\Lambda p\left(y_t^\star \mid y_t, \psi\right) p\left(y_t \mid \mathcal{Y}_{t-1}^\star, \psi\right) dy_t$ is the marginal density of the new information.

20. Estimation (III, likelihood) f

The linear Gaussian Kalman filter recursion is given by:

$$\begin{aligned}
v_t &= y_t^\star - Z\left(\widehat{y}_t + \bar{y}(\theta)\right) \\
F_t &= Z P_t Z' + \mathbb{V}[\eta] \\
K_t &= T(\theta) P_t Z' F_t^{-1} \\
\widehat{y}_{t+1} &= T(\theta)\widehat{y}_t + K_t v_t \\
P_{t+1} &= T(\theta) P_t \left(T(\theta) - K_t Z\right)' + R(\theta)\Sigma R(\theta)'
\end{aligned}$$

for $t = 1, \dots, T$, with $\widehat{y}_0$ and $P_0$ given. Finally the log-likelihood is:

$$\ln \mathcal{L}\left(\psi \mid \mathcal{Y}_T^\star\right) = -\frac{Tk}{2}\ln(2\pi) - \frac{1}{2}\sum_{t=1}^T \ln|F_t| - \frac{1}{2}\sum_{t=1}^T v_t' F_t^{-1} v_t$$

where $k$ is the number of observed variables. References: Harvey, Hamilton.
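A compact numpy sketch of this recursion, following the slide's notation (a sketch, not Dynare's implementation; the matrices and covariances in the usage comment reuse the illustrative state-space block above and are assumptions):

import numpy as np

def kalman_loglik(ystar, T_mat, R_mat, Z, ybar, Sigma, Veta, yhat0, P0):
    """Log-likelihood of a linear Gaussian state-space model,
    using the prediction-form Kalman recursion above."""
    yhat, P = yhat0.copy(), P0.copy()
    loglik = 0.0
    k = Z.shape[0]                                 # number of observed variables
    for ys in ystar:                               # ystar has shape (T, k)
        v = ys - Z @ (yhat + ybar)                 # prediction error v_t
        F = Z @ P @ Z.T + Veta                     # its covariance F_t
        Finv = np.linalg.inv(F)
        K = T_mat @ P @ Z.T @ Finv                 # Kalman gain K_t
        loglik += -0.5 * (k * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ Finv @ v)
        yhat = T_mat @ yhat + K @ v                # state prediction
        P = T_mat @ P @ (T_mat - K @ Z).T + R_mat @ Sigma @ R_mat.T
    return loglik

# Example call (with the illustrative matrices defined earlier):
# kalman_loglik(ystar.reshape(-1, 1), T_mat, R_mat, Z, ybar,
#               Sigma=np.eye(1), Veta=np.array([[0.01]]),
#               yhat0=np.zeros(2), P0=np.eye(2))

For $P_0$ one can use the ergodic covariance of the states, i.e. the solution of the Lyapunov equation $P = T P T' + R\Sigma R'$, matching the initialization discussed on slide 16.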

21. Simulations for exact posterior analysis

Noting that

$$\mathbb{E}[\varphi(\psi)] = \int_\Psi \varphi(\psi)\, p_1(\psi \mid \mathcal{Y}_T^\star)\, d\psi$$

we can use the empirical mean of $\left(\varphi(\psi^{(1)}), \varphi(\psi^{(2)}), \dots, \varphi(\psi^{(n)})\right)$, where the $\psi^{(i)}$ are draws from the posterior distribution, to evaluate the expectation of $\varphi(\psi)$. The approximation error goes to zero as $n \rightarrow \infty$. We need to simulate draws from the posterior distribution: Metropolis-Hastings. We build a stochastic recurrence whose limiting distribution is the posterior distribution.

22. Simulations (Metropolis-Hastings) a

1. Choose a starting point $\Psi_0$ and run a loop over 2-4.
2. Draw a proposal $\Psi^\star$ from a jumping distribution $J(\Psi^\star \mid \Psi_{t-1}) = \mathcal{N}(\Psi_{t-1}, c\,\Omega_m)$.
3. Compute the acceptance ratio

$$r = \frac{p_1(\Psi^\star \mid \mathcal{Y}_T)}{p_1(\Psi_{t-1} \mid \mathcal{Y}_T)} = \frac{\mathcal{K}(\Psi^\star \mid \mathcal{Y}_T)}{\mathcal{K}(\Psi_{t-1} \mid \mathcal{Y}_T)}$$

4. Finally set $\Psi_t = \Psi^\star$ with probability $\min(r, 1)$, and $\Psi_t = \Psi_{t-1}$ otherwise.
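A minimal random-walk Metropolis-Hastings sketch in that spirit (the bivariate Gaussian target, the scale c and the matrix Omega are illustrative assumptions; a real run would plug in the DSGE log posterior kernel):

import numpy as np

def rw_metropolis(log_kernel, psi0, c, Omega, ndraws, seed=0):
    """Random-walk Metropolis-Hastings with N(psi_{t-1}, c*Omega) proposals."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(c * Omega)
    draws = np.empty((ndraws, len(psi0)))
    psi, logk = np.asarray(psi0, float), log_kernel(psi0)
    accepted = 0
    for t in range(ndraws):
        prop = psi + chol @ rng.standard_normal(len(psi))
        logk_prop = log_kernel(prop)
        if np.log(rng.uniform()) < logk_prop - logk:   # accept w.p. min(r, 1)
            psi, logk = prop, logk_prop
            accepted += 1
        draws[t] = psi
    return draws, accepted / ndraws

# Toy target: a bivariate standard Gaussian posterior kernel (illustrative)
logk = lambda p: -0.5 * np.sum(p**2)
draws, rate = rw_metropolis(logk, psi0=np.zeros(2), c=0.5,
                            Omega=np.eye(2), ndraws=5000)
print("acceptance rate:", round(rate, 2))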

23. Simulations (Metropolis-Hastings) b

[Figure: the posterior kernel $\mathcal{K}(\theta)$, with the kernel value $\mathcal{K}(\theta^0)$ at the initial draw $\theta^0$.]

24. Simulations (Metropolis-Hastings) c

[Figure: a proposal $\theta^\star$ is accepted, $\theta^1 = \theta^\star$, so that $\mathcal{K}(\theta^1) = \mathcal{K}(\theta^\star)$.]

25. Simulations (Metropolis-Hastings) d

[Figure: a new proposal $\theta^\star$ with $\mathcal{K}(\theta^\star) < \mathcal{K}(\theta^1)$; whether it is accepted is uncertain, since acceptance occurs only with probability $r = \mathcal{K}(\theta^\star)/\mathcal{K}(\theta^1)$.]

26. Simulations (Metropolis-Hastings) e

How should we choose the scale factor $c$ (the variance of the jumping distribution)? The acceptance rate should be strictly positive, but not too large. How many draws? Convergence has to be assessed... With parallel Markov chains, pooled moments have to be close to within-chain moments (see the sketch below).
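A sketch of that pooled-versus-within comparison across parallel chains, in the spirit of Brooks-Gelman style diagnostics (the chains below are illustrative draws, not real MCMC output):

import numpy as np

def pooled_vs_within(chains):
    """Compare the pooled variance with the average within-chain variance
    for a set of parallel Markov chains (one row per chain)."""
    chains = np.asarray(chains)
    within = chains.var(axis=1, ddof=1).mean()     # mean of per-chain variances
    pooled = chains.reshape(-1).var(ddof=1)        # variance of all draws pooled
    return pooled, within

# Two illustrative chains drawn from the same toy target
rng = np.random.default_rng(3)
chains = rng.standard_normal((2, 10000))
pooled, within = pooled_vs_within(chains)
print(f"pooled={pooled:.3f}  within={within:.3f}")  # close if chains converged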

27. Dynare syntax (I)

var A B C;
varexo E;
parameters a b c d e f;

model(linear);
A = A(+1) - b/e*(B - C(+1) + A(+1) - A);
C = f*A + (1-d)*C(-1);
...
end;

28. Dynare syntax (II)

estimated_params;
stderr e, uniform_pdf, , , 0, 1;
...
a, normal_pdf, 1.5, 0.25;
b, gamma_pdf, 1, 2;
c, gamma_pdf, 2, 2, 1;
...
end;

varobs pie r y rw;

estimation(datafile=dataraba, first_obs=10, ..., mh_jscale=0.5);

29. Prior elicitation

The results may depend heavily on our choice of the prior density or on the parametrization of the model (though not asymptotically). How to choose the prior? Subjective choice (data driven or theoretical), example: the Calvo parameter for the Phillips curve. Objective choice, examples: the (optimized) Minnesota prior for VARs (Phillips, 1996). Robustness of the results must be evaluated: try different parametrizations; use more general prior densities; use uninformative priors.

30. Prior elicitation (parametrization of the model) a

Estimation of the Phillips curve:

$$\pi_t = \beta \mathbb{E}_t\pi_{t+1} + \frac{(1-\xi_p)(1-\beta\xi_p)}{\xi_p}\left((\sigma_c + \sigma_l)\, y_t + \tau_t\right)$$

With probability $1-\xi_p$ an intermediate firm is able to optimally choose its price at time $t$; with the (Calvo) probability $\xi_p$ its price is instead indexed to past inflation and/or steady-state inflation. Let $\alpha_p \equiv \frac{1}{1-\xi_p}$ be the expected period length during which a firm will not optimally adjust its price. Let $\lambda = \frac{(1-\xi_p)(1-\beta\xi_p)}{\xi_p}$ be the slope of the Phillips curve. Suppose that $\beta$, $\sigma_c$ and $\sigma_l$ are known.

31. Prior elicitation (parametrization of the model) b

The prior may be defined on $\xi_p$, on $\alpha_p$, or on the slope $\lambda$. Say we choose a uniform prior for the Calvo probability: $\xi_p \sim \mathcal{U}_{[.51,\,.99]}$. The prior mean is $.75$ (so that the implied value for $\alpha_p$ is 4 quarters). This prior is often thought of as a non-informative prior... An alternative would be to choose a uniform prior for $\alpha_p$:

$$\alpha_p \sim \mathcal{U}_{\left[\frac{1}{1-.51},\,\frac{1}{1-.99}\right]}$$

These two priors are very different!

32. Prior elicitation (parametrization of the model) c

The prior on $\alpha_p$ is much more informative than the prior on $\xi_p$.

33. Prior elicitation (parametrization of the model) d

Implied prior density of $\xi_p = 1 - \frac{1}{\alpha_p}$ if the prior density of $\alpha_p$ is uniform.
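A short sketch of this change of variables (my own illustration, using the interval from the previous slide): it simulates a uniform prior on $\alpha_p$ and tabulates the implied, far-from-uniform prior on $\xi_p = 1 - 1/\alpha_p$.

import numpy as np

rng = np.random.default_rng(4)

# Uniform prior on alpha_p over [1/(1-.51), 1/(1-.99)], roughly [2.04, 100]
alpha_p = rng.uniform(1/(1-.51), 1/(1-.99), size=100_000)
xi_p = 1.0 - 1.0/alpha_p          # implied draws of the Calvo probability

# By the Jacobian, the implied density of xi_p is proportional to (1-xi_p)^(-2),
# so most of the prior mass piles up near xi_p = .99:
hist, edges = np.histogram(xi_p, bins=[.51, .6, .7, .8, .9, .99], density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"[{lo:.2f}, {hi:.2f}): {h:.3f}")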

34. Prior elicitation (more general prior densities)

Robustness of the results may be evaluated by considering a more general prior density. For instance, in our simple example we could assume a Student-t prior density for $\mu$ instead of a Gaussian density.

35. Prior elicitation (flat prior)

If a parameter, say $\mu$, can take values between $-\infty$ and $\infty$, the flat prior is a uniform density over $(-\infty, \infty)$. If a parameter, say $\sigma$, can take values between $0$ and $\infty$, the flat prior is a uniform density over $(-\infty, \infty)$ for $\log\sigma$:

$$p_0(\log\sigma) \propto 1 \;\Leftrightarrow\; p_0(\sigma) \propto \frac{1}{\sigma}$$

Invariance. Why is this prior non-informative?... $\int_{-\infty}^{\infty} p_0(\mu)\, d\mu$ is not defined! Improper prior. Practical implications for DSGE estimation.
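The equivalence above is a one-line change of variables; a worked version of the step (my own filling-in):

$$p_0(\sigma) = p_0(\log\sigma)\left|\frac{d\log\sigma}{d\sigma}\right| \propto 1\cdot\frac{1}{\sigma} = \frac{1}{\sigma}$$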

36. Prior elicitation (non-informative prior)

An alternative, proposed by Jeffreys, is to use the Fisher information matrix:

$$p_0(\psi) \propto \left|\mathcal{I}(\psi)\right|^{\frac{1}{2}}$$

with

$$\mathcal{I}(\psi) = \mathbb{E}\left[\left(\frac{\partial \ln p(\mathcal{Y}_T \mid \psi)}{\partial \psi}\right)\left(\frac{\partial \ln p(\mathcal{Y}_T \mid \psi)}{\partial \psi}\right)'\right]$$

The idea is to mimic the information in the data... Automatic choice of the prior. Invariance to any continuous transformation of the parameters. Very different results (compared to the flat prior): the unit-root controversy.
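As a quick illustration (my own worked example, using the Gaussian mean model from the simple example above), the Jeffreys prior for $\mu$ is flat:

$$\frac{\partial \ln p(\mathcal{Y}_T \mid \mu)}{\partial \mu} = \sum_{t=1}^T (y_t-\mu) \quad\Rightarrow\quad \mathcal{I}(\mu) = \mathbb{E}\left[\Big(\textstyle\sum_{t=1}^T (y_t-\mu)\Big)^2\right] = T \quad\Rightarrow\quad p_0(\mu) \propto \sqrt{T} \propto 1$$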

37. Effective prior mass

Dynare excludes parameter values for which the steady state does not exist, or for which the Blanchard-Kahn (BK) conditions are not satisfied. The effective prior mass can therefore be less than 1. Comparison of the marginal densities of the data is not informative if the prior mass is not invariant across models. The estimation of the posterior mode is more difficult if the effective prior mass is less than 1.

38. Efficiency (I)

If possible, use the option use_dll (needs a compiler, e.g. gcc). Do not let Dynare compute the steady state! Even if there is no closed-form solution for the steady state, the static model can be concentrated and reduced to a small nonlinear system of equations, which can be solved with a standard Newton algorithm in the steadystate file. This numerical part can be done in a mex routine called by the steadystate file. Consider an alternative initialization of the Kalman filter (lik_init=4).
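To see what the two initializations compared on the next slide mean, here is a small numpy sketch (a sketch only, reusing the illustrative matrices from earlier; not Dynare's code): the ergodic covariance is the fixed point of the Lyapunov equation $P = TPT' + R\Sigma R'$, while the alternative initializes at the fixed point of the Riccati covariance recursion of the filter.

import numpy as np

def lyapunov_fixed_point(T_mat, RSigmaR, tol=1e-12):
    """Ergodic covariance: fixed point of P = T P T' + R Sigma R'."""
    P = np.zeros_like(RSigmaR)
    while True:
        P_new = T_mat @ P @ T_mat.T + RSigmaR
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new

def riccati_fixed_point(T_mat, RSigmaR, Z, Veta, tol=1e-12):
    """Steady-state filter covariance: fixed point of the Riccati recursion."""
    P = lyapunov_fixed_point(T_mat, RSigmaR)   # start from the ergodic covariance
    while True:
        F = Z @ P @ Z.T + Veta
        K = T_mat @ P @ Z.T @ np.linalg.inv(F)
        P_new = T_mat @ P @ (T_mat - K @ Z).T + RSigmaR
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new

T_mat = np.array([[0.9, 0.1], [0.0, 0.5]])
RSigmaR = np.array([[1.0, 0.3], [0.3, 0.09]])   # R Sigma R' for a unit-variance shock
Z, Veta = np.array([[1.0, 0.0]]), np.array([[0.01]])
print(riccati_fixed_point(T_mat, RSigmaR, Z, Veta).round(4))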

39. Efficiency (II)

Table 1: Estimation of fs2000, comparing two initializations of the Kalman filter (fixed point of the state equation vs. fixed point of the Riccati equation) in terms of execution time, likelihood, marginal density, and the parameter estimates $\alpha$, $\beta$, $\gamma_a$, $\gamma_m$, $\rho$, $\psi$, $\delta$, $\sigma_{\varepsilon_a}$, $\sigma_{\varepsilon_m}$. In percentage, the maximum deviation is less than
