Bayesian Inference for DSGE Models. Lawrence J. Christiano


2 Outline
State space/observer form: convenient for model estimation and many other things.
Bayesian inference: Bayes' rule, Monte Carlo integration, the MCMC algorithm, the Laplace approximation.

3 State Space/Observer Form
Compact summary of the model, and of the mapping between the model and the data used in the analysis.
Typically, data are available in log form, so the following is useful. If x is the steady state of x_t:

  x̂_t ≡ (x_t − x)/x  ⇒  x_t/x = 1 + x̂_t  ⇒  log(x_t/x) = log(1 + x̂_t) ≈ x̂_t.

Suppose we have a model solution in hand:[1]

  z_t = A z_{t−1} + B s_t,
  s_t = P s_{t−1} + e_t,  E e_t e_t' = D.

[1] Notation taken from the solution lecture notes, Korea_2012/lecture_on_solving_rev.pdf
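The solution above can be simulated directly. The sketch below uses illustrative scalar values for A, B, P, and D (not a calibrated model) just to show the recursion:

```python
import numpy as np

# Simulate z_t = A z_{t-1} + B s_t, s_t = P s_{t-1} + e_t, E e_t e_t' = D,
# with hypothetical scalar coefficients (not from any calibrated model).
rng = np.random.default_rng(0)
A, B, P, D = 0.9, 0.5, 0.8, 0.01

T = 200
z = np.zeros(T)       # endogenous variables (log deviations from steady state)
s = np.zeros(T)       # exogenous shock process
for t in range(1, T):
    e = rng.normal(0.0, np.sqrt(D))   # innovation with variance D
    s[t] = P * s[t - 1] + e
    z[t] = A * z[t - 1] + B * s[t]
```

With |A| < 1 and |P| < 1 the simulated series stay bounded, consistent with a stationary log-linear solution.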

4 State Space/Observer Form
Suppose we have a model in which the date-t endogenous variables are capital, K_{t+1}, and labor, N_t:

  z_t = (K̂_{t+1}, N̂_t)',  s_t = ε̂_t,  e_t = e_t.

Data may include variables in z_t and/or other variables. For example, suppose the available data are N_t and GDP, y_t, and the production function in the model is

  y_t = ε_t K_t^α N_t^{1−α},

so that

  ŷ_t = ε̂_t + α K̂_t + (1 − α) N̂_t = (0, 1 − α) z_t + (α, 0) z_{t−1} + s_t.

From the properties of ŷ_t and N̂_t:

  Y_t^data = (log y_t, log N_t)' = (log y, log N)' + (ŷ_t, N̂_t)'.

5 State Space/Observer Form
Model prediction for the data:

  Y_t^data = (log y, log N)' + (ŷ_t, N̂_t)' = a + H x_t,

where

  x_t = (z_t', z_{t−1}', ε̂_t)',  a = (log y, log N)',

  H = [ 0  1−α  α  0  1
        0   1   0  0  0 ].

The observer equation may include measurement error, w_t:

  Y_t^data = a + H x_t + w_t,  E w_t w_t' = R.

Semantics: x_t is the state of the system (do not confuse it with the economic state, (K_t, ε_t)!).

6 State Space/Observer Form
Law of motion of the state, x_t (state-space equation):

  x_t = F x_{t−1} + u_t,  E u_t u_t' = Q,

where

  x_t = (z_t', z_{t−1}', s_t')',

  F = [ A  0  BP          u_t = [ B         Q = [ BDB'  0  BD
        I  0  0                   0               0     0  0
        0  0  P ],                I ] e_t,        DB'   0  D ].
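Assembling F, u_t, and Q from the solution matrices is pure bookkeeping. A minimal sketch, with hypothetical small matrices standing in for (A, B, P, D):

```python
import numpy as np

# Build the state-space matrices F and Q from the solution matrices.
# Dimensions and values here are illustrative, not from a real model.
nz, ns = 2, 1                      # sizes of z_t and s_t
A = np.array([[0.9, 0.1], [0.0, 0.5]])
B = np.array([[1.0], [0.3]])
P = np.array([[0.8]])
D = np.array([[0.01]])             # E e_t e_t'

Z = np.zeros
F = np.block([[A,           Z((nz, nz)), B @ P],
              [np.eye(nz),  Z((nz, nz)), Z((nz, ns))],
              [Z((ns, nz)), Z((ns, nz)), P]])

# u_t = (B; 0; I) e_t, so Q = E u_t u_t' = S D S':
S = np.vstack([B, Z((nz, ns)), np.eye(ns)])
Q = S @ D @ S.T
```

The zero blocks in F reflect that z_{t−1} enters x_t only as a lag, and s_t loads on s_{t−1} through P alone.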

7 Uses of State Space/Observer Form
Estimation of θ, and forecasting x_t and Y_t^data.
Can take into account situations in which the data represent a mixture of quarterly, monthly, and daily observations.
"Data-rich" estimation: could include several data measures (e.g., employment based on surveys of establishments and on surveys of households) of a single model concept.
Useful for solving the following forecasting problems:
  Filtering (mainly of technical interest in computing the likelihood function):
    P[ x_t | Y_{t−1}^data, Y_{t−2}^data, ..., Y_1^data ],  t = 1, 2, ..., T.
  Smoothing:
    P[ x_t | Y_T^data, ..., Y_1^data ],  t = 1, 2, ..., T.
  Example: the real rate of interest and the output gap can be recovered from x_t using a simple New Keynesian model.
Useful for deriving a model's implications for vector autoregressions.

8 Mixed Monthly/Quarterly Observations
Different data arrive at different frequencies: daily, monthly, quarterly, etc. This feature can easily be handled in the state space/observer system.
Example:
  suppose inflation and hours are monthly: t = 0, 1/3, 2/3, 1, 4/3, 5/3, 2, ...
  suppose GDP is quarterly: t = 0, 1, 2, 3, ...

  Y_t^data = ( GDP_t
               monthly inflation_t
               monthly inflation_{t−1/3}
               monthly inflation_{t−2/3}
               hours_t
               hours_{t−1/3}
               hours_{t−2/3} ),  t = 0, 1, 2, ...

That is, we can think of our data set as actually being quarterly, with quarterly observations on the first month's inflation, quarterly observations on the second month's inflation, etc.

9 Mixed Monthly/Quarterly Observations
Problem: find a state space/observer system in which the observed data are

  Y_t^data = ( GDP_t
               monthly inflation_t
               monthly inflation_{t−1/3}
               monthly inflation_{t−2/3}
               hours_t
               hours_{t−1/3}
               hours_{t−2/3} ),  t = 0, 1, 2, ...

Solution: easy!

10 Mixed Monthly/Quarterly Observations
Model timing: t = 0, 1/3, 2/3, ...

  z_t = A z_{t−1/3} + B s_t,  s_t = P s_{t−1/3} + e_t,  E e_t e_t' = D.

Monthly state space/observer system, t = 0, 1/3, 2/3, ...:

  x_t = F x_{t−1/3} + u_t,  E u_t u_t' = Q,  u_t iid,
  Y_t = H x_t,  Y_t = (y_t, π_t, h_t)'.

Note: first-order vector autoregressive representation for the quarterly state:

  x_t = F³ x_{t−1} + u_t + F u_{t−1/3} + F² u_{t−2/3},

where u_t + F u_{t−1/3} + F² u_{t−2/3} is iid for t = 0, 1, 2, ...

11 Mixed Monthly/Quarterly Observations
Stack the three months of a quarter,

  x̃_t ≡ ( x_t', x_{t−1/3}', x_{t−2/3}' )'.

Iterating the monthly law of motion gives, in quarterly data, t = 0, 1, 2, ...,

  x̃_t = F̃ x̃_{t−1} + ũ_t,  ũ_t iid,

where

  F̃ = [ F³  0  0
        F²  0  0
        F   0  0 ],

  ũ_t = S ( u_t', u_{t−1/3}', u_{t−2/3}' )',  S = [ I  F  F²
                                                    0  I  F
                                                    0  0  I ],

  E ũ_t ũ_t' = Q̃ = S diag(Q, Q, Q) S'.
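The stacking above can be checked mechanically. A sketch with a hypothetical monthly transition F and innovation covariance Q; the test of correctness is that the top-left block of Q̃ equals Q + FQF' + F²Q(F²)', the covariance of u_t + F u_{t−1/3} + F² u_{t−2/3}:

```python
import numpy as np

# Build the stacked quarterly system from a monthly F and Q.
# The numerical values are illustrative.
F = np.array([[0.9, 0.2], [0.0, 0.7]])
Q = np.diag([0.01, 0.04])
n = F.shape[0]
Z, I = np.zeros((n, n)), np.eye(n)

F_tilde = np.block([[np.linalg.matrix_power(F, 3), Z, Z],
                    [F @ F,                        Z, Z],
                    [F,                            Z, Z]])

# S maps the stacked monthly shocks (u_t, u_{t-1/3}, u_{t-2/3}) into u~_t
S = np.block([[I, F, F @ F],
              [Z, I, F],
              [Z, Z, I]])
Q_tilde = S @ np.kron(np.eye(3), Q) @ S.T
```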

12 Mixed Monthly/Quarterly Observations
Conclude: the state space/observer system for mixed monthly/quarterly data, for t = 0, 1, 2, ..., is

  x̃_t = F̃ x̃_{t−1} + ũ_t,  ũ_t iid,  E ũ_t ũ_t' = Q̃,
  Y_t^data = H̃ x̃_t + w_t,  w_t iid,  E w_t w_t' = R.

Here, H̃ selects the elements of x̃_t needed to construct Y_t^data.
Can easily handle the distinction between quarterly data that represent monthly averages (as with flow variables) and point-in-time observations on one month in the quarter (as with stock variables).
Can use the Kalman filter to forecast ("nowcast") current-quarter data based on the first month's (day's, week's) observations.

13 Connection Between DSGEs and VARs
Fernandez-Villaverde, Rubio-Ramirez, Sargent, and Watson result.
Vector autoregression:

  Y_t = B_1 Y_{t−1} + B_2 Y_{t−2} + ... + u_t,

where u_t is iid.
The "matching impulse response functions" strategy for building DSGE models fits VARs and assumes the u_t are a rotation of the economic shocks (for details, see later notes).
Can use the state space/observer representation to assess this assumption from the perspective of a DSGE.

14 Connection Between DSGEs and VARs
System (ignoring constant terms and measurement error):

  (state equation)     x_t = F x_{t−1} + D e_t,  D = [ B ; I ],
  (observer equation)  Y_t = H x_t.

Substituting:

  Y_t = H F x_{t−1} + H D e_t.

Suppose HD is square and invertible. Then

  e_t = (HD)^{−1} Y_t − (HD)^{−1} H F x_{t−1}.   (*)

Substitute the latter into the state equation:

  x_t = F x_{t−1} + D (HD)^{−1} Y_t − D (HD)^{−1} H F x_{t−1}
      = [ I − D (HD)^{−1} H ] F x_{t−1} + D (HD)^{−1} Y_t.

15 Connection Between DSGEs and VARs
We have:

  x_t = M x_{t−1} + D (HD)^{−1} Y_t,  M = [ I − D (HD)^{−1} H ] F.

If the eigenvalues of M are less than unity in absolute value,

  x_t = D (HD)^{−1} Y_t + M D (HD)^{−1} Y_{t−1} + M² D (HD)^{−1} Y_{t−2} + ...

Substituting into (*):

  e_t = (HD)^{−1} Y_t − (HD)^{−1} H F [ D (HD)^{−1} Y_{t−1} + M D (HD)^{−1} Y_{t−2} + M² D (HD)^{−1} Y_{t−3} + ... ],

or

  Y_t = B_1 Y_{t−1} + B_2 Y_{t−2} + ... + u_t,

where

  u_t = H D e_t,  B_j = H F M^{j−1} D (HD)^{−1},  j = 1, 2, ...

The latter is the VAR representation.
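The mapping from (F, H, D) to the VAR coefficients is easy to code. A sketch with illustrative matrices; note the sanity check: with H = I the state is fully observed, so M = 0 and the VAR collapses to first order with B_1 = F:

```python
import numpy as np

# Given (F, H, D) with HD square and invertible, compute M and the VAR
# coefficients B_j = H F M^{j-1} D (HD)^{-1}.  Matrices are illustrative.
F = np.array([[0.5, 0.1], [0.0, 0.3]])
D = np.array([[1.0, 0.0], [0.2, 1.0]])
H = np.eye(2)                       # fully observed state (special case)

HD_inv = np.linalg.inv(H @ D)
M = (np.eye(2) - D @ HD_inv @ H) @ F
assert np.all(np.abs(np.linalg.eigvals(M)) < 1), "invertibility condition fails"

B = [H @ F @ np.linalg.matrix_power(M, j - 1) @ D @ HD_inv for j in (1, 2, 3)]
```

In applications the eigenvalue check on M is the substantive step: if it fails, the economic shocks e_t cannot be recovered from current and past Y_t.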

16 Connection Between DSGEs and VARs
The VAR representation is:

  Y_t = B_1 Y_{t−1} + B_2 Y_{t−2} + ... + u_t,

where u_t = H D e_t and B_j = H F M^{j−1} D (HD)^{−1}, j = 1, 2, ...
Notes:
  e_t is invertible because it lies in the space of current and past Y_t's.
  The VAR is infinite-ordered.
  We assumed the system is square (the same number of elements in e_t and Y_t). Sims and Zha (Macroeconomic Dynamics) show how to recover e_t from current and past Y_t when the dimension of e_t is greater than the dimension of Y_t.

17 Quick Review of Probability Theory
Two random variables, x ∈ {x_1, x_2} and y ∈ {y_1, y_2}.
Joint distribution: p(x, y)

          x_1   x_2             x_1   x_2
  y_1    p_11  p_12   =   y_1   .05   .40
  y_2    p_21  p_22       y_2   .35   .20

where p_ij = probability that y = y_i and x = x_j.
Restriction:

  ∫∫ p(x, y) dx dy = 1.

18 Quick Review of Probability Theory
Joint distribution: p(x, y)

          x_1   x_2
  y_1     .05   .40
  y_2     .35   .20

Marginal distribution of x: p(x).
Probabilities of the various values of x without reference to the value of y:

  p(x) = { p_11 + p_21 = .40,  x = x_1
         { p_12 + p_22 = .60,  x = x_2,

or, in general,

  p(x) = ∫ p(x, y) dy.

19 Quick Review of Probability Theory
Joint distribution: p(x, y)

          x_1   x_2
  y_1     .05   .40
  y_2     .35   .20

Conditional distribution of x given y: p(x|y).
Probability of x given that the value of y is known:

  p(x_1 | y_1) = p_11 / (p_11 + p_12) = p_11 / p(y_1) = .05 / .45 ≈ .11,
  p(x_2 | y_1) = p_12 / (p_11 + p_12) = p_12 / p(y_1) = .40 / .45 ≈ .89,

or, in general,

  p(x | y) = p(x, y) / p(y).

20 Quick Review of Probability Theory
Joint distribution, p(x, y), with marginals:

          x_1          x_2
  y_1     .05          .40          p(y_1) = .45
  y_2     .35          .20          p(y_2) = .55
       p(x_1) = .40  p(x_2) = .60

Mode of the joint distribution (in the example):

  argmax_{x,y} p(x, y) = (x_2, y_1).

Modes of the marginal distributions:

  argmax_x p(x) = x_2,  argmax_y p(y) = y_2.

Note: the mode of a marginal and the mode of the joint distribution are conceptually different.
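The 2×2 example on the slides above can be reproduced in a few lines, which also makes the marginal-versus-joint-mode point concrete:

```python
import numpy as np

# Joint probabilities p(x_j, y_i) from the slides.
# Rows are y_1, y_2; columns are x_1, x_2.
p = np.array([[0.05, 0.40],
              [0.35, 0.20]])

p_x = p.sum(axis=0)              # marginal of x: [.40, .60]
p_y = p.sum(axis=1)              # marginal of y: [.45, .55]
p_x_given_y1 = p[0] / p_y[0]     # conditional p(x | y_1): [.11, .89]

joint_mode = np.unravel_index(p.argmax(), p.shape)   # (row, col) = (y_1, x_2)
```

The joint mode is (x_2, y_1), while the marginal modes are x_2 and y_2: maximizing the joint and maximizing the marginals are different operations.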

21 Maximum Likelihood Estimation
State space/observer system:

  x_{t+1} = F x_t + u_{t+1},  E u_t u_t' = Q,
  Y_t^data = a + H x_t + w_t,  E w_t w_t' = R.

The reduced-form parameters, (F, Q, a, H, R), are functions of θ.
Choose θ to maximize the likelihood, p(Y^data | θ):

  p(Y^data | θ) = p(Y_1^data, ..., Y_T^data | θ)
               = p(Y_1^data | θ) p(Y_2^data | Y_1^data, θ) ··· p(Y_T^data | Y_{T−1}^data, ..., Y_1^data, θ),

where each conditional density p(Y_t^data | Y_{t−1}^data, ..., Y_1^data, θ) is computed using the Kalman filter.
The Kalman filter is straightforward (see, e.g., Hamilton's textbook).
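A minimal sketch of the Kalman-filter likelihood for the system above. It assumes a stationary state (so the unconditional covariance initializes P) and uses the prediction-error decomposition; a production implementation would add numerical safeguards:

```python
import numpy as np

# Log likelihood of Y = [Y_1, ..., Y_T] for
#   x_t = F x_{t-1} + u_t,        E u u' = Q,
#   Y_t = a + H x_t + w_t,        E w w' = R.
def log_likelihood(Y, F, Q, a, H, R):
    n = F.shape[0]
    x = np.zeros(n)                                   # x_{0|0}
    # initial P solves the Lyapunov equation P = F P F' + Q
    P = np.linalg.solve(np.eye(n * n) - np.kron(F, F), Q.ravel()).reshape(n, n)
    ll = 0.0
    for y in Y:
        x, P = F @ x, F @ P @ F.T + Q                 # prediction
        v = y - a - H @ x                             # forecast error
        S = H @ P @ H.T + R                           # forecast error variance
        ll += -0.5 * (len(y) * np.log(2 * np.pi)
                      + np.linalg.slogdet(S)[1] + v @ np.linalg.solve(S, v))
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x, P = x + K @ v, P - K @ H @ P               # update
    return ll

# Illustrative scalar system (values are hypothetical):
ll = log_likelihood([np.array([0.3]), np.array([-0.1]), np.array([0.7])],
                    np.array([[0.5]]), np.array([[1.0]]), np.array([0.0]),
                    np.array([[1.0]]), np.array([[0.5]]))
```

Maximum likelihood estimation of θ would wrap this function in a numerical optimizer, re-building (F, Q, a, H, R) from each candidate θ.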

22 Bayesian Inference
Bayesian inference is about describing the mapping from prior beliefs about θ, summarized in p(θ), to new posterior beliefs in the light of observing the data, Y^data.
General property of probabilities:

  p(Y^data, θ) = p(Y^data | θ) p(θ) = p(θ | Y^data) p(Y^data),

which implies Bayes' rule:

  p(θ | Y^data) = p(Y^data | θ) p(θ) / p(Y^data),

the mapping from prior to posterior induced by Y^data.

23 Bayesian Inference
Report features of the posterior distribution, p(θ | Y^data):
  The value of θ that maximizes p(θ | Y^data): the mode of the posterior distribution.
  Compare the marginal prior, p(θ_i), with the marginal posterior of individual elements of θ,

    g(θ_i | Y^data) = ∫ p(θ | Y^data) dθ_{j≠i}  (multiple integration!).

  Probability intervals about the mode of θ ("Bayesian confidence intervals") require g(θ_i | Y^data).
  Marginal likelihood, for assessing model "fit":

    p(Y^data) = ∫ p(Y^data | θ) p(θ) dθ  (multiple integration).

24 Monte Carlo Integration: Simple Example
Much of Bayesian inference is about multiple integration.
Numerical methods for multiple integration:
  Quadrature integration (example: approximating the integral as the sum of the areas of triangles beneath the integrand).
  Monte Carlo integration: uses a random number generator.
Example of Monte Carlo integration: suppose you want to evaluate

  ∫_a^b f(x) dx,  −∞ < a < b < ∞.

Select a density function, g(x), for x ∈ [a, b] and note:

  ∫_a^b f(x) dx = ∫_a^b [ f(x) / g(x) ] g(x) dx = E[ f(x) / g(x) ],

where E is the expectation operator, given g(x).

25 Monte Carlo Integration: Simple Example
Previous result: can express an integral as an expectation relative to an (arbitrary, subject to obvious regularity conditions) density function.
Use the law of large numbers (LLN) to approximate the expectation:
  step 1: draw x_i independently from the density g, for i = 1, ..., M;
  step 2: evaluate f(x_i)/g(x_i) and compute

    μ_M ≡ (1/M) Σ_{i=1}^M f(x_i)/g(x_i)  →  E[ f(x)/g(x) ]  as M → ∞.

Exercise. Consider an integral for which you have an analytic solution available, e.g., ∫_0^1 x² dx. Evaluate the accuracy of the Monte Carlo method using various distributions on [0, 1], like the uniform or the Beta.
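The exercise above can be sketched as follows. The true value is 1/3; the Beta(2, 1) density, g(x) = 2x, tracks f(x) = x² more closely than the uniform, so its ratio f/g has lower variance:

```python
import numpy as np

# Monte Carlo evaluation of the integral of x^2 over [0, 1] (true value 1/3)
# under two choices of the sampling density g.
rng = np.random.default_rng(1)
M = 200_000

# g = uniform on [0, 1]: g(x) = 1, so the ratio is x^2
x = rng.uniform(size=M)
mu_uniform = np.mean(x**2 / 1.0)

# g = Beta(2, 1): g(x) = 2x, so the ratio is x^2 / (2x) = x/2
x = rng.beta(2.0, 1.0, size=M)
mu_beta = np.mean(x**2 / (2.0 * x))
```

Both estimates converge to 1/3; the Beta-based estimate does so with a smaller Monte Carlo standard error, previewing the variance-minimization point on the next slide.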

26 Monte Carlo Integration: Simple Example
Standard classical sampling theory applies.
Independence of f(x_i)/g(x_i) over i implies:

  var( (1/M) Σ_{i=1}^M f(x_i)/g(x_i) ) = v_M / M,

  v_M ≡ var( f(x_i)/g(x_i) ) ≈ (1/M) Σ_{i=1}^M [ f(x_i)/g(x_i) − μ_M ]².

Central limit theorem: the estimate of ∫_a^b f(x) dx is a realization from a Normal distribution with mean estimated by μ_M and variance v_M/M. With 95% probability,

  μ_M − 1.96 √(v_M/M)  ≤  ∫_a^b f(x) dx  ≤  μ_M + 1.96 √(v_M/M).

Pick g to minimize the variance of f(x_i)/g(x_i), and M to minimize (subject to computing cost) v_M/M.

27 Markov Chain Monte Carlo (MCMC) Algorithms
Among the top 10 algorithms "with the greatest influence on the development and practice of science and engineering in the 20th century."
Reference: the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society.
Developed in 1946 by John von Neumann, Stan Ulam, and Nick Metropolis.

28 MCMC Algorithm: Overview
Compute a sequence, θ^(1), θ^(2), ..., θ^(M), of values of the N×1 vector of model parameters in such a way that

  lim_{M→∞} Frequency[ θ^(i) close to θ ] = p(θ | Y^data).

Use θ^(1), θ^(2), ..., θ^(M) to obtain an approximation for:
  Eθ and Var(θ) under the posterior distribution, p(θ | Y^data);
  g(θ_i | Y^data) = ∫ p(θ | Y^data) dθ_{j≠i};
  p(Y^data) = ∫ p(Y^data | θ) p(θ) dθ;
  the posterior distribution of any function of θ, f(θ) (e.g., impulse response functions, second moments).
MCMC is also useful for computing the posterior mode, argmax_θ p(θ | Y^data).

29 MCMC Algorithm: Setting Up
Let G(θ) denote the log of the posterior distribution (excluding an additive constant):

  G(θ) = log p(Y^data | θ) + log p(θ).

Compute the posterior mode:

  θ* = argmax_θ G(θ).

Compute the positive definite matrix V:

  V ≡ [ −∂²G(θ)/∂θ∂θ' ]^{−1},  evaluated at θ = θ*.

Later, we will see that V is a rough estimate of the variance-covariance matrix of θ under the posterior distribution.

30 MCMC Algorithm: Metropolis-Hastings
Set θ^(1) = θ*. To compute θ^(r), for r > 1:
  step 1: select an N×1 candidate, x, drawn from the jump distribution,

    x = θ^(r−1) + k η,  η ~ N(0, V),  k a scalar;

  step 2: compute the scalar

    λ = p(Y^data | x) p(x) / [ p(Y^data | θ^(r−1)) p(θ^(r−1)) ];

  step 3: compute θ^(r):

    θ^(r) = θ^(r−1) if u > λ,
    θ^(r) = x       if u < λ,

  where u is a realization from the uniform distribution on [0, 1].
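The three steps above can be sketched for a scalar case. Here the log posterior G(θ) is replaced by a known N(1, 0.5²) density, so the chain's output can be checked against the truth; k and V are illustrative tuning values, not derived from any Hessian:

```python
import numpy as np

# Random-walk Metropolis sketch for a scalar "posterior" N(1, 0.5^2).
def log_post(theta):
    return -0.5 * ((theta - 1.0) / 0.5) ** 2      # log density up to a constant

rng = np.random.default_rng(2)
M, k, V = 50_000, 1.0, 0.25                       # illustrative tuning values
draws = np.empty(M)
theta, accepted = 0.0, 0                          # start from a mode guess
for r in range(M):
    x = theta + k * rng.normal(0.0, np.sqrt(V))   # step 1: candidate
    lam = np.exp(min(0.0, log_post(x) - log_post(theta)))   # step 2: ratio
    if rng.uniform() < lam:                       # step 3: accept or stay
        theta, accepted = x, accepted + 1
    draws[r] = theta
```

After discarding a burn-in, the histogram of `draws` approximates the target, and the sample mean and standard deviation approach 1 and 0.5.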

31 Practical Issues
What is a sensible value for k?
  Set k so that acceptances (i.e., θ^(r) = x in step 3 of the MCMC algorithm) occur roughly 23 percent of the time.
What value of M should you set?
  Want convergence, in the sense that if M is increased further, the econometric results do not change substantially.
  In practice, M = 10,000 (a small value) up to M = 1,000,000. Large M is time-consuming.
  Could use the Laplace approximation (after checking its accuracy) in the initial phases of a research project. More on the Laplace approximation below.
Burn-in: in practice, some initial θ^(i)'s are discarded to minimize the impact of initial conditions on the results.
Multiple chains: may promote efficiency; increase independence among the θ^(i)'s; can do MCMC utilizing parallel computing (Dynare can do this).

32 MCMC Algorithm: Why Does it Work?
The proposition that MCMC works may be surprising.
Whether or not it works does not depend on the details, i.e., on precisely how you choose the jump distribution (of course, you had better use k > 0 and V positive definite).
Proof: see, e.g., Robert, C. P. (2001), The Bayesian Choice, Second Edition, New York: Springer-Verlag.
The details may matter by improving the efficiency of the MCMC algorithm, i.e., by influencing what value of M you need.
Some intuition: the sequence, θ^(1), θ^(2), ..., θ^(M), is relatively heavily populated by θ's that have high probability and relatively lightly populated by low-probability θ's.
Additional intuition can be obtained by positing a simple scalar distribution and using MATLAB to verify that MCMC approximates it well (see, e.g., question 2 in assignment 9).

33 MCMC Algorithm: Using the Results
To approximate the marginal posterior distribution, g(θ_i | Y^data), of θ_i, compute and display the histogram of θ_i^(1), θ_i^(2), ..., θ_i^(M), for each i.
Other objects of interest: the mean and variance of the posterior distribution of θ:

  Eθ ≈ θ̄ ≡ (1/M) Σ_{j=1}^M θ^(j),

  Var(θ) ≈ (1/M) Σ_{j=1}^M [ θ^(j) − θ̄ ][ θ^(j) − θ̄ ]'.

34 MCMC Algorithm: Using the Results
More complicated objects of interest: impulse response functions, model second moments, forecasts, Kalman-smoothed estimates of the real rate, the natural rate, etc.
All these things can be represented as non-linear functions of the model parameters, f(θ).
Can approximate the distribution of f(θ) using f(θ^(1)), ..., f(θ^(M)):

  Ef(θ) ≈ f̄ ≡ (1/M) Σ_{i=1}^M f(θ^(i)),

  Var(f(θ)) ≈ (1/M) Σ_{i=1}^M [ f(θ^(i)) − f̄ ][ f(θ^(i)) − f̄ ]'.

35 MCMC: Remaining Issues
In addition to the first and second moments already discussed, we would also like to have the marginal likelihood of the data.
The marginal likelihood is a Bayesian measure of model fit.

36 MCMC Algorithm: the Marginal Likelihood
Consider the following sample average:

  (1/M) Σ_{j=1}^M h(θ^(j)) / [ p(Y^data | θ^(j)) p(θ^(j)) ],

where h(θ) is an arbitrary density function over the N-dimensional variable θ.
By the law of large numbers,

  (1/M) Σ_{j=1}^M h(θ^(j)) / [ p(Y^data | θ^(j)) p(θ^(j)) ]  →  E[ h(θ) / ( p(Y^data | θ) p(θ) ) ]  as M → ∞,

where the expectation is taken under the posterior distribution.

37 MCMC Algorithm: the Marginal Likelihood

  E[ h(θ) / ( p(Y^data | θ) p(θ) ) ]
    = ∫ [ h(θ) / ( p(Y^data | θ) p(θ) ) ] [ p(Y^data | θ) p(θ) / p(Y^data) ] dθ
    = 1 / p(Y^data).

When h(θ) = p(θ), this is the harmonic mean estimator of the marginal likelihood.
Ideally, we want an h such that the variance of h(θ^(j)) / [ p(Y^data | θ^(j)) p(θ^(j)) ] is small (recall the earlier discussion of Monte Carlo integration). More on this below.

38 Laplace Approximation to Posterior Distribution
In practice, the MCMC algorithm is very time-intensive. The Laplace approximation is easy to compute, and in many cases it provides a quick and dirty approximation that is quite good.
Let θ ∈ R^N denote the N-dimensional vector of parameters and, as before,

  G(θ) ≡ log [ p(Y^data | θ) p(θ) ],

where
  p(Y^data | θ) is the likelihood of the data,
  p(θ) is the prior on the parameters,
  θ* is the maximizer of G(θ) (i.e., the mode).

39 Laplace Approximation
Second-order Taylor series expansion of G(θ) ≡ log [ p(Y^data | θ) p(θ) ] about θ = θ*:

  G(θ) ≈ G(θ*) + G_θ(θ*)(θ − θ*) − (1/2)(θ − θ*)' G_θθ(θ*)(θ − θ*),

where

  G_θθ(θ*) = − ∂² log [ p(Y^data | θ) p(θ) ] / ∂θ∂θ',  evaluated at θ = θ*.

Interior optimality of θ* implies:

  G_θ(θ*) = 0,  G_θθ(θ*) positive definite.

Then:

  p(Y^data | θ) p(θ) ≈ p(Y^data | θ*) p(θ*) exp{ −(1/2)(θ − θ*)' G_θθ(θ*)(θ − θ*) }.

40 Laplace Approximation to Posterior Distribution
Property of the Normal distribution:

  ∫ (2π)^{−N/2} |G_θθ(θ*)|^{1/2} exp{ −(1/2)(θ − θ*)' G_θθ(θ*)(θ − θ*) } dθ = 1.

Then,

  ∫ p(Y^data | θ) p(θ) dθ ≈ ∫ p(Y^data | θ*) p(θ*) exp{ −(1/2)(θ − θ*)' G_θθ(θ*)(θ − θ*) } dθ
                          = p(Y^data | θ*) p(θ*) (2π)^{N/2} |G_θθ(θ*)|^{−1/2}.

41 Laplace Approximation
Conclude:

  p(Y^data) ≈ p(Y^data | θ*) p(θ*) (2π)^{N/2} |G_θθ(θ*)|^{−1/2}.

Laplace approximation to the posterior distribution:

  p(Y^data | θ) p(θ) / p(Y^data) ≈ (2π)^{−N/2} |G_θθ(θ*)|^{1/2} exp{ −(1/2)(θ − θ*)' G_θθ(θ*)(θ − θ*) }.

So the posterior of θ_i (i.e., g(θ_i | Y^data)) is approximately

  θ_i ~ N( θ*_i, [ G_θθ(θ*)^{−1} ]_{ii} ).
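The formula above can be sketched in a conjugate case where it is exact: y_i ~ N(θ, 1) with prior θ ~ N(0, 1), so G is quadratic and the mode and Hessian are analytic. The data values are hypothetical; in a DSGE application θ* and G_θθ(θ*) would come from a numerical optimizer and numerical differentiation:

```python
import numpy as np

# Laplace approximation of log p(Y) for y_i ~ N(theta, 1), theta ~ N(0, 1).
y = np.array([0.8, 1.2, 0.5, 1.0])     # illustrative data
T, N = len(y), 1

def G(theta):                          # log p(Y|theta) + log p(theta)
    return (-0.5 * T * np.log(2 * np.pi) - 0.5 * np.sum((y - theta) ** 2)
            - 0.5 * np.log(2 * np.pi) - 0.5 * theta ** 2)

theta_star = y.sum() / (T + 1)         # posterior mode (analytic here)
G_qq = T + 1.0                         # minus the second derivative of G

# log p(Y) ~= G(theta*) + (N/2) log(2 pi) - (1/2) log |G_qq|
log_pY_laplace = G(theta_star) + 0.5 * N * np.log(2 * np.pi) - 0.5 * np.log(G_qq)
```

Because the integrand is exactly Gaussian in θ here, the Laplace value coincides with the true marginal likelihood; with a non-Gaussian posterior it is only an approximation.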

42 Modified Harmonic Mean Estimator of the Marginal Likelihood
Estimator of the marginal likelihood, p(Y^data):

  [ (1/M) Σ_{j=1}^M h(θ^(j)) / ( p(Y^data | θ^(j)) p(θ^(j)) ) ]^{−1},

with h(θ) set to p(θ). In this case, the marginal likelihood is estimated by the harmonic mean of the likelihood, evaluated at the values of θ generated by the MCMC algorithm.
Problem: the variance of the object being averaged is likely to be high, requiring a high M for accuracy.
When h(θ) is instead equated to the Laplace approximation of the posterior distribution, then h(θ^(j)) is approximately proportional to p(Y^data | θ^(j)) p(θ^(j)), so that the variance of the variable being averaged in the last expression is low.
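The variance point can be made concrete in the same conjugate example (y_i ~ N(θ, 1), θ ~ N(0, 1)), where exact posterior draws stand in for MCMC output and h is the Gaussian posterior approximation. Since h here equals the true posterior, the averaged ratio is constant, so the estimator has essentially zero Monte Carlo variance:

```python
import numpy as np

# Modified harmonic mean estimator of log p(Y) in a conjugate example.
rng = np.random.default_rng(3)
y = np.array([0.8, 1.2, 0.5, 1.0])                 # illustrative data
T = len(y)
m, v = y.sum() / (T + 1), 1.0 / (T + 1)            # exact posterior moments
draws = rng.normal(m, np.sqrt(v), size=20_000)     # stand-in for MCMC draws

def log_lik_prior(th):                             # log p(Y|theta) p(theta)
    return (-0.5 * T * np.log(2 * np.pi) - 0.5 * ((y[:, None] - th) ** 2).sum(0)
            - 0.5 * np.log(2 * np.pi) - 0.5 * th ** 2)

def log_h(th):                                     # Gaussian "Laplace" h
    return -0.5 * np.log(2 * np.pi * v) - 0.5 * (th - m) ** 2 / v

# p(Y) is 1 over the average of h(theta) / [p(Y|theta) p(theta)]:
ratio = np.exp(log_h(draws) - log_lik_prior(draws))
log_pY = -np.log(ratio.mean())
```

With h(θ) = p(θ) instead, the ratio would vary wildly across draws in the likelihood tails, which is exactly the high-variance problem the modified estimator avoids.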

43 The Marginal Likelihood and Model Comparison
Suppose we have two models, Model 1 and Model 2: compute p(Y^data | Model 1) and p(Y^data | Model 2).
Suppose p(Y^data | Model 1) > p(Y^data | Model 2). Then the posterior odds on Model 1 are higher than on Model 2: Model 1 "fits better" than Model 2.
Can use this to compare two different models, or to evaluate the contribution to fit of various model features: habit persistence, adjustment costs, etc.
For an application of this and the other methods in these notes, see Smets and Wouters, AER 2007.


More information

Long-Run Covariability

Long-Run Covariability Long-Run Covariability Ulrich K. Müller and Mark W. Watson Princeton University October 2016 Motivation Study the long-run covariability/relationship between economic variables great ratios, long-run Phillips

More information

Shrinkage in Set-Identified SVARs

Shrinkage in Set-Identified SVARs Shrinkage in Set-Identified SVARs Alessio Volpicella (Queen Mary, University of London) 2018 IAAE Annual Conference, Université du Québec à Montréal (UQAM) and Université de Montréal (UdeM) 26-29 June

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

DSGE MODELS WITH STUDENT-t ERRORS

DSGE MODELS WITH STUDENT-t ERRORS Econometric Reviews, 33(1 4):152 171, 2014 Copyright Taylor & Francis Group, LLC ISSN: 0747-4938 print/1532-4168 online DOI: 10.1080/07474938.2013.807152 DSGE MODELS WITH STUDENT-t ERRORS Siddhartha Chib

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate

More information

Sentiment Shocks as Drivers of Business Cycles

Sentiment Shocks as Drivers of Business Cycles Sentiment Shocks as Drivers of Business Cycles Agustín H. Arias October 30th, 2014 Abstract This paper studies the role of sentiment shocks as a source of business cycle fluctuations. Considering a standard

More information

Equilibrium Conditions for the Simple New Keynesian Model

Equilibrium Conditions for the Simple New Keynesian Model Equilibrium Conditions for the Simple New Keynesian Model Lawrence J. Christiano August 4, 04 Baseline NK model with no capital and with a competitive labor market. private sector equilibrium conditions

More information

Lecture 4 The Centralized Economy: Extensions

Lecture 4 The Centralized Economy: Extensions Lecture 4 The Centralized Economy: Extensions Leopold von Thadden University of Mainz and ECB (on leave) Advanced Macroeconomics, Winter Term 2013 1 / 36 I Motivation This Lecture considers some applications

More information

MCMC Sampling for Bayesian Inference using L1-type Priors

MCMC Sampling for Bayesian Inference using L1-type Priors MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling

More information

High-dimensional Problems in Finance and Economics. Thomas M. Mertens

High-dimensional Problems in Finance and Economics. Thomas M. Mertens High-dimensional Problems in Finance and Economics Thomas M. Mertens NYU Stern Risk Economics Lab April 17, 2012 1 / 78 Motivation Many problems in finance and economics are high dimensional. Dynamic Optimization:

More information

ECO 513 Fall 2009 C. Sims HIDDEN MARKOV CHAIN MODELS

ECO 513 Fall 2009 C. Sims HIDDEN MARKOV CHAIN MODELS ECO 513 Fall 2009 C. Sims HIDDEN MARKOV CHAIN MODELS 1. THE CLASS OF MODELS y t {y s, s < t} p(y t θ t, {y s, s < t}) θ t = θ(s t ) P[S t = i S t 1 = j] = h ij. 2. WHAT S HANDY ABOUT IT Evaluating the

More information

Bayesian Estimation of DSGE Models

Bayesian Estimation of DSGE Models Bayesian Estimation of DSGE Models Stéphane Adjemian Université du Maine, GAINS & CEPREMAP stephane.adjemian@univ-lemans.fr http://www.dynare.org/stepan June 28, 2011 June 28, 2011 Université du Maine,

More information

MARKOV CHAIN MONTE CARLO

MARKOV CHAIN MONTE CARLO MARKOV CHAIN MONTE CARLO RYAN WANG Abstract. This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with

More information

DSGE Models in a Liquidity Trap and Japan s Lost Decade

DSGE Models in a Liquidity Trap and Japan s Lost Decade DSGE Models in a Liquidity Trap and Japan s Lost Decade Koiti Yano Economic and Social Research Institute ESRI International Conference 2009 June 29, 2009 1 / 27 Definition of a Liquidity Trap Terminology

More information

Bayesian Linear Regression [DRAFT - In Progress]

Bayesian Linear Regression [DRAFT - In Progress] Bayesian Linear Regression [DRAFT - In Progress] David S. Rosenberg Abstract Here we develop some basics of Bayesian linear regression. Most of the calculations for this document come from the basic theory

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1

Parameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1 Parameter Estimation William H. Jefferys University of Texas at Austin bill@bayesrules.net Parameter Estimation 7/26/05 1 Elements of Inference Inference problems contain two indispensable elements: Data

More information

Parameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn

Parameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation

More information

Lecture : Probabilistic Machine Learning

Lecture : Probabilistic Machine Learning Lecture : Probabilistic Machine Learning Riashat Islam Reasoning and Learning Lab McGill University September 11, 2018 ML : Many Methods with Many Links Modelling Views of Machine Learning Machine Learning

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

I. Bayesian econometrics

I. Bayesian econometrics I. Bayesian econometrics A. Introduction B. Bayesian inference in the univariate regression model C. Statistical decision theory D. Large sample results E. Diffuse priors F. Numerical Bayesian methods

More information

Machine Learning. Probability Basics. Marc Toussaint University of Stuttgart Summer 2014

Machine Learning. Probability Basics. Marc Toussaint University of Stuttgart Summer 2014 Machine Learning Probability Basics Basic definitions: Random variables, joint, conditional, marginal distribution, Bayes theorem & examples; Probability distributions: Binomial, Beta, Multinomial, Dirichlet,

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods Tomas McKelvey and Lennart Svensson Signal Processing Group Department of Signals and Systems Chalmers University of Technology, Sweden November 26, 2012 Today s learning

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

Macroeconomics Theory II

Macroeconomics Theory II Macroeconomics Theory II Francesco Franco FEUNL February 2016 Francesco Franco Macroeconomics Theory II 1/23 Housekeeping. Class organization. Website with notes and papers as no "Mas-Collel" in macro

More information

Lecture 8: Bayesian Estimation of Parameters in State Space Models

Lecture 8: Bayesian Estimation of Parameters in State Space Models in State Space Models March 30, 2016 Contents 1 Bayesian estimation of parameters in state space models 2 Computational methods for parameter estimation 3 Practical parameter estimation in state space

More information

Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks

Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks Regis Barnichon (CREI, Universitat Pompeu Fabra) Christian Matthes (Richmond Fed) Effects of monetary

More information

Bayesian Estimation of Input Output Tables for Russia

Bayesian Estimation of Input Output Tables for Russia Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo Markov Chain Monte Carlo Recall: To compute the expectation E ( h(y ) ) we use the approximation E(h(Y )) 1 n n h(y ) t=1 with Y (1),..., Y (n) h(y). Thus our aim is to sample Y (1),..., Y (n) from f(y).

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

Bayesian Estimation with Sparse Grids

Bayesian Estimation with Sparse Grids Bayesian Estimation with Sparse Grids Kenneth L. Judd and Thomas M. Mertens Institute on Computational Economics August 7, 27 / 48 Outline Introduction 2 Sparse grids Construction Integration with sparse

More information

Financial Factors in Economic Fluctuations. Lawrence Christiano Roberto Motto Massimo Rostagno

Financial Factors in Economic Fluctuations. Lawrence Christiano Roberto Motto Massimo Rostagno Financial Factors in Economic Fluctuations Lawrence Christiano Roberto Motto Massimo Rostagno Background Much progress made on constructing and estimating models that fit quarterly data well (Smets-Wouters,

More information

Part 8: GLMs and Hierarchical LMs and GLMs

Part 8: GLMs and Hierarchical LMs and GLMs Part 8: GLMs and Hierarchical LMs and GLMs 1 Example: Song sparrow reproductive success Arcese et al., (1992) provide data on a sample from a population of 52 female song sparrows studied over the course

More information

Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach

Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach Vol. 3, No.3, July 2013, pp. 372 383 ISSN: 2225-8329 2013 HRMARS www.hrmars.com Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach Alexie ALUPOAIEI 1 Ana-Maria

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As

More information

Economics Discussion Paper Series EDP Measuring monetary policy deviations from the Taylor rule

Economics Discussion Paper Series EDP Measuring monetary policy deviations from the Taylor rule Economics Discussion Paper Series EDP-1803 Measuring monetary policy deviations from the Taylor rule João Madeira Nuno Palma February 2018 Economics School of Social Sciences The University of Manchester

More information

The Bayesian Approach to Multi-equation Econometric Model Estimation

The Bayesian Approach to Multi-equation Econometric Model Estimation Journal of Statistical and Econometric Methods, vol.3, no.1, 2014, 85-96 ISSN: 2241-0384 (print), 2241-0376 (online) Scienpress Ltd, 2014 The Bayesian Approach to Multi-equation Econometric Model Estimation

More information

Title. Description. var intro Introduction to vector autoregressive models

Title. Description. var intro Introduction to vector autoregressive models Title var intro Introduction to vector autoregressive models Description Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference on vector autoregressive (VAR) models

More information

The Ising model and Markov chain Monte Carlo

The Ising model and Markov chain Monte Carlo The Ising model and Markov chain Monte Carlo Ramesh Sridharan These notes give a short description of the Ising model for images and an introduction to Metropolis-Hastings and Gibbs Markov Chain Monte

More information

Monte Carlo Methods. Leon Gu CSD, CMU

Monte Carlo Methods. Leon Gu CSD, CMU Monte Carlo Methods Leon Gu CSD, CMU Approximate Inference EM: y-observed variables; x-hidden variables; θ-parameters; E-step: q(x) = p(x y, θ t 1 ) M-step: θ t = arg max E q(x) [log p(y, x θ)] θ Monte

More information

David Giles Bayesian Econometrics

David Giles Bayesian Econometrics David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian

More information

Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models

Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models Fall 22 Contents Introduction 2. An illustrative example........................... 2.2 Discussion...................................

More information

Bayesian Model Comparison:

Bayesian Model Comparison: Bayesian Model Comparison: Modeling Petrobrás log-returns Hedibert Freitas Lopes February 2014 Log price: y t = log p t Time span: 12/29/2000-12/31/2013 (n = 3268 days) LOG PRICE 1 2 3 4 0 500 1000 1500

More information

Lecture 6: Markov Chain Monte Carlo

Lecture 6: Markov Chain Monte Carlo Lecture 6: Markov Chain Monte Carlo D. Jason Koskinen koskinen@nbi.ku.dk Photo by Howard Jackman University of Copenhagen Advanced Methods in Applied Statistics Feb - Apr 2016 Niels Bohr Institute 2 Outline

More information

Matching DSGE models,vars, and state space models. Fabio Canova EUI and CEPR September 2012

Matching DSGE models,vars, and state space models. Fabio Canova EUI and CEPR September 2012 Matching DSGE models,vars, and state space models Fabio Canova EUI and CEPR September 2012 Outline Alternative representations of the solution of a DSGE model. Fundamentalness and finite VAR representation

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Probabilities Marc Toussaint University of Stuttgart Winter 2018/19 Motivation: AI systems need to reason about what they know, or not know. Uncertainty may have so many sources:

More information

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester

Physics 403. Segev BenZvi. Numerical Methods, Maximum Likelihood, and Least Squares. Department of Physics and Astronomy University of Rochester Physics 403 Numerical Methods, Maximum Likelihood, and Least Squares Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Review of Last Class Quadratic Approximation

More information

Answers and expectations

Answers and expectations Answers and expectations For a function f(x) and distribution P(x), the expectation of f with respect to P is The expectation is the average of f, when x is drawn from the probability distribution P E

More information

Open Economy Macroeconomics: Theory, methods and applications

Open Economy Macroeconomics: Theory, methods and applications Open Economy Macroeconomics: Theory, methods and applications Lecture 4: The state space representation and the Kalman Filter Hernán D. Seoane UC3M January, 2016 Today s lecture State space representation

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

Differentiation and Integration

Differentiation and Integration Differentiation and Integration (Lectures on Numerical Analysis for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 12, 2018 1 University of Pennsylvania 2 Boston College Motivation

More information

Chapter 1. Introduction. 1.1 Background

Chapter 1. Introduction. 1.1 Background Chapter 1 Introduction Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science. Henri

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) School of Computer Science 10-708 Probabilistic Graphical Models Markov Chain Monte Carlo (MCMC) Readings: MacKay Ch. 29 Jordan Ch. 21 Matt Gormley Lecture 16 March 14, 2016 1 Homework 2 Housekeeping Due

More information

Markov Networks.

Markov Networks. Markov Networks www.biostat.wisc.edu/~dpage/cs760/ Goals for the lecture you should understand the following concepts Markov network syntax Markov network semantics Potential functions Partition function

More information

Prediction with Misspeci ed Models. John Geweke* and Gianni Amisano**

Prediction with Misspeci ed Models. John Geweke* and Gianni Amisano** Prediction with Misspeci ed Models John Geweke* and Gianni Amisano** Many decision-makers in the public and private sectors routinely consult the implications of formal economic and statistical models

More information

Bayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems

Bayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems Bayesian Methods and Uncertainty Quantification for Nonlinear Inverse Problems John Bardsley, University of Montana Collaborators: H. Haario, J. Kaipio, M. Laine, Y. Marzouk, A. Seppänen, A. Solonen, Z.

More information