Bayesian Estimation of DSGE Models, Chapter 3: A Crash Course in Bayesian Inference
1 Bayesian Estimation of DSGE Models, Chapter 3: A Crash Course in Bayesian Inference. Ed Herbst (Federal Reserve Board) and Frank Schorfheide (University of Pennsylvania). February 5, 2016. The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System.
2 Topics. Deriving a posterior distribution in a linear regression model; direct sampling; Bayesian decision making.
3 Bayesian Inference. Ingredients of Bayesian analysis: the likelihood function $p(Y|\phi)$, the prior density $p(\phi)$, and the marginal data density $p(Y) = \int p(Y|\phi)p(\phi)\,d\phi$. Bayes Theorem: $p(\phi|Y) = \frac{p(Y|\phi)\,p(\phi)}{p(Y)}$.
4 Linear Regression / AR Models. Consider the AR(1) model $y_t = \phi y_{t-1} + u_t$, $u_t \sim iid\,N(0,1)$. Let $x_t = y_{t-1}$ and write $y_t = x_t'\phi + u_t$, or in matrix form $Y = X\phi + U$. We can easily allow for multiple regressors; assume $\phi$ is $k \times 1$. Notice: we treat the variance of the errors as known. The generalization to unknown variance is straightforward but tedious. Likelihood function: $p(Y|\phi) = (2\pi)^{-T/2} \exp\left\{ -\tfrac{1}{2}(Y - X\phi)'(Y - X\phi) \right\}$.
5 A Convenient Prior. Prior: $\phi \sim N(0_{k\times 1}, \tau^2 I_{k\times k})$, with density $p(\phi) = (2\pi\tau^2)^{-k/2} \exp\left\{ -\tfrac{1}{2\tau^2}\phi'\phi \right\}$. A large $\tau$ means a diffuse prior; a small $\tau$ means a tight prior.
6 Deriving the Posterior. Bayes Theorem: $p(\phi|Y) \propto p(Y|\phi)p(\phi) \propto \exp\left\{ -\tfrac{1}{2}\left[ (Y - X\phi)'(Y - X\phi) + \tau^{-2}\phi'\phi \right] \right\}$. Guess: what if $\phi|Y \sim N(\bar\phi_T, \bar V_T)$? Then $p(\phi|Y) \propto \exp\left\{ -\tfrac{1}{2}(\phi - \bar\phi_T)'\bar V_T^{-1}(\phi - \bar\phi_T) \right\}$. Rewrite the exponential term: $Y'Y - \phi'X'Y - Y'X\phi + \phi'X'X\phi + \tau^{-2}\phi'\phi = Y'Y - \phi'X'Y - Y'X\phi + \phi'(X'X + \tau^{-2}I)\phi = \left( \phi - (X'X + \tau^{-2}I)^{-1}X'Y \right)'\left(X'X + \tau^{-2}I\right)\left( \phi - (X'X + \tau^{-2}I)^{-1}X'Y \right) + Y'Y - Y'X(X'X + \tau^{-2}I)^{-1}X'Y$.
7 Deriving the Posterior. The exponential term is a quadratic function of $\phi$. Deduce: the posterior distribution of $\phi$ must be a multivariate normal distribution, $\phi|Y \sim N(\bar\phi_T, \bar V_T)$, with $\bar\phi_T = (X'X + \tau^{-2}I)^{-1}X'Y$ and $\bar V_T = (X'X + \tau^{-2}I)^{-1}$. As $\tau \to \infty$: $\phi|Y \approx N\big(\hat\phi_{mle}, (X'X)^{-1}\big)$. As $\tau \to 0$: $\phi|Y \approx$ point mass at 0.
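Because the posterior moments above are simple matrix expressions, they can be computed directly. A minimal sketch in Python, assuming simulated AR(1) data and illustrative values for $T$, $\tau$, and the true coefficient (none of these values come from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi_true, tau = 200, 0.9, 1.0  # illustrative values, not from the slides

# Simulate T observations from the AR(1) model y_t = phi * y_{t-1} + u_t.
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()
Y, X = y[1:, None], y[:-1, None]  # regressand and regressor x_t = y_{t-1}

# Posterior: phi | Y ~ N(phibar_T, Vbar_T), using the formulas from the slide.
Vbar_T = np.linalg.inv(X.T @ X + tau**-2 * np.eye(X.shape[1]))
phibar_T = Vbar_T @ X.T @ Y
print("posterior mean:", phibar_T.ravel(), " posterior variance:", Vbar_T.ravel())
```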
8 Marginal Data Density. Plays an important role in Bayesian model selection and averaging. Write $p(Y) = \frac{p(Y|\phi)p(\phi)}{p(\phi|Y)} = (2\pi)^{-T/2}\, |I + \tau^2 X'X|^{-1/2} \exp\left\{ -\tfrac{1}{2}\left[ Y'Y - Y'X(X'X + \tau^{-2}I)^{-1}X'Y \right] \right\}$. The exponential term measures the goodness-of-fit; $|I + \tau^2 X'X|$ is a penalty for model complexity.
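Because the marginal data density is available in closed form here, its log can be evaluated directly. A sketch under the same simulated-data assumptions as above:

```python
import numpy as np

rng = np.random.default_rng(0)
T, tau = 200, 1.0  # illustrative values
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()
Y, X = y[1:, None], y[:-1, None]

k = X.shape[1]
A = X.T @ X + tau**-2 * np.eye(k)
quad = float(Y.T @ Y - Y.T @ X @ np.linalg.solve(A, X.T @ Y))  # goodness-of-fit term
_, logdet = np.linalg.slogdet(np.eye(k) + tau**2 * (X.T @ X))  # complexity penalty
log_mdd = -0.5 * T * np.log(2 * np.pi) - 0.5 * logdet - 0.5 * quad
print("log marginal data density:", log_mdd)
```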
9 Posterior. We will often abbreviate posterior distributions $p(\phi|Y)$ by $\pi(\phi)$ and posterior expectations of $h(\phi)$ by $E_\pi[h] = E_\pi[h(\phi)] = \int h(\phi)\pi(\phi)\,d\phi = \int h(\phi)p(\phi|Y)\,d\phi$. We will focus on algorithms that generate draws $\{\phi^i\}_{i=1}^N$ from posterior distributions of parameters in time series models. These draws can then be transformed into objects of interest, $h(\phi^i)$, and under suitable conditions a Monte Carlo average of the form $\bar h_N = \frac{1}{N}\sum_{i=1}^N h(\phi^i)$ approximates $E_\pi[h]$. Strong law of large numbers (SLLN), central limit theorem (CLT)...
10 Direct Sampling. In the simple linear regression model with Gaussian posterior it is possible to sample directly. For $i = 1$ to $N$, draw $\phi^i$ from $N(\bar\phi_T, \bar V_T)$. Provided that $V_\pi[h(\phi)] < \infty$, we can deduce from Kolmogorov's SLLN and the Lindeberg-Levy CLT that $\bar h_N \xrightarrow{a.s.} E_\pi[h]$ and $\sqrt{N}\big(\bar h_N - E_\pi[h]\big) \Rightarrow N\big(0, V_\pi[h(\phi)]\big)$.
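Direct sampling is one line once the posterior moments are known. A sketch, again under the illustrative simulated-data assumptions used above:

```python
import numpy as np

rng = np.random.default_rng(0)
T, tau, N = 200, 1.0, 10_000  # illustrative values
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()
Y, X = y[1:, None], y[:-1, None]

Vbar = np.linalg.inv(X.T @ X + tau**-2 * np.eye(1))
phibar = (Vbar @ X.T @ Y).item()

# Direct sampling: iid draws from the Gaussian posterior, then a Monte Carlo average.
draws = rng.normal(phibar, np.sqrt(Vbar.item()), size=N)
print("MC estimate of E[phi|Y]:", draws.mean(), " analytical:", phibar)
```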
11 Decision Making. The posterior expected loss associated with a decision $\delta(\cdot)$ is given by $\rho\big(\delta(\cdot)|Y\big) = \int_\Theta L\big(\theta, \delta(Y)\big)\, p(\theta|Y)\,d\theta$. A Bayes decision is a decision that minimizes the posterior expected loss: $\delta^*(Y) = \operatorname{argmin}_{d}\, \rho\big(d|Y\big)$. Since in most applications it is not feasible to derive the posterior expected loss analytically, we replace $\rho\big(\delta(\cdot)|Y\big)$ by a Monte Carlo approximation of the form $\rho_N\big(\delta(\cdot)|Y\big) = \frac{1}{N}\sum_{i=1}^N L\big(\theta^i, \delta(Y)\big)$. A numerical approximation to the Bayes decision $\delta^*(\cdot)$ is then given by $\delta^*_N(Y) = \operatorname{argmin}_{d}\, \rho_N\big(d|Y\big)$.
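To make this concrete, here is a minimal sketch (with a hypothetical set of posterior draws) that minimizes the Monte Carlo risk over a grid of decisions; it illustrates the results quoted on the next slide, namely that quadratic loss yields the posterior mean and absolute error loss the posterior median:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior draws of theta (a skewed toy posterior for illustration).
theta = rng.gamma(shape=2.0, scale=1.0, size=50_000)

grid = np.linspace(0.0, 10.0, 501)  # candidate decisions d
quad_risk = np.array([np.mean((theta - d) ** 2) for d in grid])  # L = (theta - d)^2
abs_risk = np.array([np.mean(np.abs(theta - d)) for d in grid])  # L = |theta - d|

print("argmin quadratic:", grid[quad_risk.argmin()], " posterior mean:  ", theta.mean())
print("argmin absolute: ", grid[abs_risk.argmin()], " posterior median:", np.median(theta))
```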
12 Inference. Point estimation: quadratic loss leads to the posterior mean; absolute error loss leads to the posterior median. Interval/set estimation, $P_\pi\{\theta \in C(Y)\} = 1 - \alpha$: highest posterior density sets; equal-tail-probability intervals.
13 Forecasting. Example: $y_{T+h} = \theta^h y_T + \sum_{s=0}^{h-1}\theta^s u_{T+h-s}$. The $h$-step-ahead conditional distribution: $y_{T+h}|(Y_{1:T},\theta) \sim N\left(\theta^h y_T,\; \frac{1-\theta^{2h}}{1-\theta^2}\right)$. Posterior predictive distribution: $p(y_{T+h}|Y_{1:T}) = \int p(y_{T+h}|y_T,\theta)\,p(\theta|Y_{1:T})\,d\theta$. For each draw $\theta^i$ from the posterior distribution $p(\theta|Y_{1:T})$, sample a sequence of innovations $u^i_{T+1},\dots,u^i_{T+h}$ and compute $y^i_{T+h}$ as a function of $\theta^i$, $u^i_{T+1},\dots,u^i_{T+h}$, and $Y_{1:T}$.
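The posterior predictive algorithm described on this slide is easy to sketch. Assuming posterior draws of $\theta$ are already available (the values below are toy stand-ins), each draw is pushed through the AR(1) recursion with fresh innovations:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_draws = rng.normal(0.85, 0.03, size=5_000)  # stand-in posterior draws
y_T, h = 1.2, 4  # last observation and forecast horizon (illustrative values)

# For each theta^i, draw innovations u_{T+1}, ..., u_{T+h} and iterate forward.
y_pred = np.empty(theta_draws.size)
for i, th in enumerate(theta_draws):
    y = y_T
    for _ in range(h):
        y = th * y + rng.standard_normal()
    y_pred[i] = y

print("predictive mean:", y_pred.mean())
print("90% predictive interval:", np.percentile(y_pred, [5, 95]))
```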
14 Model Uncertainty. Assign prior probabilities $\gamma_{j,0}$ to models $M_j$, $j = 1,\dots,J$. Posterior model probabilities are given by $\gamma_{j,T} = \frac{\gamma_{j,0}\, p(Y|M_j)}{\sum_{j=1}^J \gamma_{j,0}\, p(Y|M_j)}$, where $p(Y|M_j) = \int p(Y|\theta_{(j)}, M_j)\, p(\theta_{(j)}|M_j)\, d\theta_{(j)}$. Log marginal data densities are sums of one-step-ahead predictive scores: $\ln p(Y|M_j) = \sum_{t=1}^T \ln \int p(y_t|\theta_{(j)}, Y_{1:t-1}, M_j)\, p(\theta_{(j)}|Y_{1:t-1}, M_j)\, d\theta_{(j)}$. Model averaging: $p(h|Y) = \sum_{j=1}^J \gamma_{j,T}\, p\big(h_j(\theta_{(j)})|Y, M_j\big)$.
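Posterior model probabilities are usually computed from log marginal data densities, which calls for a log-sum-exp step to avoid underflow. A sketch with hypothetical log MDD values:

```python
import numpy as np

log_mdd = np.array([-1412.3, -1409.8, -1415.1])  # hypothetical values for J = 3 models
prior = np.full(3, 1 / 3)                        # equal prior model probabilities

# gamma_{j,T} is proportional to gamma_{j,0} * p(Y|M_j); normalize on the log scale.
logw = np.log(prior) + log_mdd
logw -= logw.max()                               # log-sum-exp shift for stability
gamma_T = np.exp(logw) / np.exp(logw).sum()
print("posterior model probabilities:", gamma_T)
```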
15 A Non-Gaussian Posterior. Suppose that $y_t$ is determined by the AR(1) model, but the object of interest is $\theta$, which can only be bounded based on $\phi$: $\phi \le \theta$ and $\theta \le \phi + 1$. The parameter $\theta$ is set-identified. The interval $\Theta(\phi) = [\phi, \phi + 1]$ is called the identified set. Prior for $\theta$ conditional on $\phi$: $\theta|\phi \sim U[\phi, \phi + 1]$.
16 A Non-Gaussian Posterior. Joint posterior of $\theta$ and $\phi$: $p(\theta, \phi|Y) = p(\phi|Y)\, p(\theta|\phi, Y) \propto p(Y|\phi)\, p(\theta|\phi)\, p(\phi)$. Since $\theta$ does not enter the likelihood function, we deduce that $p(\phi|Y) = \frac{p(Y|\phi)p(\phi)}{\int p(Y|\phi)p(\phi)\,d\phi}$ and $p(\theta|\phi, Y) = p(\theta|\phi)$. In our example the marginal posterior distribution of $\theta$ is given by $\pi(\theta) = \int_{\theta - 1}^{\theta} p(\phi|Y)\, p(\theta|\phi)\, d\phi = \Phi_N\left(\frac{\theta - \bar\phi}{\sqrt{\bar V}}\right) - \Phi_N\left(\frac{\theta - 1 - \bar\phi}{\sqrt{\bar V}}\right)$, where $\Phi_N(x)$ is the cumulative distribution function of a $N(0,1)$.
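This density is a difference of two normal CDFs and can be evaluated directly. A sketch using one of the $\bar\phi$, $\bar V$ pairs from the figure on the next slide:

```python
import numpy as np
from scipy.stats import norm

phibar, V = 0.5, 1 / 20  # posterior mean/variance of phi (values from the figure)
theta = np.linspace(-1.0, 2.5, 351)

# pi(theta) = Phi((theta - phibar)/sqrt(V)) - Phi((theta - 1 - phibar)/sqrt(V))
pi_theta = norm.cdf((theta - phibar) / np.sqrt(V)) - norm.cdf((theta - 1 - phibar) / np.sqrt(V))
print("peak density:", pi_theta.max())  # flattens toward 1 on [phibar, phibar+1] as V -> 0
```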
17 What if the Posterior is Non-Gaussian? Figure: posterior distribution $\pi(\theta)$ for $\bar\phi = 0.5$ and $\bar V_\phi$ equal to 1/4 (dotted), 1/20 (dashed), and 1/100 (solid).
18 Importance Sampling. Approximate $\pi(\cdot)$ by using a different, tractable density $g(\theta)$ that is easy to sample from. For more general problems, the posterior density may be non-normalized, so we write $\pi(\theta) = \frac{p(Y|\theta)p(\theta)}{p(Y)} = \frac{f(\theta)}{Z}$. Importance sampling is based on the identity $E_\pi[h(\theta)] = \int_\Theta h(\theta)\pi(\theta)\,d\theta = \frac{1}{Z}\int_\Theta h(\theta)\frac{f(\theta)}{g(\theta)}g(\theta)\,d\theta$. The ratio $w(\theta) = \frac{f(\theta)}{g(\theta)}$ is called the (unnormalized) importance weight.
19 Importance Sampling. 1. For $i = 1$ to $N$, draw $\theta^i \overset{iid}{\sim} g(\theta)$ and compute the unnormalized importance weights $w^i = w(\theta^i) = \frac{f(\theta^i)}{g(\theta^i)}$. 2. Compute the normalized importance weights $W^i = \frac{w^i}{\frac{1}{N}\sum_{j=1}^N w^j}$. An approximation of $E_\pi[h(\theta)]$ is given by $\bar h_N = \frac{1}{N}\sum_{i=1}^N W^i h(\theta^i)$.
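A minimal sketch of the algorithm applied to the set-identified example above, with a diffuse normal proposal; the particular choice of $g$ is an assumption for illustration, not prescribed by the slides:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
phibar, V = 0.5, 1 / 20
f = lambda th: norm.cdf((th - phibar) / np.sqrt(V)) - norm.cdf((th - 1 - phibar) / np.sqrt(V))

# Step 1: iid draws from g and unnormalized weights w^i = f(theta^i) / g(theta^i).
g_mean, g_std, N = 1.0, 1.0, 100_000
theta = rng.normal(g_mean, g_std, size=N)
w = f(theta) / norm.pdf(theta, g_mean, g_std)

# Step 2: normalized weights and the Monte Carlo approximation of E_pi[h].
W = w / w.mean()
print("IS estimate of E[theta]:", np.mean(W * theta))
print("exact value:            ", phibar + 0.5)  # theta = phi + U[0,1] in this example
```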
20 Importance Sampling Distribution. Figure: posterior density $\pi(\theta)$ (solid) as well as two importance sampling densities $g(\theta)$: concentrated (dashed) and diffuse (dotted).
21 Accuracy. Since we are generating iid draws from $g(\theta)$, it is fairly straightforward to derive a CLT: it can be shown that $\sqrt{N}\big(\bar h_N - E_\pi[h]\big) \Rightarrow N\big(0, \Omega(h)\big)$, where $\Omega(h) = V_g[(\pi/g)(h - E_\pi[h])]$. Using a crude approximation (see, e.g., Liu (2008)), we can factorize $\Omega(h)$ as follows: $\Omega(h) \approx V_\pi[h]\big(V_g[\pi/g] + 1\big)$. The approximation highlights that the larger the variance of the importance weights, the less accurate the Monte Carlo approximation relative to the accuracy that could be achieved with an iid sample from the posterior. Users often monitor $ESS = N\,\frac{V_\pi[h]}{\Omega(h)} \approx \frac{N}{1 + V_g[\pi/g]}$.
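In practice the ESS is estimated from the realized weights. A common finite-sample version (a standard proxy, not spelled out on the slide) is $(\sum_i w^i)^2 / \sum_i (w^i)^2$:

```python
import numpy as np

def ess(w):
    """Effective sample size from unnormalized importance weights:
    ESS = (sum w)^2 / sum w^2, a finite-sample proxy for N / (1 + V_g[pi/g])."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

print(ess(np.ones(1000)))             # equal weights: ESS = N = 1000
print(ess(np.r_[1e6, np.ones(999)]))  # one dominant weight: ESS close to 1
```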
22 Inefficiency Factors for Concentrated IS Density. Figure (plotted against $N$): large-sample inefficiency factors $InEff = \Omega(h)/V_\pi[h]$ (dashed) and their small-sample approximations (solid) based on $N_{run} = 1{,}000$ runs. We consider $h(\theta) = \theta$ (triangles) and $h(\theta) = \theta^2$ (circles). The solid line (no symbols) depicts the approximate inefficiency factor $1 + V_g[\pi/g]$.
23 Inefficiency Factors for Diffuse IS Density. Figure (plotted against $N$): large-sample inefficiency factors $InEff = \Omega(h)/V_\pi[h]$ (dashed) and their small-sample approximations (solid) based on $N_{run} = 1{,}000$ runs. We consider $h(\theta) = \theta$ (triangles) and $h(\theta) = \theta^2$ (circles). The solid line (no symbols) depicts the approximate inefficiency factor $1 + V_g[\pi/g]$.
24 Markov Chain Monte Carlo (MCMC). Main idea: create a sequence of serially correlated draws such that the distribution of $\theta^i$ converges to the posterior distribution $p(\theta|Y)$.
25 Generic Metropolis-Hastings Algorithm. For $i = 1$ to $N$: 1. Draw $\vartheta$ from a density $q(\vartheta|\theta^{i-1})$. 2. Set $\theta^i = \vartheta$ with probability $\alpha(\vartheta|\theta^{i-1}) = \min\left\{1,\; \frac{p(Y|\vartheta)p(\vartheta)/q(\vartheta|\theta^{i-1})}{p(Y|\theta^{i-1})p(\theta^{i-1})/q(\theta^{i-1}|\vartheta)}\right\}$ and $\theta^i = \theta^{i-1}$ otherwise. Recall $p(\theta|Y) \propto p(Y|\theta)p(\theta)$. We draw $\theta^i$ conditional on a parameter draw $\theta^{i-1}$: this leads to a Markov transition kernel $K(\theta|\tilde\theta)$.
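A minimal random-walk Metropolis-Hastings sketch, using a standard normal as a stand-in for the unnormalized posterior $p(Y|\theta)p(\theta)$; with a symmetric proposal $q(\vartheta|\theta) = N(\theta, s^2)$ the $q$ terms in the acceptance probability cancel:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(th):
    # Stand-in for log p(Y|theta) + log p(theta); here a standard normal.
    return -0.5 * th ** 2

N, step = 50_000, 1.0
draws, th = np.empty(N), 0.0
for i in range(N):
    prop = th + step * rng.standard_normal()        # draw vartheta ~ q(.|theta)
    if np.log(rng.uniform()) < log_target(prop) - log_target(th):
        th = prop                                   # accept with probability alpha
    draws[i] = th                                   # otherwise keep the old draw

print("mean:", draws.mean(), " std:", draws.std())  # should be near 0 and 1
```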
26 Invariance Property. It can be shown that $p(\theta|Y) = \int K(\theta|\tilde\theta)\, p(\tilde\theta|Y)\, d\tilde\theta$. Write $K(\theta|\tilde\theta) = u(\theta|\tilde\theta) + r(\tilde\theta)\delta_{\tilde\theta}(\theta)$, where $u(\theta|\tilde\theta) = \alpha(\theta|\tilde\theta)\, q(\theta|\tilde\theta)$ is the density kernel for accepted draws (note that $u(\theta|\cdot)$ does not integrate to one). Rejection probability: $r(\tilde\theta) = \int \left[1 - \alpha(\theta|\tilde\theta)\right] q(\theta|\tilde\theta)\, d\theta = 1 - \int u(\theta|\tilde\theta)\, d\theta$.
27 Invariance Property. Reversibility: conditional on the sampler not rejecting the proposed draw, the density associated with a transition from $\tilde\theta$ to $\theta$ is identical to the density associated with a transition from $\theta$ to $\tilde\theta$: $p(\tilde\theta|Y)\, u(\theta|\tilde\theta) = p(\tilde\theta|Y)\, q(\theta|\tilde\theta) \min\left\{1,\; \frac{p(\theta|Y)/q(\theta|\tilde\theta)}{p(\tilde\theta|Y)/q(\tilde\theta|\theta)}\right\} = \min\left\{ p(\tilde\theta|Y)q(\theta|\tilde\theta),\; p(\theta|Y)q(\tilde\theta|\theta) \right\} = p(\theta|Y)\, q(\tilde\theta|\theta) \min\left\{\frac{p(\tilde\theta|Y)/q(\tilde\theta|\theta)}{p(\theta|Y)/q(\theta|\tilde\theta)},\; 1\right\} = p(\theta|Y)\, u(\tilde\theta|\theta)$. Using the reversibility result, we can now verify the invariance property: $\int K(\theta|\tilde\theta)\, p(\tilde\theta|Y)\, d\tilde\theta = \int u(\theta|\tilde\theta)\, p(\tilde\theta|Y)\, d\tilde\theta + \int r(\tilde\theta)\, \delta_{\tilde\theta}(\theta)\, p(\tilde\theta|Y)\, d\tilde\theta = \int u(\tilde\theta|\theta)\, p(\theta|Y)\, d\tilde\theta + r(\theta)\, p(\theta|Y) = p(\theta|Y)$.
28 A Discrete Example. Suppose the parameter vector $\theta$ is scalar and takes only two values: $\Theta = \{\tau_1, \tau_2\}$. The posterior distribution $p(\theta|Y)$ can be represented by a set of probabilities collected in the vector $\pi = [\pi_1, \pi_2]'$, with $\pi_2 > \pi_1$. Suppose we obtain the proposed draw $\vartheta$ based on the transition matrix $Q = \begin{bmatrix} q & 1-q \\ 1-q & q \end{bmatrix}$.
29 Discrete MH Algorithm. Iteration $i$: suppose that $\theta^{i-1} = \tau_j$. Based on the transition matrix $Q$ above, determine a proposed state $\vartheta = \tau_s$. With probability $\alpha(\tau_s|\tau_j)$ the proposed state is accepted: set $\theta^i = \vartheta = \tau_s$. With probability $1 - \alpha(\tau_s|\tau_j)$ stay in the old state and set $\theta^i = \theta^{i-1} = \tau_j$. Choose ($Q$ terms cancel because of symmetry) $\alpha(\tau_s|\tau_j) = \min\left\{1,\; \frac{\pi_s}{\pi_j}\right\}$.
30 Discrete MH Algorithm: Transition Matrix. The resulting chain's transition matrix is $K = \begin{bmatrix} q & 1-q \\ (1-q)\frac{\pi_1}{\pi_2} & q + (1-q)\left(1 - \frac{\pi_1}{\pi_2}\right) \end{bmatrix}$. Straightforward calculations reveal that the transition matrix $K$ has eigenvalues $\lambda_1(K) = 1$ and $\lambda_2(K) = q - (1-q)\frac{\pi_1}{1 - \pi_1}$. The equilibrium distribution is the eigenvector associated with the unit eigenvalue. For $q \in [0, 1)$ the equilibrium distribution is unique.
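The invariance and eigenvalue claims are easy to check numerically. A sketch with illustrative values $\pi_1 = 0.2$ and $q = 0.5$:

```python
import numpy as np

pi1, q = 0.2, 0.5  # illustrative values
pi2 = 1 - pi1
K = np.array([
    [q,                   1 - q],
    [(1 - q) * pi1 / pi2, q + (1 - q) * (1 - pi1 / pi2)],
])

print("pi K =", np.array([pi1, pi2]) @ K)   # returns [pi1, pi2]: pi is invariant
print("eigenvalues of K:", np.linalg.eigvals(K))
print("lambda2 formula: ", q - (1 - q) * pi1 / (1 - pi1))
```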
31 Convergence. The persistence of the Markov chain depends on the second eigenvalue, which depends on the proposal distribution $Q$. Define the transformed parameter $\xi^i = \frac{\theta^i - \tau_1}{\tau_2 - \tau_1}$. We can represent the Markov chain associated with $\xi^i$ as the first-order autoregressive process $\xi^i = (1 - k_{11}) + \lambda_2(K)\,\xi^{i-1} + \nu^i$. Conditional on $\xi^{i-1} = j$, $j = 0, 1$, the innovation $\nu^i$ has support on $k_{jj}$ and $-(1 - k_{jj})$, its conditional mean is equal to zero, and its conditional variance is equal to $k_{jj}(1 - k_{jj})$.
32 Convergence. Autocovariance function of $h(\theta^i)$: $COV\big(h(\theta^i), h(\theta^{i-l})\big) = \big(h(\tau_2) - h(\tau_1)\big)^2\, \pi_1(1 - \pi_1)\left(q - (1-q)\frac{\pi_1}{1-\pi_1}\right)^l = V_\pi[h]\left(q - (1-q)\frac{\pi_1}{1-\pi_1}\right)^l$. If $q = \pi_1$ then the autocovariances are equal to zero and the draws $h(\theta^i)$ are serially uncorrelated (in fact, in our simple discrete setting they are also independent).
33 Convergence. Define the Monte Carlo estimate $\bar h_N = \frac{1}{N}\sum_{i=1}^N h(\theta^i)$. Deduce from the CLT that $\sqrt{N}\big(\bar h_N - E_\pi[h]\big) \Rightarrow N\big(0, \Omega(h)\big)$, where $\Omega(h)$ is the long-run variance $\Omega(h) = \lim_{L\to\infty} V_\pi[h]\left(1 + 2\sum_{l=1}^{L}\frac{L-l}{L}\left(q - (1-q)\frac{\pi_1}{1-\pi_1}\right)^l\right)$. In turn, the asymptotic inefficiency factor is given by $InEff = \frac{\Omega(h)}{V_\pi[h]} = 1 + 2\lim_{L\to\infty}\sum_{l=1}^{L}\frac{L-l}{L}\left(q - (1-q)\frac{\pi_1}{1-\pi_1}\right)^l$.
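Since the autocovariances decay geometrically at rate $\lambda_2(K)$, the limit sums to $InEff = (1 + \lambda_2)/(1 - \lambda_2)$. A sketch tabulating this for a few proposal choices (illustrative $\pi_1 = 0.2$); note that $q = \pi_1$ gives $\lambda_2 = 0$ and $InEff = 1$:

```python
import numpy as np

pi1 = 0.2  # illustrative posterior probability of state tau_1
for q in [0.0, 0.2, 0.5, 0.9, 0.99]:
    lam2 = q - (1 - q) * pi1 / (1 - pi1)
    ineff = (1 + lam2) / (1 - lam2)  # 1 + 2 * sum_l lam2^l for a geometric ACF
    print(f"q = {q:4.2f}   lambda2 = {lam2:+.4f}   InEff = {ineff:7.2f}")
```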
34 Autocorrelation Function of $\theta^i$. Figure: autocorrelation functions of the chain for $q = 0.00$, $q = 0.20$, $q = 0.50$, and one further value of $q$.
35 Asymptotic Inefficiency. Figure: asymptotic inefficiency factor $InEff$ as a function of $q$.
36 Small Sample Variance $V[\bar h_N]$ versus HAC Estimates of $\Omega(h)$ (figure).