The Effects of Monetary Policy on Stock Market Bubbles: Some Evidence

Jordi Gali (CREI, Universitat Pompeu Fabra and Barcelona GSE; e-mail: jgali@crei.cat) and Luca Gambetti (UAB and Barcelona GSE; e-mail: luca.gambetti@uab.es)

ONLINE APPENDIX

This appendix describes the estimation of the time-varying coefficients VAR model. The model is estimated using the Gibbs sampling algorithm along the lines described in Del Negro and Primiceri (2013). Each iteration of the algorithm is composed of seven steps, in which a set of parameters is drawn conditional on the values of the remaining parameters. To clarify the notation, let $w_t$ be a generic column vector; we denote $w^T = [w_1, \ldots, w_T]$. Moreover, $s_t$ is an $n \times 1$ vector whose elements can take integer values between 1 and 7 and which is used for drawing $\sigma^T$. Below we report the conditional distributions used in the seven steps of the algorithm:

1. $p(\sigma^T \mid x^T, \theta^T, \phi^T, \Omega, \Xi, \Psi, s^T)$
2. $p(\phi^T \mid x^T, \theta^T, \sigma^T, \Omega, \Xi, \Psi)$
3. $p(\theta^T \mid x^T, \sigma^T, \phi^T, \Omega, \Xi, \Psi)$
4. $p(\Omega \mid x^T, \theta^T, \sigma^T, \phi^T, \Xi, \Psi)$
5. $p(\Xi \mid x^T, \theta^T, \sigma^T, \phi^T, \Omega, \Psi)$
6. $p(\Psi_i \mid x^T, \theta^T, \sigma^T, \phi^T, \Omega, \Xi)$, $i = 1, 2, 3, 4$
7. $p(s^T \mid x^T, \theta^T, \sigma^T, \phi^T, \Omega, \Xi, \Psi)$
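To make the overall structure concrete, the following sketch (Python, not part of the original appendix) shows the outer loop cycling through the seven conditional draws; the `steps` functions and the initial state are hypothetical placeholders for Steps 1-7 described below, and the draw counts anticipate the settings reported at the end of this appendix.

```python
# Minimal sketch of the seven-step Gibbs loop (hypothetical helper functions,
# not the authors' code). Each entry of `steps` draws one block of parameters
# conditional on the current values of all the others.
import copy

def gibbs_sampler(x, steps, init_state, n_draws=22000, burn_in=20000, thin=2):
    """steps: dict mapping a block name to a function (x, state) -> new draw."""
    state = copy.deepcopy(init_state)
    kept = []
    for it in range(n_draws):
        for name in ["sigma", "phi", "theta", "Omega", "Xi", "Psi", "s"]:  # Steps 1-7
            state[name] = steps[name](x, state)
        if it >= burn_in and (it - burn_in) % thin == 0:
            kept.append(copy.deepcopy(state))  # keep one out of every `thin` post-burn-in draws
    return kept
```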

Priors Specification

We assume that the covariance matrices $\Omega$, $\Xi$ and $\Psi$ and the initial states $\theta_0$, $\phi_0$ and $\log\sigma_0$ are independent, that the prior distributions for the initial states are Normal, and that the prior distributions for $\Omega^{-1}$, $\Xi^{-1}$ and $\Psi_i^{-1}$ are Wishart. More precisely,

$$\theta_0 \sim N(\hat\theta, 4\hat V_\theta), \quad \log\sigma_0 \sim N(\log\hat\sigma_0, I_n), \quad \phi_{i0} \sim N(\hat\phi_i, \hat V_{\phi_i}),$$
$$\Omega^{-1} \sim W(\bar\Omega^{-1}, \rho_1), \quad \Xi^{-1} \sim W(\bar\Xi^{-1}, \rho_2), \quad \Psi_i^{-1} \sim W(\bar\Psi_i^{-1}, \rho_{3i}),$$

where $W(S, d)$ denotes a Wishart distribution with scale matrix $S$ and degrees of freedom $d$, and $I_n$ is an $n \times n$ identity matrix, $n$ being the number of variables in the VAR.

Prior means and variances of the Normal distributions are calibrated using a time-invariant VAR for $x_t$ estimated on the first $\tau = 48$ observations. $\hat\theta$ and $\hat V_\theta$ are set equal to the OLS estimates. Let $\hat\Sigma$ be the covariance matrix of the residuals $\hat u_t$ of the initial time-invariant VAR. We apply the same decomposition discussed in the text, $\hat\Sigma = \hat F \hat D \hat F'$, and set $\log\hat\sigma_0$ equal to the log of the diagonal elements of $\hat D^{1/2}$. $\hat\phi_i$ is set equal to the OLS estimates of the coefficients of the regression of $\hat u_{i+1,t}$, the $(i+1)$-th element of $\hat u_t$, on $\hat u_{1,t}, \ldots, \hat u_{i,t}$, and $\hat V_{\phi_i}$ is set equal to the estimated variances.

We parametrize the scale matrices as follows: $\bar\Omega = \rho_1(\lambda_1 \hat V^f_\theta)$, $\bar\Xi = \rho_2(\lambda_2 I_n)$ and $\bar\Psi_i = \rho_{3i}(\lambda_3 \hat V^f_{\phi_i})$. The degrees of freedom $\rho_1$ and $\rho_2$ for the priors on the covariance matrices are set equal to the number of rows of $\Omega^{-1}$ and of $I_n$ plus one, respectively, while $\rho_{3i}$ is $i+1$ for $i = 1, \ldots, n-1$. We assume $\lambda_1 = 0.005$, $\lambda_2 = 0.01$ and $\lambda_3 = 0.01$. Finally, $\hat V^f_\theta$ and $\hat V^f_{\phi_i}$ are obtained as $\hat V_\theta$ and $\hat V_{\phi_i}$ but using estimates from the whole sample.
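As an illustration of this calibration, the sketch below (Python with numpy, not the authors' code) estimates the time-invariant VAR by OLS on the training sample, applies the triangular factorization $\hat\Sigma = \hat F\hat D\hat F'$, and runs the residual regressions that deliver $\log\hat\sigma_0$, $\hat\phi_i$ and $\hat V_{\phi_i}$; the function name and the default lag length are illustrative assumptions.

```python
# Sketch of the training-sample calibration of the prior means (not the
# authors' code; helper name and defaults are assumptions).
import numpy as np

def calibrate_priors(x, p=2, tau=48):
    x0 = x[:tau]                                          # first tau observations
    T0, n = x0.shape
    # Regressors w_t' = [1, x_{t-1}', ..., x_{t-p}'] for the time-invariant VAR
    Y = x0[p:]
    Z = np.hstack([np.ones((T0 - p, 1))] + [x0[p - j:T0 - j] for j in range(1, p + 1)])
    B_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]          # OLS coefficients -> theta_hat
    U = Y - Z @ B_hat                                     # residuals u_hat
    Sigma_hat = U.T @ U / (T0 - p)
    # Triangular factorization Sigma_hat = F D F', with F unit lower triangular
    L = np.linalg.cholesky(Sigma_hat)
    d_half = np.diag(L)                                   # diagonal of D^{1/2}
    log_sigma0 = np.log(d_half)                           # prior mean of log sigma_0
    # phi_i: OLS regression of u_{i+1,t} on u_{1,t}, ..., u_{i,t}
    phi_hat, V_phi = [], []
    for i in range(1, n):
        Zi, yi = U[:, :i], U[:, i]
        coef = np.linalg.lstsq(Zi, yi, rcond=None)[0]
        s2 = np.sum((yi - Zi @ coef) ** 2) / (len(yi) - i)
        phi_hat.append(coef)
        V_phi.append(s2 * np.linalg.inv(Zi.T @ Zi))       # estimated coefficient variances
    return B_hat, log_sigma0, phi_hat, V_phi
```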

Gibbs sampling algorithm

Let $T$ be the total number of observations, in our case equal to 204. To draw realizations from the posterior distribution we use the $T - \tau/2$ observations starting from $\tau/2 + 1$.[2] The algorithm works as follows.

Step 1: sample $\sigma^T$. The states $\sigma^T$ are drawn using the algorithm of Kim, Shephard and Chib (1998, KSC hereafter). Let $x^*_t \equiv F_t^{-1}(x_t - W_t\theta_t) = D_t^{1/2} u_t$, where $u_t \sim N(0, I_n)$, $W_t = (I_n \otimes w_t')$, and $w_t' = [1 \; x_{t-1}' \ldots x_{t-p}']$. Notice that, conditional on $x^T$, $\theta^T$ and $\phi^T$, $x^*_t$ is observable. Therefore, by squaring and taking logs, we obtain the following state-space representation, where $x^{**}_{i,t} = \log(x^{*2}_{i,t})$, $\upsilon_{i,t} = \log(u^2_{i,t})$ and $r_{i,t} = \log\sigma_{i,t}$:[3]

$$x^{**}_t = 2r_t + \upsilon_t \qquad (1)$$
$$r_t = r_{t-1} + \zeta_t \qquad (2)$$

The above system is non-Normal since the innovation in (1) is distributed as $\log\chi^2(1)$. Following KSC, we use a mixture of 7 Normal densities with means $m_j - 1.2704$ and variances $v^2_j$ ($j = 1, \ldots, 7$) to approximate the system with a Gaussian one (see Table A1 for the values used). In practice, the algorithm of Carter and Kohn (1994, CK henceforth) is used to draw $r_t$ using, as the density of $\upsilon_{i,t}$, the one indicated by the value of $s_{i,t}$: $(\upsilon_{i,t} \mid s_{i,t} = j) \sim N(m_j - 1.2704, v^2_j)$. More precisely, $r_t$ is drawn from $N(r_{t|t+1}, R_{t|t+1})$, where $r_{t|t+1} = E(r_t \mid r_{t+1}, x^t, \theta^T, \phi^T, \Omega, \Xi, \Psi, s^T)$ and $R_{t|t+1} = Var(r_t \mid r_{t+1}, x^t, \theta^T, \phi^T, \Omega, \Xi, \Psi, s^T)$ are the conditional mean and variance obtained from the backward recursion equations.

[2] We start the sample from $\tau/2 + 1$ instead of $\tau$ in order not to lose too many data points.
[3] We do not use any offsetting constant: since the variables are in logs multiplied by 100, we do not encounter numerical problems.
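The backward recursion referred to above is the Carter-Kohn forward filtering / backward sampling scheme. The sketch below (Python with numpy, not the authors' code) implements it for a generic linear Gaussian state-space model with random-walk states; under the approximation in Step 1, one would call it with $y_t = x^{**}_t - (m_{s_t} - 1.2704)$, $H_t = 2 I_n$, $R_t = \mathrm{diag}(v^2_{s_t})$ and $Q = \Xi$.

```python
# Sketch of the Carter-Kohn (1994) forward filtering / backward sampling
# algorithm for y_t = H_t b_t + e_t, b_t = b_{t-1} + w_t (not the authors' code).
import numpy as np

def carter_kohn(y, H, R, Q, b0, P0, rng=None):
    """y: (T, m); H: (T, m, k); R: (T, m, m); Q: (k, k); b0, P0: prior mean/cov of b_0."""
    rng = rng or np.random.default_rng()
    T, k = y.shape[0], Q.shape[0]
    b_filt, P_filt = np.zeros((T, k)), np.zeros((T, k, k))
    b, P = b0, P0
    for t in range(T):                                    # forward Kalman filter
        P_pred = P + Q                                    # random-walk prediction
        S = H[t] @ P_pred @ H[t].T + R[t]
        K = P_pred @ H[t].T @ np.linalg.inv(S)
        b = b + K @ (y[t] - H[t] @ b)
        P = P_pred - K @ H[t] @ P_pred
        b_filt[t], P_filt[t] = b, P
    draws = np.zeros((T, k))                              # backward sampling
    draws[-1] = rng.multivariate_normal(b_filt[-1], P_filt[-1])
    for t in range(T - 2, -1, -1):
        G = P_filt[t] @ np.linalg.inv(P_filt[t] + Q)
        mean = b_filt[t] + G @ (draws[t + 1] - b_filt[t])
        cov = P_filt[t] - G @ P_filt[t]
        draws[t] = rng.multivariate_normal(mean, cov)
    return draws
```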

Table A1

j    q_j      m_j        v_j^2
1    0.0073   -10.1300   5.7960
2    0.1056    -3.9728   2.6137
3    0.0000    -8.5669   5.1795
4    0.0440     2.7779   0.1674
5    0.3400     0.6194   0.6401
6    0.2457     1.7952   0.3402
7    0.2575    -1.0882   1.2626

Step 2: sample $\phi^T$. Let $\hat x_t = x_t - W_t\theta_t$. The $(i+1)$-th ($i = 1, \ldots, n-1$) equation of the system $F_t^{-1}\hat x_t = D_t^{1/2} u_t$ can be written as

$$\hat x_{i+1,t} = \hat x_{[1,i],t}\,\phi_{i,t} + \sigma_{i,t} u_{i+1,t} \qquad (3)$$

where $\sigma_{i,t}$ and $u_{i,t}$ are the $i$-th elements of $\sigma_t$ and $u_t$ respectively, and $\hat x_{[1,i],t} = [\hat x_{1,t}, \ldots, \hat x_{i,t}]$. Conditional on $\theta^T$ and $\sigma^T$, equation (3) is the observation equation of a state-space model whose states are $\phi_{i,t}$. Moreover, since $\phi_{i,t}$ and $\phi_{j,t}$ are independent for $i \neq j$, the CK algorithm can be applied equation by equation to draw $\phi_{i,t}$ from $N(\phi_{i,t|t+1}, \Phi_{i,t|t+1})$, where $\phi_{i,t|t+1} = E(\phi_{i,t} \mid \phi_{i,t+1}, x^t, \theta^T, \sigma^T, \Omega, \Xi, \Psi)$ and $\Phi_{i,t|t+1} = Var(\phi_{i,t} \mid \phi_{i,t+1}, x^t, \theta^T, \sigma^T, \Omega, \Xi, \Psi)$.

Step 3: sample $\theta^T$. Consider the state-space representation

$$x_t = W_t\theta_t + \varepsilon_t \qquad (4)$$
$$\theta_t = \theta_{t-1} + \omega_t. \qquad (5)$$

$\theta_t$ is drawn from $N(\theta_{t|t+1}, P_{t|t+1})$, where $\theta_{t|t+1} = E(\theta_t \mid \theta_{t+1}, x^t, \sigma^T, \phi^T, \Omega, \Xi, \Psi)$ and $P_{t|t+1} = Var(\theta_t \mid \theta_{t+1}, x^t, \sigma^T, \phi^T, \Omega, \Xi, \Psi)$ are obtained using the CK algorithm.
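Steps 2 and 3 reuse the same routine. As an illustration of Step 3, the sketch below (assumptions: the carter_kohn function from the previous sketch, and a lags array holding $[x_{t-1}', \ldots, x_{t-p}']$ for each $t$) builds the observation matrices $W_t$ and measurement covariances $F_t D_t F_t'$ of system (4)-(5) and draws $\theta^T$.

```python
# Sketch of Step 3 (not the authors' code): map the state-space (4)-(5) into
# the generic carter_kohn routine defined above.
import numpy as np

def draw_theta(x, lags, F, D, Omega, theta0_mean, theta0_var, rng=None):
    """x: (T, n) data; lags[t]: (p, n) array with x_{t-1}, ..., x_{t-p};
    F[t], D[t]: (n, n) matrices; Omega: covariance of the innovations omega_t."""
    T, n = x.shape
    k = Omega.shape[0]                                    # n^2 p + n coefficients
    H = np.zeros((T, n, k))
    R = np.zeros((T, n, n))
    for t in range(T):
        w_t = np.concatenate([[1.0], lags[t].ravel()])    # w_t' = [1, x_{t-1}', ..., x_{t-p}']
        H[t] = np.kron(np.eye(n), w_t)                    # W_t = I_n kron w_t'
        R[t] = F[t] @ D[t] @ F[t].T                       # Var(eps_t) = F_t D_t F_t'
    return carter_kohn(x, H, R, Omega, theta0_mean, theta0_var, rng)
```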

Step 4: sample $\Omega$. A draw is obtained as follows: $\Omega = (MM')^{-1}$, where $M$ is an $(n^2p + n) \times \rho_1$ matrix whose columns are independent draws from $N(0, \tilde\Omega^{-1})$, with $\tilde\Omega = \bar\Omega + \sum_{t=1}^{T}\Delta\theta_t(\Delta\theta_t)'$ (see Gelman et al., 1995).

Step 5: sample $\Xi$. As above, $\Xi = (MM')^{-1}$, where $M$ is an $n \times \rho_2$ matrix whose columns are independent draws from $N(0, \tilde\Xi^{-1})$, with $\tilde\Xi = \bar\Xi + \sum_{t=1}^{T}\Delta\log\sigma_t(\Delta\log\sigma_t)'$.

Step 6: sample $\Psi_i$, $i = 1, \ldots, 5$. As above, $\Psi_i = (MM')^{-1}$, where $M$ is an $i \times \rho_{3i}$ matrix whose columns are independent draws from $N(0, \tilde\Psi_i^{-1})$, with $\tilde\Psi_i = \bar\Psi_i + \sum_{t=1}^{T}\Delta\phi_{i,t}(\Delta\phi_{i,t})'$.

Step 7: sample $s^T$. Each $s_{i,t}$ is independently sampled from the discrete density $\Pr(s_{i,t} = j \mid x^{**}_{i,t}, r_{i,t}) \propto q_j\, f_N(x^{**}_{i,t} \mid 2r_{i,t} + m_j - 1.2704, v^2_j)$, where $f_N(x \mid \mu, \sigma^2)$ denotes the Normal pdf with mean $\mu$ and variance $\sigma^2$, and $q_j$ is the probability reported in Table A1 associated with the $j$-th density.

We make 22,000 draws, discard the first 20,000, and keep one out of every two of the remaining 2,000 draws. The results presented in the paper are therefore based on 1,000 draws from the posterior distribution. Convergence of the parameters is assessed using trace plots.
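The sketch below (Python with numpy, not the authors' code) illustrates the two remaining building blocks: the draw $\Omega = (MM')^{-1}$ used in Steps 4-6, and the discrete draw of the mixture indicators in Step 7, where q, m and v2 are assumed to hold the three columns of Table A1.

```python
# Sketches of the inverse-Wishart-type draw (Steps 4-6) and of the mixture
# indicator draw (Step 7); not the authors' code.
import numpy as np

def draw_cov(scale_bar, dof, rng=None):
    """Omega = (M M')^{-1}, the dof columns of M being i.i.d. N(0, scale_bar^{-1})."""
    rng = rng or np.random.default_rng()
    dim = scale_bar.shape[0]
    M = rng.multivariate_normal(np.zeros(dim), np.linalg.inv(scale_bar), size=dof).T
    return np.linalg.inv(M @ M.T)

def draw_indicators(x_star2, r, q, m, v2, rng=None):
    """Step 7: Pr(s_{i,t}=j) proportional to q_j * N(x**_{i,t} | 2 r_{i,t} + m_j - 1.2704, v2_j)."""
    rng = rng or np.random.default_rng()
    resid = x_star2 - 2.0 * r                              # (T, n)
    dev = resid[..., None] - (m - 1.2704)                  # (T, n, 7)
    like = q * np.exp(-0.5 * dev ** 2 / v2) / np.sqrt(v2)  # kernel of each mixture component
    prob = like / like.sum(axis=-1, keepdims=True)
    u = rng.random(resid.shape)[..., None]
    return (u > prob.cumsum(axis=-1)).sum(axis=-1) + 1     # integer labels 1,...,7
```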

References

Carter, Chris K., and Robert Kohn (1994): "On Gibbs Sampling for State Space Models," Biometrika, 81: 541-553.

Del Negro, Marco, and Giorgio Primiceri (2013): "Time-Varying Structural Vector Autoregressions and Monetary Policy: A Corrigendum," Federal Reserve Bank of New York Staff Report No. 619.

Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin (1995): Bayesian Data Analysis, Chapman and Hall, London.

Kim, Sangjoon, Neil Shephard, and Siddhartha Chib (1998): "Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models," Review of Economic Studies, 65, 361-393.