Assessing Structural VARs
by Lawrence J. Christiano, Martin Eichenbaum and Robert Vigfusson
Background
- Structural vector autoregressions (SVARs) can be used to address the following type of question: how does the economy respond to particular economic shocks?
- The answer to this type of question can be very useful in the construction of dynamic, general equilibrium models.
- To be useful in practice, SVARs must have good sampling properties.
What We Do
- Investigate the sampling properties of SVARs when the data are generated by estimated DSGE models.
- Three questions:
  - Is there bias in the SVAR estimator of shock responses?
  - What is the magnitude of sampling uncertainty, and how precise is inference with SVARs?
  - Is there bias in the SVAR estimator of sampling uncertainty? Would the econometrician know it if the SVAR were inaccurate?
- We focus on: how do hours worked respond to a technology shock?
What We Do...
- Throughout, we assume the identification assumptions motivated by economic theory are correct.
  - Example: the only shock driving labor productivity in the long run is the technology shock.
- There is a type of specification error that the econometrician using VARs necessarily makes (Cooley-Dwyer): the VAR has finite lags, while the DSGE model implies a VAR with infinite lags.
- We assess the consequences of this type of specification error.
What We Do...
- We look at two classes of identifying restrictions.
- Long-run identification: exploit implications that some models have for the long-run effects of shocks.
- Short-run identification: exploit model assumptions about the timing of decisions relative to the arrival of information.
Key Findings
- With short-run restrictions, SVARs work remarkably well: little bias, and inference is precise (sampling uncertainty is small).
- With long-run restrictions, for model parameterizations that fit the data well, SVARs work well: little bias in point estimates or in estimates of sampling uncertainty. Estimates may lack precision, but not always.
- Examples can be found in which there is noticeable bias, but these have extremely low likelihood on US data, and an analyst who looks at standard errors would not be misled.
Outline
- Analyze the performance of SVARs identified with long-run restrictions.
- Reconcile our findings for long-run identification with CKM.
- Analyze the performance of SVARs identified with short-run restrictions.
A Conventional RBC Model
- Preferences: E_0 Σ_{t=0}^∞ (β(1 + γ))^t [log c_t + ψ log(1 − l_t)].
- Constraints: c_t + (1 + τ_x)[(1 + γ)k_{t+1} − (1 − δ)k_t] ≤ (1 − τ_{l,t}) w_t l_t + r_t k_t + T_t.
- Technology: c_t + (1 + γ)k_{t+1} − (1 − δ)k_t ≤ k_t^θ (z_t l_t)^{1−θ}.
- Shocks:
  Δ log z_t = μ_z + σ_z ε_t^z
  τ_{l,t+1} = (1 − ρ_l) τ_l + ρ_l τ_{l,t} + σ_l ε_{t+1}^l
- Information: time-t decisions are made after the realization of all time-t shocks.
Long-Run Properties of Our RBC Model
- ε_t^z is the only shock that has a permanent impact on output and labor productivity, a_t ≡ y_t / l_t.
- Exclusion property: lim_{j→∞} [E_t a_{t+j} − E_{t−1} a_{t+j}] = f(ε_t^z only).
- Sign property: f is an increasing function.
Parameterizing the Model
- Exogenous shock processes: we estimate these.
- Other parameters: same as CKM:
  β = 0.98^{1/4}, θ = 1/3, δ = 1 − (1.06)^{−1/4}, ψ = 2.5, γ = 1.01^{1/4} − 1, τ_x = 0.3, τ_l = 0.243, μ_z = 1.02^{1/4} − 1.
- Baseline specifications of the exogenous shock processes: our baseline specification, and the Chari-Kehoe-McGrattan (July 2005) baseline specification.
Model Specifications
- Our baseline model (MLE specification). Exogenous shocks:
  Δ log z_t = μ_z + 0.00953 ε_t^z
  τ_{l,t} = (1 − 0.986) τ_l + 0.986 τ_{l,t−1} + 0.0056 ε_t^l
- CKM baseline model (CKM specification). Exogenous shocks, also estimated via maximum likelihood:
  Δ log z_t = 0.00516 + 0.0131 ε_t^z
  τ_{l,t} = (1 − 0.952) τ_l + 0.952 τ_{l,t−1} + 0.0136 ε_t^l
- Note: the τ_{l,t} innovation variance is large compared with KP. We will investigate why this is so, later.
Model Specifications...
- The fraction of variance in hours worked due to technology, V_h, is small in both models:
  - baseline MLE specification: 3.73 percent
  - baseline CKM specification: 2.76 percent
- A tough assignment for the VAR.
Figure 1: A Simulated Time Series for Hours. Panels: two-shock MLE specification; two-shock CKM specification. Each panel plots ln(hours) under both shocks and under only technology shocks. [figure]
Estimating the Effects of a Positive Technology Shock
- Vector autoregression:
  Y_t = B_1 Y_{t−1} + ... + B_p Y_{t−p} + u_t, E u_t u_t' = V
  u_t = C ε_t, E ε_t ε_t' = I, CC' = V
  Y_t = (Δ log a_t, log l_t)', ε_t = (ε_t^z, ε_{2t})', a_t ≡ y_t / l_t
- Impulse response function to a positive technology shock (ε_t^z):
  Y_t − E_{t−1} Y_t = C_1 ε_t^z, E_t Y_{t+1} − E_{t−1} Y_{t+1} = B_1 C_1 ε_t^z
- Need B_1, ..., B_p, C_1.
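The impulse responses above extend to any horizon by iterating the VAR forward from the impact response C_1. A minimal numpy sketch of that iteration (not the authors' code; function and variable names are ours):

```python
import numpy as np

def irf_to_shock(B_list, C1, horizon):
    """Response of Y_{t+h} to a unit technology shock at date t.

    B_list : [B_1, ..., B_p], each n x n
    C1     : n x 1 impact column
    Iterates the VAR forward with the shock as the only initial condition."""
    n, p = C1.shape[0], len(B_list)
    resp = [C1.flatten()]                              # h = 0: Y_t - E_{t-1}Y_t = C1
    hist = [C1.flatten()] + [np.zeros(n)] * (p - 1)    # [Y_t, Y_{t-1}, ...] deviations
    for _ in range(horizon):
        y = sum(B @ x for B, x in zip(B_list, hist))   # h = 1 reproduces B_1 C_1
        resp.append(y)
        hist = [y] + hist[:-1]
    return np.array(resp)                              # (horizon + 1) x n
```

With p = 1 and B_1 = 0.5 I, a unit shock decays geometrically, matching the formulas on the slide at h = 0 and h = 1.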
The Identification Problem
- From applying OLS to both equations in the VAR, we know B_1, ..., B_p and V.
- Problem: we need the first column of C, C_1.
- The restriction CC' = V alone is not enough: there are not enough restrictions to pin down C_1. We need more restrictions.
The Identification Problem...
- Impulse response to a positive technology shock (ε_t^z):
  lim_{j→∞} [E_t a_{t+j} − E_{t−1} a_{t+j}] = (1 0) [I − (B_1 + ... + B_p)]^{−1} C_1 ε_t^z
- The exclusion property of the RBC model motivates the restriction:
  D ≡ [I − (B_1 + ... + B_p)]^{−1} C = [ x 0 ; number number ]
- The sign property of the RBC model motivates the restriction x > 0.
- D satisfies DD' = [I − (B_1 + ... + B_p)]^{−1} V [I − (B_1 + ... + B_p)']^{−1}.
- The exclusion/sign properties uniquely pin down the first column of D, D_1. Then
  C_1 = [I − (B_1 + ... + B_p)] D_1 ≡ f_LR(V, B_1 + ... + B_p).
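The mapping f_LR is easy to compute: the lower-triangular Choleski factor of DD' enforces the exclusion restriction (zero in the (1,2) slot), and its positive diagonal delivers the sign restriction. A numpy sketch of this construction (our illustration, mirroring the slides' notation, not the authors' code):

```python
import numpy as np

def f_LR(V, B_sum):
    """Long-run identification: first column of C from V and B_1 + ... + B_p."""
    A = np.eye(V.shape[0]) - B_sum          # I - B(1)
    A_inv = np.linalg.inv(A)
    S0 = A_inv @ V @ A_inv.T                # = DD', the zero-frequency spectral density
    D = np.linalg.cholesky(S0)              # lower triangular: exclusion restriction holds
    if D[0, 0] < 0:                         # sign restriction (cholesky already returns
        D[:, 0] = -D[:, 0]                  # a positive diagonal; this is just a guard)
    return A @ D[:, [0]]                    # C_1 = [I - B(1)] D_1
```

Because D is lower triangular, the first column of S0 equals D_1 times its top element, which is a quick consistency check on any implementation.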
Six Results from the Frequency Domain
- Suppose {Y_t} is a covariance-stationary process with no deterministic component. By Wold's decomposition theorem (see, e.g., Sargent, Macroeconomic Theory, chapter XI, section 11) this stochastic process has the representation
  Y_t = D(L) e_t, E e_t e_t' = V, D(L) = D_0 + D_1 L + D_2 L^2 + ...,
  where {e_t} is serially uncorrelated and Σ_{i=0}^∞ D_i D_i' < ∞.
- Define: S_Y(z) = D(z) V D(z^{−1})', where z is a complex variable.
Six Results from the Frequency Domain...
- Result #1:
  S_Y(z) = C(0) + C(1)z + C(2)z^2 + C(3)z^3 + ... + C(−1)z^{−1} + C(−2)z^{−2} + C(−3)z^{−3} + ...,
  where C(τ) ≡ E Y_t Y_{t−τ}'.
- Proof: do the multiplication and collect terms in powers of z.
- Example: Y_t = D_0 e_t + D_1 e_{t−1}. Then
  C(0) ≡ E Y_t Y_t' = D_0 V D_0' + D_1 V D_1'
  C(τ) ≡ E Y_t Y_{t−τ}' = D_1 V D_0' for τ = 1; 0 for τ > 1; D_0 V D_1' for τ = −1; 0 for τ < −1.
Six Results from the Frequency Domain...
- Note:
  S_Y(z) = [D_0 + D_1 z] V [D_0' + D_1' z^{−1}]
  = D_0 V D_0' + D_1 V D_1' + D_1 V D_0' z + D_0 V D_1' z^{−1}
  = C(0) + C(1) z + C(−1) z^{−1},
  consistent with Result #1.
Six Results from the Frequency Domain...
- Result #2:
  (1/2π) ∫_{−π}^{π} e^{−iωh} dω = 1 if h = 0, 0 if h ≠ 0.
- Proof: the result is obvious for h = 0. Consider h ≠ 0.
  Note: e^{−iωh} = cos(−ωh) + i sin(−ωh) = cos(ωh) − i sin(ωh), and sin(πk) = 0 for all integer k. Then
  ∫_{−π}^{π} e^{−iωh} dω = (1/(−ih)) [e^{−iπh} − e^{iπh}]
  = (1/(−ih)) [cos(πh) − i sin(πh) − (cos(πh) + i sin(πh))]
  = (2/h) sin(πh) = 0.
Six Results from the Frequency Domain...
- Results #1 and #2 imply Result #3:
  C(τ) = (1/2π) ∫_{−π}^{π} S_Y(e^{−iω}) e^{iωτ} dω.
- Proof:
  ∫_{−π}^{π} S_Y(e^{−iω}) e^{iωτ} dω
  = ∫_{−π}^{π} [C(0) + C(1)e^{−iω} + C(2)e^{−2iω} + ... + C(1)' e^{iω} + C(2)' e^{2iω} + ...] e^{iωτ} dω   (by Result #1, with C(−k) = C(k)')
  = ∫_{−π}^{π} C(τ) e^{−iωτ} e^{iωτ} dω   (by Result #2)
  = 2π C(τ).
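Result #3 can be checked numerically on the scalar MA(1) example from the previous slides: on a uniform frequency grid, the grid average reproduces the integral exactly for a trigonometric polynomial. A sketch (our illustration; the parameter values below are arbitrary):

```python
import numpy as np

# Scalar MA(1): Y_t = D0 e_t + D1 e_{t-1}, Var(e_t) = V
D0, D1, V = 1.0, 0.5, 2.0
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
S_Y = (D0 + D1 * np.exp(-1j * w)) * V * (D0 + D1 * np.exp(1j * w))

def C(tau):
    # C(tau) = (1/2pi) * integral of S_Y(e^{-iw}) e^{iw tau} dw,
    # approximated by the mean over the uniform grid
    return np.mean(S_Y * np.exp(1j * w * tau)).real

assert np.isclose(C(0), D0 * V * D0 + D1 * V * D1)   # matches C(0) on the example slide
assert np.isclose(C(1), D1 * V * D0)
assert np.isclose(C(-1), D0 * V * D1)
assert np.isclose(C(2), 0.0, atol=1e-9)              # autocovariances vanish beyond lag 1
```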
Six Results from the Frequency Domain...
- The effects of filtering. Consider the filtered data:
  Ỹ_t = F(L) Y_t, F(L) = Σ_{j=−∞}^{∞} F_j L^j.
- The logic that establishes Result #1 implies
  F(z) D(z) V D(z^{−1})' F(z^{−1})' = C_Ỹ(0) + C_Ỹ(1) z + C_Ỹ(2) z^2 + ... + C_Ỹ(1)' z^{−1} + C_Ỹ(2)' z^{−2} + ...,
  so that, by Result #3,
  E Ỹ_t Ỹ_{t−τ}' = C_Ỹ(τ) = (1/2π) ∫_{−π}^{π} F(e^{−iω}) S_Y(e^{−iω}) F(e^{iω})' e^{iωτ} dω.
Six Results from the Frequency Domain...
- A filter of particular interest (the band-pass filter): F_D(L) such that
  F_D(e^{−iω}) = 1 if ω ∈ D ≡ {ω : |ω| ∈ [a, b]}, 0 otherwise.
- Then
  C_Ỹ(τ) = (1/2π) ∫_{−π}^{π} F_D(e^{−iω}) S_Y(e^{−iω}) F_D(e^{iω})' e^{iωτ} dω
  = (1/2π) [ ∫_{−b}^{−a} S_Y(e^{−iω}) e^{iωτ} dω + ∫_{a}^{b} S_Y(e^{−iω}) e^{iωτ} dω ].
- The band-pass filter shuts off power at frequencies outside D.
Six Results from the Frequency Domain...
- Suppose D ∩ D̃ = ∅. Then Result #4: F_D(L)Y_t ⊥ F_D̃(L)Y_t.
- Proof: consider
  Z_t = ( F_D(L)Y_t ; F_D̃(L)Y_t ) = ( F_D(L) ; F_D̃(L) ) D(L) e_t.
- By Result #3,
  C_Z(τ) = (1/2π) ∫_{−π}^{π} [ F_D S_Y F_D'   F_D S_Y F_D̃' ; F_D̃ S_Y F_D'   F_D̃ S_Y F_D̃' ] e^{iωτ} dω,
  with each transfer function evaluated at e^{−iω} on the left and e^{iω} inside the transpose.
- The upper-right and lower-left blocks are zero, because F_D and F_D̃ have disjoint supports.
Six Results from the Frequency Domain...
- Now suppose (*) D ∩ D̃ = ∅ and D ∪ D̃ = [−π, π]. Then Result #5:
  F_D(L)Y_t + F_D̃(L)Y_t = Y_t.
- The equality means that the stochastic process on the left has the same covariance function as the stochastic process on the right.
- Proof of Result #5: let η_t = τ Z_t, τ = [I . I], where I is the identity matrix with the same dimension as Y_t. To establish the result, we must show that the covariance functions of {η_t} and {Y_t} coincide.
Six Results from the Frequency Domain...
- Proof of Result #5 (continued):
  S_η(z) = τ ( F_D(z) ; F_D̃(z) ) D(z) V D(z^{−1})' ( F_D(z^{−1})'  F_D̃(z^{−1})' ) τ'
  = [F_D(z) + F_D̃(z)] D(z) V D(z^{−1})' [F_D(z^{−1})' + F_D̃(z^{−1})'].
- By Result #3,
  C_η(τ) = (1/2π) ∫_{−π}^{π} [F_D(e^{−iω}) + F_D̃(e^{−iω})] D(e^{−iω}) V D(e^{iω})' [F_D(e^{iω})' + F_D̃(e^{iω})'] e^{iωτ} dω.
- By (*): F_D(e^{−iω}) + F_D̃(e^{−iω}) = 1 for all ω ∈ (−π, π), so
  C_η(τ) = (1/2π) ∫_{−π}^{π} D(e^{−iω}) V D(e^{iω})' e^{iωτ} dω = C_Y(τ),
  which establishes the result sought.
Six Results from the Frequency Domain...
- Orthogonal decomposition result. Let D_1, D_2, ..., D_N denote a partition of the interval (−π, π) into N pieces, with (a_i, b_i) corresponding to each D_i, ω_i an arbitrary element of [a_i, b_i], and
  D_i ∩ D_j = ∅ for all i ≠ j, D_1 ∪ D_2 ∪ ... ∪ D_N = [−π, π].
- Write Y_{it} = F_{D_i}(L) Y_t. Then an obvious extension of Result #5 yields
  Σ_{i=1}^N Y_{it} = Y_t, Y_{it} ⊥ Y_{jt} for all i ≠ j.
- Thus Y_{it}, i = 1, ..., N, represents an orthogonal decomposition of Y_t.
- Note that the variance of Y_{it} is
  C_{Y_i}(0) = (1/2π) [ ∫_{−b_i}^{−a_i} S_Y(e^{−iω}) dω + ∫_{a_i}^{b_i} S_Y(e^{−iω}) dω ].
Six Results from the Frequency Domain...
- By a simple change-of-variable argument,
  ∫_{−b_i}^{−a_i} S_Y(e^{−iω}) dω = ∫_{a_i}^{b_i} S_Y(e^{−iω}) dω,
  so that
  C_{Y_i}(0) = (1/π) ∫_{a_i}^{b_i} S_Y(e^{−iω}) dω.
- Suppose we have a very fine partition, D_1, ..., D_N, with N large. By continuity of S_Y(e^{−iω}) with respect to ω,
  C_{Y_i}(0) = (1/π) ∫_{a_i}^{b_i} S_Y(e^{−iω}) dω ≈ (1/π) S_Y(e^{−iω_i}) (b_i − a_i).
- We have established Result #6: a stationary stochastic process {Y_t} can be decomposed into orthogonal frequency components, each with variance proportional to S_Y(e^{−iω}), for ω ∈ (0, π).
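Result #6 has an exact finite-sample analogue via the FFT: masking disjoint frequency bands yields components that sum back to the original series and are mutually orthogonal. A sketch (our illustration; the series and band edges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 512
y = np.convolve(rng.standard_normal(T), [1.0, 0.5], 'same')  # a stationary-looking series

Y = np.fft.fft(y)
freqs = np.fft.fftfreq(T)            # cycles per period, in [-0.5, 0.5)
bands = [(0.0, 0.1), (0.1, 0.3), (0.3, 0.51)]  # partition of |frequency|
components = []
for a, b in bands:
    mask = (np.abs(freqs) >= a) & (np.abs(freqs) < b)
    components.append(np.fft.ifft(Y * mask).real)

assert np.allclose(sum(components), y)            # components add back to the series
assert abs(components[0] @ components[1]) < 1e-6  # disjoint bands are orthogonal
```

Orthogonality here follows from Parseval's relation: the inner product of two components equals a sum over frequencies where their masks overlap, which is empty by construction.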
Six Results from the Frequency Domain...
- Alternative route to Result #6. Spectral decomposition theorem: any covariance-stationary process can be written
  Y_t = ∫_0^π [a(ω) cos(ωt) + b(ω) sin(ωt)] dω,
  where the four random variables a(ω), a(ω'), b(ω), b(ω') are independent of each other for all ω ≠ ω' in (0, π).
- Note: the spectral decomposition theorem also provides an additive decomposition of Y_t across frequencies.
- cos and sin are periodic with period 2π: cos(ωt) = cos(ωt') for ωt' = ωt + 2π, i.e., t' − t = 2π/ω. So frequency ω corresponds to period 2π/ω.
- This completes the tour of the frequency domain!
Back to VARs
- The VAR:
  X_t = B_1 X_{t−1} + B_2 X_{t−2} + ... + B_p X_{t−p} + u_t, u_t = C ε_t, CC' = V,
  D ≡ [I − B(1)]^{−1} C.
- Identification:
  - exclusion restriction: the first row of D is (number, 0, ..., 0), i.e., the (1,2) element of D is zero;
  - sign restriction: the (1,1) element of D is positive.
- To find C_1, solve:
  DD' = [I − B(1)]^{−1} V [I − B(1)']^{−1}, C_1 = [I − B(1)] D_1.
The Frequency-Zero Spectral Density
- Note:
  DD' = [I − (B_1 + ... + B_p)]^{−1} V [I − (B_1 + ... + B_p)']^{−1} ≡ S_0.
- What is S_0? The spectral density of Y_t (Result #1):
  S_Y(e^{−iω}) = [I − B(e^{−iω}) e^{−iω}]^{−1} V ([I − B(e^{iω}) e^{iω}]')^{−1}
  = C(0) + C(1)e^{−iω} + C(2)e^{−2iω} + C(3)e^{−3iω} + ... + C(−1)e^{iω} + C(−2)e^{2iω} + C(−3)e^{3iω} + ...
- So S_0 is the zero-frequency spectral density of Y_t:
  S_0 = S_Y(e^{−i·0}) = Σ_{k=−∞}^{∞} C(k), C(k) = E Y_t Y_{t−k}'.
The Frequency-Zero Spectral Density
- So S_0 is the zero-frequency spectral density of Y_t:
  S_0 = Σ_{k=−∞}^{∞} C(k), C(k) = E Y_t Y_{t−k}'.
- Recall:
  DD' = [I − (B_1 + ... + B_p)]^{−1} V [I − (B_1 + ... + B_p)']^{−1} = S_0.
- An alternative way to compute D_1 (and, hence, C_1) is to use a different estimator of S_0:
  S̃_0 = Σ_{k=−r}^{r} (1 − |k|/r) Ĉ(k), Ĉ(k) = (1/T) Σ_{t=k+1}^{T} Y_t Y_{t−k}'.
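The alternative estimator above is the Bartlett-window estimator of S_0. A numpy sketch following the slide's formula, using Ĉ(−k) = Ĉ(k)' (our illustration, not the authors' code):

```python
import numpy as np

def bartlett_S0(Y, r):
    """Nonparametric estimate of the zero-frequency spectral density of Y (T x n),
    using a Bartlett window with bandwidth r, per the formula on the slide."""
    Y = Y - Y.mean(axis=0)
    T = Y.shape[0]

    def Chat(k):
        # sample autocovariance: (1/T) sum_t Y_t Y'_{t-k}
        return (Y[k:].T @ Y[:T - k]) / T

    S0 = Chat(0)
    for k in range(1, r + 1):
        w = 1.0 - k / r                      # Bartlett weight, 0 at k = r
        S0 = S0 + w * (Chat(k) + Chat(k).T)  # k and -k terms together
    return S0
```

For white noise the estimator should recover the innovation covariance, which gives a simple sanity check.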
The Frequency-Zero Spectral Density...
- The modified SVAR procedure is similar to extending the lag length, but nonparametric.
Experiments with Estimated Models
- Simulate 1000 data sets, each of length 180 observations, using the DSGE model as the data-generating mechanism.
- On each data set, estimate a four-lag VAR.
- Report the mean impulse response function across the 1000 synthetic data sets (solid black line).
- Report the mean plus/minus two standard deviations of the estimator (grey area).
- Report the mean of the econometrician's confidence interval across the 1000 synthetic data sets (circles).
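Each Monte Carlo draw requires fitting the VAR by equation-by-equation OLS. A minimal, self-contained estimator of that step (our sketch, with no intercept since the simulated data are mean-adjusted; not the authors' code):

```python
import numpy as np

def estimate_var(Y, p):
    """OLS fit of Y_t = B_1 Y_{t-1} + ... + B_p Y_{t-p} + u_t.

    Y : T x n array of (demeaned) observations
    Returns ([B_1, ..., B_p], Vhat)."""
    T, n = Y.shape
    # Row t of X stacks [Y_{t-1}, ..., Y_{t-p}] for the regressand Y_t, t = p..T-1
    X = np.hstack([Y[p - j - 1:T - j - 1] for j in range(p)])
    Yp = Y[p:]
    B = np.linalg.lstsq(X, Yp, rcond=None)[0].T   # n x (n*p) stacked coefficients
    U = Yp - X @ B.T                              # OLS residuals
    Vhat = U.T @ U / len(U)
    return [B[:, j * n:(j + 1) * n] for j in range(p)], Vhat
```

In a full experiment this would be called on each of the 1000 simulated samples with p = 4, then combined with the identification step to produce an impulse response per sample.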
Response of Hours to a Technology Shock (Long-Run Identification Assumption). Panels: MLE specification; CKM specification. [figure]
Diagnosing the Results
- What is going on in the CKM example: why is there bias?
- The problem lies in the difficulty of estimating the sum of the VAR coefficients. Recall: C_1 = f_LR(V, B_1 + ... + B_p).
- Regressions only care about B_1 + ... + B_p if there is lots of power at low frequencies.
Sims Approximation Theorem
- Suppose that the true VAR has the following representation:
  Y_t = B(L) Y_{t−1} + u_t, u_t ⊥ Y_{t−s}, s > 0.
- The econometrician estimates a finite-parameter approximation to B(L):
  Y_t = B̂_1 Y_{t−1} + B̂_2 Y_{t−2} + ... + B̂_p Y_{t−p} + û_t, E û_t û_t' = V̂,
  Ĉ = [Ĉ_1 . Ĉ_2], ε_t = (ε_t^z, ε_{2t})', Ĉ_1 = f_LR(V̂, B̂_1 + ... + B̂_p).
- Concern: B̂(L) may have too few lags (p too small). How does this specification error affect inference about impulse responses?
Sims Approximation Theorem...
- OLS estimation focuses on the residual:
  û_t = Y_t − B̂(L) Y_{t−1} = [B(L) − B̂(L)] Y_{t−1} + u_t.
- By orthogonality of u_t and past Y's:
  Var(û_t) = Var([B(L) − B̂(L)] Y_{t−1}) + V
  = (1/2π) ∫_{−π}^{π} [B(e^{−iω}) − B̂(e^{−iω})] S_Y(e^{−iω}) [B(e^{iω}) − B̂(e^{iω})]' dω + V,
  where B(e^{−iω}) = B_1 + B_2 e^{−iω} + B_3 e^{−2iω} + ... and B̂(e^{−iω}) = B̂_1 + B̂_2 e^{−iω} + ... + B̂_p e^{−(p−1)iω}.
Sims Approximation Theorem...
- In population, B̂ and V̂ are chosen to solve (Sims, 1972):
  V̂ = min_{B̂} (1/2π) ∫_{−π}^{π} [B(e^{−iω}) − B̂(e^{−iω})] S_Y(e^{−iω}) [B(e^{iω}) − B̂(e^{iω})]' dω + V.
- With no specification error: B̂(L) = B(L), V̂ = V.
- With short lag lengths: V̂ is accurate, but B̂_1 + ... + B̂_p is accurate only by chance (i.e., only if S_Y(e^{−i·0}) is large).
- So there is no reason to expect Ŝ_0 to be accurate.
Implications of the Sims Approximation Formula
- The CKM example would have had less bias if:
  - the VAR did better on the low-frequency part of the estimation, or
  - there were more low-frequency power in the CKM example.
Improving on the Low-Frequency Part of the Standard SVAR Procedure
- Replace the Ŝ_0 implicit in the standard SVAR procedure with a nonparametric estimator of S_0.
The Importance of Frequency Zero. Columns: standard method; Bartlett window. Rows: MLE specification; CKM specification. [figure]
Injecting More Power into Low Frequencies
- Preference shock in the CKM example:
  τ_{l,t} = (1 − 0.952) τ_l + 0.952 τ_{l,t−1} + 0.0136 ε_t^l.
- Replace it with
  τ_{l,t} = (1 − 0.995) τ_l + 0.995 τ_{l,t−1} + σ ε_t^l,
  where σ is adjusted so that Var(τ_{l,t}) is unchanged.
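Since the stationary variance of an AR(1) is σ²/(1 − ρ²), holding Var(τ_{l,t}) fixed pins down the new innovation standard deviation. A one-line computation (our arithmetic, not from the slides):

```python
# Var(tau_l) = sigma^2 / (1 - rho^2); hold it fixed while rho moves from 0.952 to 0.995
rho_old, rho_new, sigma_old = 0.952, 0.995, 0.0136
sigma_new = sigma_old * ((1 - rho_new ** 2) / (1 - rho_old ** 2)) ** 0.5
# sigma_new is roughly 0.0044, about a third of the original innovation std. dev.
```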
CKM Baseline Model except ρ_l = 0.995. [figure]
Sims Approximation Theorem...
- Example: CKM specification. First lag matrices in the infinite-order VAR representation:
  B_1 = [0.013 0.041; 0.0065 0.94], B_2 = [0.012 0.00; 0.0062 0.00], B_3 = [0.012 0.00; 0.0059 0.00], ...,
  with B_j = 0.955 B_{j−1}, j = 3, 4, ....
- Population estimate of the four-lag VAR:
  B̂_1 = [0.017 0.043; 0.0087 0.94], B̂_2 = [0.017 0.00; 0.0085 0.00], B̂_3 = [0.012 0.00; 0.0059 0.00], ...
- Actual and estimated sums of VAR coefficients:
  B̂(1) = [0.050 0.032; 0.026 0.94], B(1) = [0.28 0.022; 0.14 0.93], Σ_{j=1}^4 B_j = [0.047 0.039; 0.024 0.94].
Sims Approximation Theorem...
[figure: response with standard long-run identification; true responses vs. the response using the estimated B's and the true C_1]
  Y_t = [I − B̂_1 L − B̂_2 L^2 − B̂_3 L^3 − B̂_4 L^4]^{−1} Ĉ_1 ε_{1,t}, Ĉ_1 = f_LR(V̂, B̂_1 + ... + B̂_p).
Reconciling with CKM
- CKM conclude that simulation experiments indicate long-run SVARs are not fruitful for building DSGE models, and present an empirical example which suggests VARs are unreasonably sensitive to minor data changes.
- We disagree, for three reasons:
  - The data overwhelmingly reject CKM's parameterization.
  - Even if the world did correspond to one of CKM's examples, no econometrician would be misled: an econometrician who pays attention to standard errors is not misled, and in the example technology shocks have almost nothing to do with hours worked.
  - In CKM's empirical example, the data changes are substantial.
CKM's Baseline Model is Rejected by the Data
- CKM estimate their baseline model using MLE with measurement error.
- Let Y_t = (Δ log y_t, log l_t, Δ log i_t, Δ log G_t)'.
- Observation equation: Y_t = X_t + u_t, E u_t u_t' = R,
  where R is a diagonal matrix, u_t is a 4 × 1 vector of iid measurement error, and X_t collects the model's implications for Y_t.
CKM's Baseline Model is Rejected by the Data...
- CKM allow for four shocks, (τ_{l,t}, z_t, τ_{x,t}, g_t), with G_t = g_t z_t.
- CKM fix the elements on the diagonal of R to equal (1/100) Var(Y_t).
- For purposes of estimating the baseline model, they assume g_t = ḡ and τ_{x,t} = τ̄_x, so
  Δ log G_t = Δ log z_t + small measurement error.
Figure 11: The Treatment of CKM Measurement Error. Panels show the hours response (percent, periods 0-10 after the shock) with log-likelihood values: CKM specification, July measurement error (LLF = 329); July measurement error, no G (LLF = 1833); May measurement error (LLF = 2590); May measurement error, no G (LLF = 2034); free measurement error (LLF = 2804); free measurement error, no G (LLF = 2188). [figure]
CKM Present an Empirical Example Which Illustrates Unreasonable Sensitivity of VARs
- Three studies of the response of hours worked to technology: Francis and Ramey (2004), Christiano-Eichenbaum-Vigfusson, and Gali and Rabanal (2004). They use different measures of hours worked.
- The VAR-based estimate of the response of hours worked to technology is sensitive to the measure of hours worked used.
- Throwing out VARs because of this sensitivity is like throwing away the messenger when there is bad news.
- A better response: dig deeper into the different measures of hours worked to understand why they have such different properties.
Figure 13: Data Sensitivity and Inference in VARs. Top rows: estimated hours response (percent, periods 0-10 after the shock) starting in 1948 and starting in 1959, for the FR, CEV, and GR data. Bottom panel: hours per capita (2000q1 = 1), 1950-2000, for FR, CEV, and GR. [figure]
A Summing Up So Far
- With long-run restrictions, for RBC models that fit the data well, structural VARs work well.
- Examples can be found with some bias; this reflects the difficulty of estimating the sum of the VAR coefficients, and the bias is small relative to sampling uncertainty.
- The econometrician would correctly assess sampling uncertainty.
- Golden rule: pay attention to standard errors!
Turning to SVARs with Short-Run Identifying Restrictions
- The bulk of the SVAR literature is concerned with short-run identification.
- Substantive economic issues hinge on the accuracy of SVARs with short-run identification: see Ed Green's review of Mike Woodford's recent book on monetary economics.
- Recent monetary DSGE models deviate from the original rational expectations models (Lucas-Prescott, Lucas, Kydland-Prescott, Sargent, Long-Plosser, and Lucas-Stokey) by incorporating various frictions, motivated by the analysis of SVARs with short-run identification.
SVARs with Short-Run Identifying Restrictions
- We adapt our conventional RBC model to study VARs identified with short-run restrictions.
- Results based on short-run restrictions allow us to diagnose results based on long-run restrictions.
- Recursive version of the RBC model:
  - First, τ_{l,t} is observed.
  - Second, the labor decision is made.
  - Third, the other shocks are realized.
  - Then, everything else happens.
The Recursive Version of the RBC Model
- Key short-run restrictions:
  log l_t = f(ε_{l,t}, lagged shocks)
  log(y_t / l_t) = g(ε_t^z, ε_{l,t}, lagged shocks).
- To recover ε_t^z: regress log(y_t / l_t) on log l_t; the residual is a measure of ε_t^z.
- This procedure maps into an SVAR identified with a Choleski decomposition of V̂.
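The regression-residual construction above is equivalent to a Choleski factorization of the innovation covariance with hours ordered before productivity: the residual variance from regressing the productivity innovation on the hours innovation equals the squared (2,2) element of the lower-triangular factor. A sketch (the two-variable ordering is our assumption, matching the recursive timing described in the text):

```python
import numpy as np

def choleski_C(V):
    """Short-run identification: C is the lower-triangular Choleski factor of V.
    Variable 1 = hours innovation, variable 2 = productivity innovation (assumed
    ordering); the second shock then coincides with the regression residual."""
    return np.linalg.cholesky(V)
```

The equivalence is the textbook identity: regressing u_2 on u_1 leaves residual variance V_22 − V_12² / V_11, which is exactly C[1,1]².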
The Recursive Version of the RBC Model...
- The estimated VAR:
  Y_t = B_1 Y_{t−1} + B_2 Y_{t−2} + ... + B_p Y_{t−p} + u_t, E u_t u_t' = V,
  u_t = C ε_t, CC' = V, C = [C_1 . C_2], ε_t = (ε_t^z, ε_{2t})'.
- Impulse response functions require B_1, ..., B_p and C_1.
- The short-run restrictions uniquely pin down C_1: C_1 = f_SR(V̂).
- Note: the sum of the VAR coefficients is not needed.
Response of Hours to a Technology Shock (Short-Run Identification Assumption). Panels: MLE specification (recursive version); CKM specification (recursive version). [figure]
SVARs with Short-Run Restrictions
- Perform remarkably well: inference is precise and correct.
VARs and Models with Nominal Frictions
- Data-generating mechanism: an estimated DSGE model embodying nominal wage and price frictions as well as real and monetary shocks, ACEL (2004).
- Three shocks: a neutral shock to technology, a shock to capital-embodied technology, and a shock to monetary policy.
- Each shock accounts for about 1/3 of cyclical output variance in the model.
Analysis of VARs Using the ACEL Model as DGP. Panels: neutral technology shock; investment-specific technology shock; monetary policy shock. [figure]
Conclusion
- SVARs address the question: how does the economy respond to a particular shock?
- When the data contain a lot of information, SVARs provide a reliable answer.
- When the data contain little information, SVARs indicate this correctly, by delivering large (accurate) standard errors.
Conclusion...
- In the case of short-run restrictions, the answer is typically that there is enough information to answer the question reliably.
- In the case of long-run restrictions, the answer is often that there isn't enough information to answer the question reliably.
- Still, there are examples where SVARs do pick up useful information.
[figure: inflation response, periods 0-15]