Perturbation Methods


Perturbation Methods. Jesús Fernández-Villaverde, University of Pennsylvania. May 28, 2015.

Introduction. Remember that we want to solve a functional equation of the form $H(d) = 0$ for an unknown decision rule $d$. Perturbation solves the problem by specifying
$$d^{n}(x, \theta) = \sum_{i=0}^{n} \theta_i (x - x_0)^i$$
We use implicit-function theorems to find the coefficients $\theta_i$. The approximation is inherently local; however, it often has good global properties.

Motivation. Many complicated mathematical problems have either (1) a particular case or (2) a related problem that is easy to solve. Often, we can use the solution of the simpler problem as a building block of the general solution. This approach has been very successful in physics. Perturbation methods are sometimes known as asymptotic methods.

The World's Simplest Perturbation. What is $\sqrt{26}$? Without your iPhone calculator, it is a boring arithmetic calculation. But note that:
$$\sqrt{26} = \sqrt{25(1 + 0.04)} = 5\sqrt{1.04} \approx 5 \times 1.02 = 5.1$$
The exact solution is 5.099. We have solved a much simpler problem ($\sqrt{25}$) and added a small correction to it. More generally,
$$y = \sqrt{x^2(1 + \varepsilon)} = x\sqrt{1 + \varepsilon}$$
where $x$ is an integer and $\varepsilon$ is the perturbation parameter.
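
A quick numerical check of this expansion (a minimal sketch; the variable names and the second-order correction term are ours):

```python
import math

# sqrt(26) = 5 * sqrt(1 + 0.04): expand sqrt(1 + eps) around eps = 0
x, eps = 5.0, 0.04

first_order = x * (1 + eps / 2)                  # 5 * 1.02 = 5.1000
second_order = x * (1 + eps / 2 - eps ** 2 / 8)  # adds the next term: 5.0990
print(first_order, second_order, math.sqrt(26))  # exact: 5.0990195...
```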

Applications to Economics. Judd and Guu (1993) showed how to apply perturbation to economic problems. Recently, perturbation methods have been gaining much popularity. In particular, second- and third-order approximations are easy to compute and notably improve accuracy. A first-order perturbation and a linearization deliver the same output, so we can use much of what we already know about linearization.

Regular versus Singular Perturbations. Regular perturbation: a small change in the problem induces a small change in the solution. Singular perturbation: a small change in the problem induces a large change in the solution (example: an excess demand function). Most problems in economics involve regular perturbations. Sometimes, however, we can have singularities. Example: introducing a new asset in an incomplete-markets model.

References. General: (1) A First Look at Perturbation Theory by James G. Simmonds and James E. Mann Jr.; (2) Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory by Carl M. Bender and Steven A. Orszag. Economics: (1) Perturbation Methods for General Dynamic Stochastic Models by Hehui Jin and Kenneth Judd; (2) Perturbation Methods with Nonlinear Changes of Variables by Kenneth Judd; (3) a gentle introduction: Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function by Stephanie Schmitt-Grohé and Martín Uribe.

A Baby Example: A Basic RBC Model.
$$\max E_0 \sum_{t=0}^{\infty} \beta^t \log c_t$$
s.t. $c_t + k_{t+1} = e^{z_t} k_t^{\alpha} + (1-\delta) k_t$ for all $t > 0$, and $z_t = \rho z_{t-1} + \sigma \varepsilon_t$, $\varepsilon_t \sim N(0,1)$.
Equilibrium conditions:
$$\frac{1}{c_t} = \beta E_t \left[ \frac{1}{c_{t+1}} \left( 1 + \alpha e^{z_{t+1}} k_{t+1}^{\alpha-1} - \delta \right) \right]$$
$$c_t + k_{t+1} = e^{z_t} k_t^{\alpha} + (1-\delta) k_t$$
$$z_t = \rho z_{t-1} + \sigma \varepsilon_t$$

Computing a Solution. The previous problem does not have a known paper-and-pencil solution except when (unrealistically) $\delta = 1$. Then, the income and substitution effects of a technology shock cancel each other (labor is constant and consumption is a fixed fraction of income). Equilibrium conditions with $\delta = 1$:
$$\frac{1}{c_t} = \beta E_t \left[ \frac{\alpha e^{z_{t+1}} k_{t+1}^{\alpha-1}}{c_{t+1}} \right]$$
$$c_t + k_{t+1} = e^{z_t} k_t^{\alpha}$$
$$z_t = \rho z_{t-1} + \sigma \varepsilon_t$$
By guess and verify:
$$c_t = (1 - \alpha\beta) e^{z_t} k_t^{\alpha}, \qquad k_{t+1} = \alpha\beta e^{z_t} k_t^{\alpha}$$
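
A minimal numerical check that the guess-and-verify policies satisfy the Euler equation (the test point and parameter values are our own choices):

```python
import numpy as np

alpha, beta = 0.33, 0.99

def c_exact(k, z):
    # consumption policy under delta = 1: c = (1 - alpha*beta) e^z k^alpha
    return (1 - alpha * beta) * np.exp(z) * k ** alpha

def k_next_exact(k, z):
    # capital policy under delta = 1: k' = alpha*beta e^z k^alpha
    return alpha * beta * np.exp(z) * k ** alpha

# With these policies the term inside E_t does not depend on z_{t+1},
# so the expectation is trivial and the residual is exactly zero.
k, z, z_next = 0.15, 0.02, -0.01
kp = k_next_exact(k, z)
lhs = 1.0 / c_exact(k, z)
rhs = beta * alpha * np.exp(z_next) * kp ** (alpha - 1) / c_exact(kp, z_next)
print(lhs - rhs)   # ~0 up to floating-point error
```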

Another Way to Solve the Problem. Now let us suppose that you missed the lecture when guess-and-verify was explained. You need to compute the RBC model. What are you searching for? A decision rule for consumption, $c_t = c(k_t, z_t)$, and another one for capital, $k_{t+1} = k(k_t, z_t)$. Note that our $d$ is just the stack of $c(k_t, z_t)$ and $k(k_t, z_t)$.

Equilibrium Conditions. We substitute the budget constraint and the law of motion for technology into the equilibrium conditions, and we write the decision rules explicitly as functions of the states. Then:
$$\frac{1}{c(k_t, z_t)} = \beta E_t \left[ \frac{\alpha e^{\rho z_t + \sigma \varepsilon_{t+1}} k(k_t, z_t)^{\alpha-1}}{c(k(k_t, z_t), \rho z_t + \sigma \varepsilon_{t+1})} \right]$$
$$c(k_t, z_t) + k(k_t, z_t) = e^{z_t} k_t^{\alpha}$$
This is a system of functional equations.

Main Idea. Transform the problem by rewriting it in terms of a small perturbation parameter. Solve the new problem for a particular choice of the perturbation parameter (this step is usually ambiguous, since there are different ways to do so). Use the previous solution to approximate the solution of the original problem.

A Perturbation Approach. Hence, we want to transform the problem. Which perturbation parameter? The standard deviation $\sigma$. Why $\sigma$? Discrete versus continuous time. Setting $\sigma = 0$ gives a deterministic model with $z_t = 0$ and $e^{z_t} = 1$, and we know how to solve for the deterministic steady state.

A Parametrized Decision Rule. We search for decision rules $c_t = c(k_t, z_t; \sigma)$ and $k_{t+1} = k(k_t, z_t; \sigma)$. Note the new argument $\sigma$. We are building a local approximation around $\sigma = 0$.

Taylor's Theorem. Equilibrium conditions:
$$E_t \left[ \frac{1}{c(k_t, z_t; \sigma)} - \beta \frac{\alpha e^{\rho z_t + \sigma \varepsilon_{t+1}} k(k_t, z_t; \sigma)^{\alpha-1}}{c(k(k_t, z_t; \sigma), \rho z_t + \sigma \varepsilon_{t+1}; \sigma)} \right] = 0$$
$$c(k_t, z_t; \sigma) + k(k_t, z_t; \sigma) - e^{z_t} k_t^{\alpha} = 0$$
We will take derivatives with respect to $k_t$, $z_t$, and $\sigma$, and apply Taylor's theorem to build the solution around the deterministic steady state. How?

Asymptotic Expansion I. The expansion of the consumption decision rule around $(k, 0, 0)$:
$$c_t = c(k_t, z_t; \sigma) \simeq c(k, 0; 0) + c_k(k,0;0)(k_t - k) + c_z(k,0;0) z_t + c_\sigma(k,0;0)\sigma$$
$$+ \tfrac{1}{2} c_{kk}(k,0;0)(k_t-k)^2 + \tfrac{1}{2} c_{kz}(k,0;0)(k_t-k) z_t + \tfrac{1}{2} c_{k\sigma}(k,0;0)(k_t-k)\sigma$$
$$+ \tfrac{1}{2} c_{zk}(k,0;0) z_t (k_t-k) + \tfrac{1}{2} c_{zz}(k,0;0) z_t^2 + \tfrac{1}{2} c_{z\sigma}(k,0;0) z_t \sigma$$
$$+ \tfrac{1}{2} c_{\sigma k}(k,0;0)\sigma(k_t-k) + \tfrac{1}{2} c_{\sigma z}(k,0;0)\sigma z_t + \tfrac{1}{2} c_{\sigma^2}(k,0;0)\sigma^2 + \ldots$$

Asymptotic Expansion II. The expansion of the capital decision rule around $(k, 0, 0)$:
$$k_{t+1} = k(k_t, z_t; \sigma) \simeq k(k, 0; 0) + k_k(k,0;0)(k_t - k) + k_z(k,0;0) z_t + k_\sigma(k,0;0)\sigma$$
$$+ \tfrac{1}{2} k_{kk}(k,0;0)(k_t-k)^2 + \tfrac{1}{2} k_{kz}(k,0;0)(k_t-k) z_t + \tfrac{1}{2} k_{k\sigma}(k,0;0)(k_t-k)\sigma$$
$$+ \tfrac{1}{2} k_{zk}(k,0;0) z_t (k_t-k) + \tfrac{1}{2} k_{zz}(k,0;0) z_t^2 + \tfrac{1}{2} k_{z\sigma}(k,0;0) z_t \sigma$$
$$+ \tfrac{1}{2} k_{\sigma k}(k,0;0)\sigma(k_t-k) + \tfrac{1}{2} k_{\sigma z}(k,0;0)\sigma z_t + \tfrac{1}{2} k_{\sigma^2}(k,0;0)\sigma^2 + \ldots$$

Comment on Notation. From now on, to save on notation, I will write
$$F(k_t, z_t; \sigma) = E_t \begin{bmatrix} \frac{1}{c(k_t,z_t;\sigma)} - \beta \frac{\alpha e^{\rho z_t + \sigma\varepsilon_{t+1}} k(k_t,z_t;\sigma)^{\alpha-1}}{c(k(k_t,z_t;\sigma),\, \rho z_t + \sigma\varepsilon_{t+1};\sigma)} \\ c(k_t,z_t;\sigma) + k(k_t,z_t;\sigma) - e^{z_t} k_t^{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
Note that:
$$F(k_t, z_t; \sigma) = H(c_t, c_{t+1}, k_t, k_{t+1}, z_t; \sigma) = H\big(c(k_t, z_t; \sigma), c(k(k_t, z_t; \sigma), z_{t+1}; \sigma), k_t, k(k_t, z_t; \sigma), z_t; \sigma\big)$$
I will use $H_i$ to represent the partial derivative of $H$ with respect to its $i$-th argument and drop the evaluation at the steady state when we do not need it.

Zeroth-Order Approximation. First, we evaluate at $\sigma = 0$: $F(k_t, 0; 0) = 0$. Steady state:
$$\frac{1}{c} = \beta \frac{\alpha k^{\alpha-1}}{c}, \quad \text{or} \quad 1 = \alpha\beta k^{\alpha-1}$$
Then:
$$c = c(k, 0; 0) = (\alpha\beta)^{\frac{\alpha}{1-\alpha}} - (\alpha\beta)^{\frac{1}{1-\alpha}}, \qquad k = k(k, 0; 0) = (\alpha\beta)^{\frac{1}{1-\alpha}}$$
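
A minimal sketch of this zeroth-order step, using the parameter values from the numerical example later in the slides:

```python
alpha, beta = 0.33, 0.99   # values of the numerical example below

k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))
c_ss = (alpha * beta) ** (alpha / (1.0 - alpha)) - k_ss
print(k_ss, c_ss)          # approximately 0.1883 and 0.388069
```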

First-Order Approximation. We take derivatives of $F(k_t, z_t; \sigma)$ around $(k, 0, 0)$. With respect to $k_t$: $F_k(k, 0; 0) = 0$. With respect to $z_t$: $F_z(k, 0; 0) = 0$. With respect to $\sigma$: $F_\sigma(k, 0; 0) = 0$.

Solving the System I. Remember that
$$F(k_t, z_t; \sigma) = H\big(c(k_t, z_t; \sigma), c(k(k_t, z_t; \sigma), z_{t+1}; \sigma), k_t, k(k_t, z_t; \sigma), z_t; \sigma\big) = 0$$
Because $F(k_t, z_t; \sigma)$ must be equal to zero for any possible values of $k_t$, $z_t$, and $\sigma$, the derivatives of any order of $F$ must also be zero. Then:
$$F_k(k, 0; 0) = H_1 c_k + H_2 c_k k_k + H_3 + H_4 k_k = 0$$
$$F_z(k, 0; 0) = H_1 c_z + H_2 (c_k k_z + c_z \rho) + H_4 k_z + H_5 = 0$$
$$F_\sigma(k, 0; 0) = H_1 c_\sigma + H_2 (c_k k_\sigma + c_\sigma) + H_4 k_\sigma + H_6 = 0$$

Solving the System II. A quadratic system:
$$F_k(k, 0; 0) = H_1 c_k + H_2 c_k k_k + H_3 + H_4 k_k = 0$$
$$F_z(k, 0; 0) = H_1 c_z + H_2 (c_k k_z + c_z \rho) + H_4 k_z + H_5 = 0$$
of 4 equations in 4 unknowns: $c_k$, $c_z$, $k_k$, and $k_z$. Procedures to solve quadratic systems: (1) Blanchard and Kahn (1980); (2) Uhlig (1999); (3) Sims (2000); (4) Klein (2000). All of them are equivalent. Why quadratic? Stable and unstable manifolds.

Solving the System III. Also, note that
$$F_\sigma(k, 0; 0) = H_1 c_\sigma + H_2 (c_k k_\sigma + c_\sigma) + H_4 k_\sigma + H_6 = 0$$
is a linear and homogeneous system in $c_\sigma$ and $k_\sigma$. Hence $c_\sigma = k_\sigma = 0$: the solution is certainty equivalent at first order. Interpretation: no precautionary behavior. Note the difference between risk aversion and precautionary behavior (Leland, 1968; Kimball, 1990): risk aversion depends on the second derivative of utility (concave utility), while precautionary behavior depends on the third derivative (convex marginal utility).

Comparison with LQ and Linearization. After Kydland and Prescott (1982), a popular method to solve economic models has been to find an LQ approximation of the objective function of the agents. A close relative: linearization of the equilibrium conditions. When properly implemented, linearization, LQ, and first-order perturbation are equivalent. Advantages of perturbation: (1) theorems; (2) higher-order terms.

Some Further Comments. Note how we have used a version of the implicit-function theorem, an important tool in economics. Also, we are using Taylor's theorem to approximate the policy function. Alternatives?

Second-Order Approximation. We take second-order derivatives of $F(k_t, z_t; \sigma)$ around $(k, 0, 0)$:
$$F_{kk}(k,0;0) = 0, \quad F_{kz}(k,0;0) = 0, \quad F_{k\sigma}(k,0;0) = 0$$
$$F_{zz}(k,0;0) = 0, \quad F_{z\sigma}(k,0;0) = 0, \quad F_{\sigma\sigma}(k,0;0) = 0$$
Remember Young's theorem! We substitute the coefficients that we already know. This gives a linear system of 12 equations in 12 unknowns. Why linear? The cross-terms in $k\sigma$ and $z\sigma$ are zero, and the same conjecture applies to all terms with odd powers of $\sigma$.

Correction for Risk. We have a term in $\sigma^2$ that captures precautionary behavior. We no longer have certainty equivalence! This is an important advantage of the second-order approximation: it changes the ergodic distribution of the states.

Higher-Order Terms. We can continue the iteration for as long as we want. A great advantage of the procedure: it is recursive! Often, a few iterations will be enough. The required level of accuracy depends on the goal of the exercise: (1) welfare analysis: Kim and Kim (2001); (2) empirical strategies: Fernández-Villaverde, Rubio-Ramírez, and Santos (2006).

A Numerical Example. Parameter values: $\beta = 0.99$, $\alpha = 0.33$, $\rho = 0.0$, $\sigma = 0.01$. Steady state: $c = 0.388069$, $k = 0.1883$.
First-order terms:
$$c_k(k,0;0) = 0.680101, \quad k_k(k,0;0) = 0.33, \quad c_z(k,0;0) = 0.388069, \quad k_z(k,0;0) = 0.1883$$
Second-order terms:
$$c_{kk}(k,0;0) = -2.41990, \quad k_{kk}(k,0;0) = -1.1742, \quad c_{kz}(k,0;0) = 0.680099, \quad k_{kz}(k,0;0) = 0.33$$
$$c_{zz}(k,0;0) = 0.388064, \quad k_{zz}(k,0;0) = 0.1883, \quad c_{\sigma^2}(k,0;0) \approx 0, \quad k_{\sigma^2}(k,0;0) \approx 0$$
$$c_\sigma(k,0;0) = k_\sigma(k,0;0) = c_{k\sigma}(k,0;0) = k_{k\sigma}(k,0;0) = c_{z\sigma}(k,0;0) = k_{z\sigma}(k,0;0) = 0$$

Comparison. The exact solution (with $\delta = 1$) is:
$$c_t = 0.6733\, e^{z_t} k_t^{0.33}, \qquad k_{t+1} = 0.3267\, e^{z_t} k_t^{0.33}$$
and the second-order approximations are:
$$c_t \approx 0.388069 + 0.680101(k_t - k) + 0.388069 z_t - \frac{2.41990}{2}(k_t - k)^2 + 0.680099 (k_t - k) z_t + \frac{0.388064}{2} z_t^2$$
$$k_{t+1} \approx 0.1883 + 0.33 (k_t - k) + 0.1883 z_t - \frac{1.1742}{2}(k_t - k)^2 + 0.33 (k_t - k) z_t + \frac{0.1883}{2} z_t^2$$
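
A small script that evaluates both expressions on a few points (a sketch; the grid of test points is ours, and the coefficients are the ones listed above):

```python
import numpy as np

alpha, beta = 0.33, 0.99
k_ss = (alpha * beta) ** (1 / (1 - alpha))      # 0.1883
c_ss = k_ss ** alpha - k_ss                     # 0.388069

def c_exact(k, z):
    return (1 - alpha * beta) * np.exp(z) * k ** alpha

def c_second_order(k, z):
    dk = k - k_ss
    return (c_ss + 0.680101 * dk + 0.388069 * z
            - 0.5 * 2.41990 * dk ** 2 + 0.680099 * dk * z + 0.5 * 0.388064 * z ** 2)

for k in (0.8 * k_ss, k_ss, 1.2 * k_ss):
    for z in (-0.02, 0.0, 0.02):
        print(f"k={k:.4f} z={z:+.2f}  exact={c_exact(k, z):.6f}  "
              f"2nd order={c_second_order(k, z):.6f}")
```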

A Numerical Example. [Figure omitted in the transcription.]

A Computer. In practice, you do all these approximations with a computer: (1) first-, second-, and third-order: Matlab and Dynare; (2) higher-order: Mathematica, Dynare++, Fortran code by Jin and Judd. The burden: analytical derivatives. Why are numerical derivatives a bad idea? Alternatives: automatic differentiation?

Local Properties of the Solution. Perturbation is a local method: it approximates the solution around the deterministic steady state of the problem, and it is valid within a radius of convergence. What is the radius of convergence of a power series around $x_0$? An $r \in \mathbb{R}_+$ such that the series converges for all $x$ with $|x - x_0| < r$. A remarkable result from complex analysis: the radius of convergence is always equal to the distance from the center to the nearest point where the policy function has a (non-removable) singularity; if no such point exists, the radius of convergence is infinite. Singularity here refers to poles, fractional powers, other branch points, or discontinuities of the function or its derivatives.
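
The mechanics of the radius of convergence can be seen on the square-root example from the beginning of the lecture; this sketch (our own illustration, not the slides') sums the Taylor series of $\sqrt{1+x}$, whose nearest singularity is the branch point at $x = -1$:

```python
import numpy as np
from scipy.special import binom

def sqrt_series(x, order):
    """Partial sum of the Taylor series of sqrt(1 + x) around 0."""
    n = np.arange(order + 1)
    return np.sum(binom(0.5, n) * x ** n)

# The radius of convergence is 1: the series converges for |x| < 1 and
# diverges beyond it, no matter how many terms we add.
for x in (0.5, 0.9, 1.5):
    print(x, [round(sqrt_series(x, N), 4) for N in (5, 20, 60)], np.sqrt(1 + x))
```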

Remarks. Intuition for the theorem: holomorphic functions are analytic, and the distance is measured in the complex plane. Often, we can check numerically that perturbations have good non-local behavior. However, there can be problems with boundaries.

Non-Local Accuracy Test. Proposed by Judd (1992) and Judd and Guu (1997). Given the Euler equation:
$$\frac{1}{c^i(k_t, z_t)} = E_t \left[ \frac{\beta\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i(k^i(k_t, z_t), z_{t+1})} \right]$$
we can define the Euler equation error:
$$EE^i(k_t, z_t) \equiv 1 - c^i(k_t, z_t)\, E_t \left[ \frac{\beta\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i(k^i(k_t, z_t), z_{t+1})} \right]$$
Units of reporting. Interpretation.
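
A minimal sketch of how such an error can be computed for the first-order policy of the baby example, integrating over the innovation with Gauss-Hermite quadrature (the quadrature scaffolding is ours; the policy coefficients are the ones from the numerical example above, and Euler errors are commonly reported in base-10 logarithms):

```python
import numpy as np

alpha, beta, rho, sigma = 0.33, 0.99, 0.0, 0.01
k_ss = (alpha * beta) ** (1 / (1 - alpha))
c_ss = k_ss ** alpha - k_ss

def c1(k, z):   # first-order consumption policy
    return c_ss + 0.680101 * (k - k_ss) + 0.388069 * z

def k1(k, z):   # first-order capital policy
    return k_ss + 0.33 * (k - k_ss) + 0.1883 * z

# Gauss-Hermite nodes and weights, rescaled for a standard normal innovation
nodes, weights = np.polynomial.hermite.hermgauss(11)
eps = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def euler_error(k, z):
    kp = k1(k, z)
    zp = rho * z + sigma * eps
    integrand = beta * alpha * np.exp(zp) * kp ** (alpha - 1) / c1(kp, zp)
    return 1.0 - c1(k, z) * np.dot(w, integrand)

print(np.log10(abs(euler_error(1.2 * k_ss, 0.0))))   # base-10 log of the error
```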

A Numerical Example. [Figure omitted in the transcription.]

The General Case. Most of the previous argument can be easily generalized. The set of equilibrium conditions of many DSGE models can be written (in recursive notation) as
$$E_t H(y, y', x, x') = 0,$$
where $y_t$ is an $n_y \times 1$ vector of controls and $x_t$ is an $n_x \times 1$ vector of states. Define $n = n_x + n_y$. Then $H$ maps $\mathbb{R}^{n_y} \times \mathbb{R}^{n_y} \times \mathbb{R}^{n_x} \times \mathbb{R}^{n_x}$ into $\mathbb{R}^n$.

Partitioning the State Vector. The state vector $x_t$ can be partitioned as $x = [x_1; x_2]$. Here $x_1$ is an $(n_x - n_\epsilon) \times 1$ vector of endogenous state variables and $x_2$ is an $n_\epsilon \times 1$ vector of exogenous state variables. Why do we want to partition the state vector?

Exogenous Stochastic Process. The process has three parts:
$$x_2' = \Lambda x_2 + \sigma \eta_\epsilon \epsilon'$$
(1) The deterministic component $\Lambda x_2$: $\Lambda$ is an $n_\epsilon \times n_\epsilon$ matrix with all eigenvalues of modulus less than one. More generally, $x_2' = \Gamma(x_2) + \sigma \eta_\epsilon \epsilon'$, where $\Gamma$ is a non-linear function such that all eigenvalues of its first derivative evaluated at the non-stochastic steady state lie within the unit circle. (2) The scaled innovation $\eta_\epsilon \epsilon'$, where $\eta_\epsilon$ is a known $n_\epsilon \times n_\epsilon$ matrix and $\epsilon'$ is an $n_\epsilon \times 1$ i.i.d. innovation with bounded support, zero mean, and variance/covariance matrix $I$. (3) The perturbation parameter $\sigma$. We can accommodate very general structures of $x_2$ through changes in the definition of the state space, e.g., stochastic volatility. Note that we do not impose Gaussianity.

The Perturbation Parameter. The scalar $\sigma \geq 0$ is the perturbation parameter. If we set $\sigma = 0$, we have a deterministic model. Important: there is only ONE perturbation parameter; the matrix $\eta_\epsilon$ takes account of the relative sizes of the different shocks. Why bounded support? Samuelson (1970) and Jin and Judd (2002).

Solution of the Model. The solution to the model is of the form:
$$y = g(x; \sigma), \qquad x' = h(x; \sigma) + \sigma \eta \epsilon'$$
where $g$ maps $\mathbb{R}^{n_x} \times \mathbb{R}_+$ into $\mathbb{R}^{n_y}$ and $h$ maps $\mathbb{R}^{n_x} \times \mathbb{R}_+$ into $\mathbb{R}^{n_x}$. The matrix $\eta$ is of order $n_x \times n_\epsilon$ and is given by:
$$\eta = \begin{bmatrix} 0 \\ \eta_\epsilon \end{bmatrix}$$

Perturbation. We wish to find a perturbation approximation of the functions $g$ and $h$ around the non-stochastic steady state, $x_t = \bar{x}$ and $\sigma = 0$. We define the non-stochastic steady state as vectors $(\bar{x}, \bar{y})$ such that $H(\bar{y}, \bar{y}, \bar{x}, \bar{x}) = 0$. Note that $\bar{y} = g(\bar{x}; 0)$ and $\bar{x} = h(\bar{x}; 0)$. This is because, if $\sigma = 0$, then $E_t H = H$.

Plugging in the Proposed Solution. Substituting the proposed solution, we define:
$$F(x; \sigma) \equiv E_t H\big(g(x; \sigma),\ g(h(x; \sigma) + \eta\sigma\epsilon', \sigma),\ x,\ h(x; \sigma) + \eta\sigma\epsilon'\big) = 0$$
Since $F(x; \sigma) = 0$ for any values of $x$ and $\sigma$, the derivatives of any order of $F$ must also be equal to zero. Formally:
$$F_{x^k \sigma^j}(x; \sigma) = 0 \quad \forall x, \sigma, j, k,$$
where $F_{x^k \sigma^j}(x, \sigma)$ denotes the derivative of $F$ with respect to $x$ taken $k$ times and with respect to $\sigma$ taken $j$ times.

First-Order Approximation. We look for approximations to $g$ and $h$ around $(x, \sigma) = (\bar{x}, 0)$:
$$g(x; \sigma) = g(\bar{x}; 0) + g_x(\bar{x}; 0)(x - \bar{x}) + g_\sigma(\bar{x}; 0)\sigma$$
$$h(x; \sigma) = h(\bar{x}; 0) + h_x(\bar{x}; 0)(x - \bar{x}) + h_\sigma(\bar{x}; 0)\sigma$$
As explained earlier, $g(\bar{x}; 0) = \bar{y}$ and $h(\bar{x}; 0) = \bar{x}$. The four unknown coefficients of the first-order approximation to $g$ and $h$ ($g_x$, $g_\sigma$, $h_x$, and $h_\sigma$) are found by using $F_x(\bar{x}; 0) = 0$ and $F_\sigma(\bar{x}; 0) = 0$. Before doing so, I need to introduce tensor notation.

Tensors. A general trick from physics. An $n$-th rank tensor in an $m$-dimensional space is an operator that has $n$ indices and $m^n$ components and obeys certain transformation rules. $[H_y]^i_\alpha$ is the $(i, \alpha)$ element of the derivative of $H$ with respect to $y$: (1) the derivative of $H$ with respect to $y$ is an $n \times n_y$ matrix; (2) thus, $[H_y]^i_\alpha$ is the element of this matrix located at the intersection of the $i$-th row and $\alpha$-th column; (3) thus,
$$[H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_x]^\beta_j = \sum_{\alpha=1}^{n_y} \sum_{\beta=1}^{n_x} \frac{\partial H^i}{\partial y'^\alpha} \frac{\partial g^\alpha}{\partial x^\beta} \frac{\partial h^\beta}{\partial x^j}$$
$[H_{y'y'}]^i_{\alpha\gamma}$: $H_{y'y'}$ is a three-dimensional array with $n$ rows, $n_y$ columns, and $n_y$ pages; $[H_{y'y'}]^i_{\alpha\gamma}$ denotes the element of $H_{y'y'}$ located at the intersection of row $i$, column $\alpha$, and page $\gamma$.
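
In code, these contractions map directly onto numpy's einsum; a minimal sketch with placeholder arrays (random, not model objects) for item (3):

```python
import numpy as np

n_y, n_x = 2, 3
n = n_y + n_x

rng = np.random.default_rng(0)
H_yp = rng.standard_normal((n, n_y))     # [H_y']^i_alpha
g_x = rng.standard_normal((n_y, n_x))    # [g_x]^alpha_beta
h_x = rng.standard_normal((n_x, n_x))    # [h_x]^beta_j

# [H_y']^i_alpha [g_x]^alpha_beta [h_x]^beta_j, summing over alpha and beta
contraction = np.einsum('ia,ab,bj->ij', H_yp, g_x, h_x)
assert np.allclose(contraction, H_yp @ g_x @ h_x)   # same as a matrix product
```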

Solving the System I. $g_x$ and $h_x$ can be found as the solution to the system:
$$[F_x(\bar{x}; 0)]^i_j = [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_x]^\beta_j + [H_y]^i_\alpha [g_x]^\alpha_j + [H_{x'}]^i_\beta [h_x]^\beta_j + [H_x]^i_j = 0;$$
$i = 1, \ldots, n$; $j, \beta = 1, \ldots, n_x$; $\alpha = 1, \ldots, n_y$. Note that the derivatives of $H$ evaluated at $(y, y', x, x') = (\bar{y}, \bar{y}, \bar{x}, \bar{x})$ are known. Then, we have a system of $n \times n_x$ quadratic equations in the $n \times n_x$ unknowns given by the elements of $g_x$ and $h_x$. We can solve it with a standard quadratic matrix equation solver.
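
One common implementation is Klein's (2000) QZ approach, listed earlier among the equivalent procedures. The sketch below is ours, not the slides' code: it assumes the first derivatives of $H$ have been stacked into the linear pencil $A\, E_t[x'; y'] = B\, [x; y]$ with $A = [H_{x'}\ H_{y'}]$ and $B = -[H_x\ H_y]$, and the helper name solve_klein and the toy test system are our own:

```python
import numpy as np
from scipy.linalg import ordqz

def solve_klein(A, B, n_x):
    """First-order solution of A E_t[s'] = B s with s = (x, y) and n_x
    predetermined states: returns h_x (x' = h_x x) and g_x (y = g_x x)."""
    # Order the stable block first: dynamic roots beta/alpha inside the unit
    # circle correspond to generalized eigenvalues alpha/beta outside it.
    S, T, alpha, beta, Q, Z = ordqz(A, B, sort='ouc', output='complex')
    Z11, Z21 = Z[:n_x, :n_x], Z[n_x:, :n_x]
    S11, T11 = S[:n_x, :n_x], T[:n_x, :n_x]
    g_x = Z21 @ np.linalg.inv(Z11)
    h_x = Z11 @ np.linalg.inv(S11) @ T11 @ np.linalg.inv(Z11)
    return np.real(h_x), np.real(g_x)

# Tiny check on a system with a known answer: x' = 0.9 x and y = 2 x
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.9, 0.0], [2.0, -1.0]])
h_x, g_x = solve_klein(A, B, n_x=1)
print(h_x, g_x)   # ~0.9 and ~2.0
```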

Solving the System II. $g_\sigma$ and $h_\sigma$ are identified as the solution to the following $n$ equations:
$$[F_\sigma(\bar{x};0)]^i = E_t \left\{ [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_\sigma]^\beta + [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [\eta]^\beta_\phi [\epsilon']^\phi + [H_{y'}]^i_\alpha [g_\sigma]^\alpha + [H_y]^i_\alpha [g_\sigma]^\alpha + [H_{x'}]^i_\beta [h_\sigma]^\beta + [H_{x'}]^i_\beta [\eta]^\beta_\phi [\epsilon']^\phi \right\}$$
$i = 1, \ldots, n$; $\alpha = 1, \ldots, n_y$; $\beta = 1, \ldots, n_x$; $\phi = 1, \ldots, n_\epsilon$. Taking expectations (the innovation has zero mean):
$$[F_\sigma(\bar{x};0)]^i = [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_\sigma]^\beta + [H_{y'}]^i_\alpha [g_\sigma]^\alpha + [H_y]^i_\alpha [g_\sigma]^\alpha + [H_{x'}]^i_\beta [h_\sigma]^\beta = 0;$$
$i = 1, \ldots, n$; $\alpha = 1, \ldots, n_y$; $\beta = 1, \ldots, n_x$. Certainty equivalence: this equation is linear and homogeneous in $g_\sigma$ and $h_\sigma$. Thus, if a unique solution exists, it must satisfy $h_\sigma = 0$ and $g_\sigma = 0$.

Second-Order Approximation I. The second-order approximation to $g$ around $(x; \sigma) = (\bar{x}; 0)$ is
$$[g(x;\sigma)]^i = [g(\bar{x};0)]^i + [g_x(\bar{x};0)]^i_a [(x-\bar{x})]_a + [g_\sigma(\bar{x};0)]^i [\sigma]$$
$$+ \tfrac{1}{2}[g_{xx}(\bar{x};0)]^i_{ab} [(x-\bar{x})]_a [(x-\bar{x})]_b + \tfrac{1}{2}[g_{x\sigma}(\bar{x};0)]^i_a [(x-\bar{x})]_a [\sigma] + \tfrac{1}{2}[g_{\sigma x}(\bar{x};0)]^i_a [(x-\bar{x})]_a [\sigma] + \tfrac{1}{2}[g_{\sigma\sigma}(\bar{x};0)]^i [\sigma][\sigma]$$
where $i = 1, \ldots, n_y$ and $a, b = 1, \ldots, n_x$.

Second-Order Approximation II. The second-order approximation to $h$ around $(x; \sigma) = (\bar{x}; 0)$ is
$$[h(x;\sigma)]^j = [h(\bar{x};0)]^j + [h_x(\bar{x};0)]^j_a [(x-\bar{x})]_a + [h_\sigma(\bar{x};0)]^j [\sigma]$$
$$+ \tfrac{1}{2}[h_{xx}(\bar{x};0)]^j_{ab} [(x-\bar{x})]_a [(x-\bar{x})]_b + \tfrac{1}{2}[h_{x\sigma}(\bar{x};0)]^j_a [(x-\bar{x})]_a [\sigma] + \tfrac{1}{2}[h_{\sigma x}(\bar{x};0)]^j_a [(x-\bar{x})]_a [\sigma] + \tfrac{1}{2}[h_{\sigma\sigma}(\bar{x};0)]^j [\sigma][\sigma],$$
where $j = 1, \ldots, n_x$ and $a, b = 1, \ldots, n_x$.

Second-Order Approximation III. The unknowns of these expansions are $[g_{xx}]^i_{ab}$, $[g_{x\sigma}]^i_a$, $[g_{\sigma x}]^i_a$, $[g_{\sigma\sigma}]^i$, $[h_{xx}]^j_{ab}$, $[h_{x\sigma}]^j_a$, $[h_{\sigma x}]^j_a$, and $[h_{\sigma\sigma}]^j$. These coefficients can be identified by taking the derivatives of $F(x; \sigma)$ with respect to $x$ and $\sigma$ twice and evaluating them at $(x; \sigma) = (\bar{x}; 0)$. By the arguments provided earlier, these derivatives must be zero.

Solving the System I. We use $F_{xx}(\bar{x}; 0)$ to identify $g_{xx}(\bar{x}; 0)$ and $h_{xx}(\bar{x}; 0)$:
$$\begin{aligned}
[F_{xx}(\bar{x};0)]^i_{jk} =\ & \left( [H_{y'y'}]^i_{\alpha\gamma} [g_x]^\gamma_\delta [h_x]^\delta_k + [H_{y'y}]^i_{\alpha\gamma} [g_x]^\gamma_k + [H_{y'x'}]^i_{\alpha\delta} [h_x]^\delta_k + [H_{y'x}]^i_{\alpha k} \right) [g_x]^\alpha_\beta [h_x]^\beta_j \\
& + [H_{y'}]^i_\alpha [g_{xx}]^\alpha_{\beta\delta} [h_x]^\delta_k [h_x]^\beta_j + [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_{xx}]^\beta_{jk} \\
& + \left( [H_{yy'}]^i_{\alpha\gamma} [g_x]^\gamma_\delta [h_x]^\delta_k + [H_{yy}]^i_{\alpha\gamma} [g_x]^\gamma_k + [H_{yx'}]^i_{\alpha\delta} [h_x]^\delta_k + [H_{yx}]^i_{\alpha k} \right) [g_x]^\alpha_j + [H_y]^i_\alpha [g_{xx}]^\alpha_{jk} \\
& + \left( [H_{x'y'}]^i_{\beta\gamma} [g_x]^\gamma_\delta [h_x]^\delta_k + [H_{x'y}]^i_{\beta\gamma} [g_x]^\gamma_k + [H_{x'x'}]^i_{\beta\delta} [h_x]^\delta_k + [H_{x'x}]^i_{\beta k} \right) [h_x]^\beta_j + [H_{x'}]^i_\beta [h_{xx}]^\beta_{jk} \\
& + [H_{xy'}]^i_{j\gamma} [g_x]^\gamma_\delta [h_x]^\delta_k + [H_{xy}]^i_{j\gamma} [g_x]^\gamma_k + [H_{xx'}]^i_{j\delta} [h_x]^\delta_k + [H_{xx}]^i_{jk} = 0;
\end{aligned}$$
$i = 1, \ldots, n$; $j, k, \beta, \delta = 1, \ldots, n_x$; $\alpha, \gamma = 1, \ldots, n_y$.

Solving the System II. We know the derivatives of $H$, and we also know the first derivatives of $g$ and $h$ evaluated at $(y, y', x, x') = (\bar{y}, \bar{y}, \bar{x}, \bar{x})$. Hence, the above expression represents a system of $n \times n_x \times n_x$ linear equations in the $n \times n_x \times n_x$ unknown elements of $g_{xx}$ and $h_{xx}$.

Solving the System III. Similarly, $g_{\sigma\sigma}$ and $h_{\sigma\sigma}$ can be obtained by solving:
$$\begin{aligned}
[F_{\sigma\sigma}(\bar{x};0)]^i =\ & [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_{\sigma\sigma}]^\beta \\
& + [H_{y'y'}]^i_{\alpha\gamma} [g_x]^\gamma_\delta [\eta]^\delta_\xi [g_x]^\alpha_\beta [\eta]^\beta_\phi [I]^\phi_\xi + [H_{y'x'}]^i_{\alpha\delta} [\eta]^\delta_\xi [g_x]^\alpha_\beta [\eta]^\beta_\phi [I]^\phi_\xi \\
& + [H_{y'}]^i_\alpha [g_{xx}]^\alpha_{\beta\delta} [\eta]^\delta_\xi [\eta]^\beta_\phi [I]^\phi_\xi + [H_{y'}]^i_\alpha [g_{\sigma\sigma}]^\alpha + [H_y]^i_\alpha [g_{\sigma\sigma}]^\alpha + [H_{x'}]^i_\beta [h_{\sigma\sigma}]^\beta \\
& + [H_{x'y'}]^i_{\beta\gamma} [g_x]^\gamma_\delta [\eta]^\delta_\xi [\eta]^\beta_\phi [I]^\phi_\xi + [H_{x'x'}]^i_{\beta\delta} [\eta]^\delta_\xi [\eta]^\beta_\phi [I]^\phi_\xi = 0;
\end{aligned}$$
$i = 1, \ldots, n$; $\alpha, \gamma = 1, \ldots, n_y$; $\beta, \delta = 1, \ldots, n_x$; $\phi, \xi = 1, \ldots, n_\epsilon$. This is a system of $n$ linear equations in the $n$ unknowns given by the elements of $g_{\sigma\sigma}$ and $h_{\sigma\sigma}$.

Cross Derivatives. The cross derivatives $g_{x\sigma}$ and $h_{x\sigma}$ are zero when evaluated at $(\bar{x}, 0)$. Why? Write the system $F_{\sigma x}(\bar{x}; 0) = 0$, taking into account that all terms containing either $g_\sigma$ or $h_\sigma$ are zero at $(\bar{x}, 0)$. Then:
$$[F_{\sigma x}(\bar{x};0)]^i_j = [H_{y'}]^i_\alpha [g_x]^\alpha_\beta [h_{\sigma x}]^\beta_j + [H_{y'}]^i_\alpha [g_{\sigma x}]^\alpha_\gamma [h_x]^\gamma_j + [H_y]^i_\alpha [g_{\sigma x}]^\alpha_j + [H_{x'}]^i_\beta [h_{\sigma x}]^\beta_j = 0;$$
$i = 1, \ldots, n$; $\alpha = 1, \ldots, n_y$; $\beta, \gamma, j = 1, \ldots, n_x$. This is a system of $n \times n_x$ equations in the $n \times n_x$ unknowns given by the elements of $g_{\sigma x}$ and $h_{\sigma x}$. The system is homogeneous in the unknowns; thus, if a unique solution exists, it is given by $g_{\sigma x} = 0$ and $h_{\sigma x} = 0$.

Structure of the Solution. The perturbation solution of the model satisfies:
$$g_\sigma(\bar{x}; 0) = 0, \quad h_\sigma(\bar{x}; 0) = 0, \quad g_{x\sigma}(\bar{x}; 0) = 0, \quad h_{x\sigma}(\bar{x}; 0) = 0$$
The standard deviation only appears in: (1) a constant term given by $\frac{1}{2} g_{\sigma\sigma}\sigma^2$ for the control vector $y_t$; (2) the first $n_x - n_\epsilon$ elements of $\frac{1}{2} h_{\sigma\sigma}\sigma^2$. This is the correction for risk. The quadratic terms in the endogenous state vector $x_1$ capture non-linear behavior.

Higher-Order Approximations. We can iterate this procedure as many times as we want to obtain $n$-th order approximations. Problems: (1) existence of the higher-order derivatives (Santos, 1992); (2) numerical instabilities; (3) computational costs.

Change of Variables. "It is not the process of linearization that limits insight. It is the nature of the state that we choose to linearize about." — Erik Eady

Change of Variables. We approximated our solution in levels. We could have done it in logs. Why stop there? Why not in powers of the state variables? Judd (2002) provides methods for changes of variables. We apply and extend these ideas to the stochastic neoclassical growth model.

A General Transformation. We look at solutions of the form:
$$c^\mu - c_0^\mu = a\left(k^\zeta - k_0^\zeta\right) + b z$$
$$k'^\gamma - k_0^\gamma = c\left(k^\zeta - k_0^\zeta\right) + d z$$
Note that: (1) if $\gamma$, $\zeta$, and $\mu$ are 1, we get the linear representation; (2) as $\gamma$, $\zeta$, and $\mu$ tend to zero, we get the loglinear approximation.

Theory. The first-order solution can be written as $f(x) \simeq f(a) + (x - a) f'(a)$. Expand $g(y) = h(f(X(y)))$ around $b = Y(a)$, where $X(y)$ is the inverse of $Y(x)$. Then:
$$g(y) = h(f(X(y))) = g(b) + g_\alpha(b)\left( Y^\alpha(x) - b^\alpha \right)$$
where $g_\alpha = h_A f^A_i X^i_\alpha$ comes from the application of the chain rule. From this expression it is easy to see that if we have computed the values of $f^A_i$, then it is straightforward to find $g_\alpha$.

Coefficients Relation. Remember that the linear solution is:
$$(k' - k_0) = a_1 (k - k_0) + b_1 z, \qquad (l - l_0) = c_1 (k - k_0) + d_1 z$$
Then we show that:
$$a_3 = \frac{\gamma}{\zeta} k_0^{\gamma-\zeta} a_1, \quad b_3 = \gamma k_0^{\gamma-1} b_1, \quad c_3 = \frac{\mu}{\zeta} l_0^{\mu-1} k_0^{1-\zeta} c_1, \quad d_3 = \mu l_0^{\mu-1} d_1$$
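
A direct transcription of these mappings (a small sketch; the numerical inputs are placeholders, not calibrated values):

```python
def transform_coefficients(a1, b1, c1, d1, k0, l0, gamma, zeta, mu):
    """Map the linear coefficients (a1, b1, c1, d1) into the coefficients
    of the power-transformed solution, following the formulas above."""
    a3 = (gamma / zeta) * k0 ** (gamma - zeta) * a1
    b3 = gamma * k0 ** (gamma - 1) * b1
    c3 = (mu / zeta) * l0 ** (mu - 1) * k0 ** (1 - zeta) * c1
    d3 = mu * l0 ** (mu - 1) * d1
    return a3, b3, c3, d3

# gamma = zeta = mu = 1 reproduces the original linear coefficients
print(transform_coefficients(0.95, 0.1, 0.3, 0.05, k0=10.0, l0=0.3,
                             gamma=1.0, zeta=1.0, mu=1.0))
```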

Finding the Parameters. Minimize the Euler error over a grid. Some optimal results (Euler equation errors):
γ = 1, ζ = 1, µ = 1: SEE = 0.0856279
γ = 0.986534, ζ = 0.991673, µ = 2.47856: SEE = 0.0279944

Sensitivity Analysis. Different parameter values. The most interesting finding is when we change σ. Optimal parameters for different σ's:
σ = 0.014: γ = 0.98140, ζ = 0.98766, µ = 2.47753
σ = 0.028: γ = 1.04804, ζ = 1.05265, µ = 1.73209
σ = 0.056: γ = 1.23753, ζ = 1.22394, µ = 0.77869
A first-order approximation corrects for changes in variance!

Change of Variables. [Figure omitted in the transcription.]

A Quasi-Optimal Approximation. Sensitivity analysis reveals that for different parametrizations $\gamma \simeq \zeta$. This suggests the quasi-optimal approximation:
$$k'^\gamma - k_0^\gamma = a_3\left( k^\gamma - k_0^\gamma \right) + b_3 z, \qquad l^\mu - l_0^\mu = c_3\left( k^\gamma - k_0^\gamma \right) + d_3 z$$
If we define $\hat{k} = k^\gamma - k_0^\gamma$ and $\hat{l} = l^\mu - l_0^\mu$, we get the linear system:
$$\hat{k}' = a_3 \hat{k} + b_3 z, \qquad \hat{l} = c_3 \hat{k} + d_3 z$$
Uses: (1) analytical study; (2) estimation with a Kalman filter.

Perturbing the Value Function. We worked with the equilibrium conditions of the model. Sometimes we may want to perform a perturbation on the value function formulation of the problem. Possible reasons: (1) gain insight; (2) difficulty in using the equilibrium conditions; (3) evaluate welfare; (4) obtain an initial guess for value function iteration (VFI).

Basic Problem. Imagine that we have:
$$V(k_t, z_t) = \max_{c_t} \left[ (1-\beta)\frac{c_t^{1-\gamma}}{1-\gamma} + \beta E_t V(k_{t+1}, z_{t+1}) \right]$$
s.t. $c_t + k_{t+1} = e^{z_t} k_t^\theta + (1-\delta) k_t$ and $z_t = \lambda z_{t-1} + \sigma \varepsilon_t$, $\varepsilon_t \sim N(0,1)$.
Write it as:
$$V(k_t, z_t; \chi) = \max_{c_t} \left[ (1-\beta)\frac{c_t^{1-\gamma}}{1-\gamma} + \beta E_t V(k_{t+1}, z_{t+1}; \chi) \right]$$
s.t. $c_t + k_{t+1} = e^{z_t} k_t^\theta + (1-\delta) k_t$ and $z_t = \lambda z_{t-1} + \chi\sigma \varepsilon_t$, $\varepsilon_t \sim N(0,1)$.

Alternative. Another way to write the value function is:
$$V(k_t, z_t; \chi) = \max_{c_t} \left[ (1-\beta)\frac{c_t^{1-\gamma}}{1-\gamma} + \beta E_t V\left( e^{z_t} k_t^\theta + (1-\delta) k_t - c_t,\ \lambda z_t + \chi\sigma\varepsilon_{t+1};\ \chi \right) \right]$$
This form makes the dependence on the next-period states explicit. The solution of this problem is a value function $V(k_t, z_t; \chi)$ and a policy function for consumption $c(k_t, z_t; \chi)$.

Expanding the Value Function. The second-order Taylor approximation of the value function around the deterministic steady state $(k_{ss}, 0; 0)$ is:
$$V(k_t, z_t; \chi) \simeq V_{ss} + V_{1,ss}(k_t - k_{ss}) + V_{2,ss} z_t + V_{3,ss}\chi$$
$$+ \tfrac{1}{2} V_{11,ss}(k_t - k_{ss})^2 + \tfrac{1}{2} V_{12,ss}(k_t - k_{ss}) z_t + \tfrac{1}{2} V_{13,ss}(k_t - k_{ss})\chi$$
$$+ \tfrac{1}{2} V_{21,ss} z_t (k_t - k_{ss}) + \tfrac{1}{2} V_{22,ss} z_t^2 + \tfrac{1}{2} V_{23,ss} z_t \chi$$
$$+ \tfrac{1}{2} V_{31,ss}\chi(k_t - k_{ss}) + \tfrac{1}{2} V_{32,ss}\chi z_t + \tfrac{1}{2} V_{33,ss}\chi^2$$
where $V_{ss} = V(k_{ss}, 0; 0)$, $V_{i,ss} = V_i(k_{ss}, 0; 0)$ for $i = \{1,2,3\}$, and $V_{ij,ss} = V_{ij}(k_{ss}, 0; 0)$ for $i, j = \{1,2,3\}$.

Expanding the Value Function. By certainty equivalence, we will show below that $V_{3,ss} = V_{13,ss} = V_{23,ss} = 0$. Taking advantage of the equality of cross-derivatives, and setting $\chi = 1$ (which is just a normalization):
$$V(k_t, z_t; 1) \simeq V_{ss} + V_{1,ss}(k_t - k_{ss}) + V_{2,ss} z_t + \tfrac{1}{2} V_{11,ss}(k_t - k_{ss})^2 + \tfrac{1}{2} V_{22,ss} z_t^2 + V_{12,ss}(k_t - k_{ss}) z_t + \tfrac{1}{2} V_{33,ss}$$
Note that $V_{33,ss} \neq 0$, a difference from the standard linear-quadratic approximation to the utility function.

Expanding the Consumption Function. The policy function for consumption can be expanded as:
$$c_t = c(k_t, z_t; \chi) \simeq c_{ss} + c_{1,ss}(k_t - k_{ss}) + c_{2,ss} z_t + c_{3,ss}\chi$$
where $c_{1,ss} = c_1(k_{ss}, 0; 0)$, $c_{2,ss} = c_2(k_{ss}, 0; 0)$, and $c_{3,ss} = c_3(k_{ss}, 0; 0)$. Since the first derivatives of the consumption function only depend on the first and second derivatives of the value function, we must have $c_{3,ss} = 0$ (precautionary consumption depends on the third derivative of the value function; Kimball, 1990).

Linear Components of the Value Function. To find the linear approximation to the value function, we take derivatives of the value function with respect to the control ($c_t$), the states ($k_t$, $z_t$), and the perturbation parameter $\chi$. Notation: (1) $V_{i,t}$: derivative of the value function with respect to its $i$-th argument, evaluated at $(k_t, z_t; \chi)$; (2) $V_{i,ss}$: derivative evaluated at the steady state, $(k_{ss}, 0; 0)$; (3) we follow the same notation for higher-order (cross-) derivatives.

Derivatives. Derivative with respect to $c_t$:
$$(1-\beta) c_t^{-\gamma} - \beta E_t V_{1,t+1} = 0$$
Derivative with respect to $k_t$:
$$V_{1,t} = \beta E_t \left[ V_{1,t+1}\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) \right]$$
Derivative with respect to $z_t$:
$$V_{2,t} = \beta E_t \left[ V_{1,t+1} e^{z_t} k_t^\theta + V_{2,t+1}\lambda \right]$$
Derivative with respect to $\chi$:
$$V_{3,t} = \beta E_t \left[ V_{2,t+1}\sigma\varepsilon_{t+1} + V_{3,t+1} \right]$$
In the last three derivatives, we apply the envelope theorem to eliminate the derivatives of consumption with respect to $k_t$, $z_t$, and $\chi$.

System of Equations I. Now we have the system:
$$c_t + k_{t+1} = e^{z_t} k_t^\theta + (1-\delta) k_t$$
$$V(k_t, z_t; \chi) = (1-\beta)\frac{c_t^{1-\gamma}}{1-\gamma} + \beta E_t V(k_{t+1}, z_{t+1}; \chi)$$
$$(1-\beta) c_t^{-\gamma} - \beta E_t V_{1,t+1} = 0$$
$$V_{1,t} = \beta E_t \left[ V_{1,t+1}\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) \right]$$
$$V_{2,t} = \beta E_t \left[ V_{1,t+1} e^{z_t} k_t^\theta + V_{2,t+1}\lambda \right]$$
$$V_{3,t} = \beta E_t \left[ V_{2,t+1}\sigma\varepsilon_{t+1} + V_{3,t+1} \right]$$
$$z_t = \lambda z_{t-1} + \chi\sigma\varepsilon_t$$

System of Equations II. If we set $\chi = 0$ and compute the steady state, we get a system of six equations in six unknowns, $c_{ss}$, $k_{ss}$, $V_{ss}$, $V_{1,ss}$, $V_{2,ss}$, and $V_{3,ss}$:
$$c_{ss} + \delta k_{ss} = k_{ss}^\theta$$
$$V_{ss} = (1-\beta)\frac{c_{ss}^{1-\gamma}}{1-\gamma} + \beta V_{ss}$$
$$(1-\beta) c_{ss}^{-\gamma} - \beta V_{1,ss} = 0$$
$$V_{1,ss} = \beta V_{1,ss}\left( \theta k_{ss}^{\theta-1} + 1 - \delta \right)$$
$$V_{2,ss} = \beta\left[ V_{1,ss} k_{ss}^\theta + V_{2,ss}\lambda \right]$$
$$V_{3,ss} = \beta V_{3,ss}$$
From the last equation: $V_{3,ss} = 0$. From the second equation: $V_{ss} = \frac{c_{ss}^{1-\gamma}}{1-\gamma}$. From the third equation: $V_{1,ss} = \frac{1-\beta}{\beta} c_{ss}^{-\gamma}$.

System of Equations III. After cancelling redundant terms:
$$c_{ss} + \delta k_{ss} = k_{ss}^\theta$$
$$1 = \beta\left( \theta k_{ss}^{\theta-1} + 1 - \delta \right)$$
$$V_{2,ss} = \beta\left[ V_{1,ss} k_{ss}^\theta + V_{2,ss}\lambda \right]$$
Then:
$$k_{ss} = \left[ \frac{1}{\theta}\left( \frac{1}{\beta} - 1 + \delta \right) \right]^{\frac{1}{\theta-1}}, \qquad c_{ss} = k_{ss}^\theta - \delta k_{ss}, \qquad V_{2,ss} = \frac{1-\beta}{1-\beta\lambda} k_{ss}^\theta c_{ss}^{-\gamma}$$
$V_{1,ss} > 0$ and $V_{2,ss} > 0$, as predicted by theory.
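
A minimal numerical sketch of these steady-state formulas, using the parameter values from the numerical example at the end of the slides:

```python
beta, gamma, delta, theta, lam = 0.99, 2.0, 0.0294, 0.3, 0.95

k_ss = ((1 / beta - 1 + delta) / theta) ** (1 / (theta - 1))
c_ss = k_ss ** theta - delta * k_ss
V_ss = c_ss ** (1 - gamma) / (1 - gamma)
V1_ss = (1 - beta) / beta * c_ss ** (-gamma)
V2_ss = (1 - beta) / (1 - beta * lam) * k_ss ** theta * c_ss ** (-gamma)

# roughly 1.852, -0.540, 0.00295, 0.11684 -- matching the final slide
print(c_ss, V_ss, V1_ss, V2_ss)
```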

Quadratic Components of the Value Function. From the previous derivations, we have:
$$(1-\beta) c(k_t, z_t; \chi)^{-\gamma} - \beta E_t V_{1,t+1} = 0$$
$$V_{1,t} = \beta E_t \left[ V_{1,t+1}\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) \right]$$
$$V_{2,t} = \beta E_t \left[ V_{1,t+1} e^{z_t} k_t^\theta + V_{2,t+1}\lambda \right]$$
$$V_{3,t} = \beta E_t \left[ V_{2,t+1}\sigma\varepsilon_{t+1} + V_{3,t+1} \right]$$
where
$$k_{t+1} = e^{z_t} k_t^\theta + (1-\delta) k_t - c(k_t, z_t; \chi), \qquad z_t = \lambda z_{t-1} + \chi\sigma\varepsilon_t, \quad \varepsilon_t \sim N(0,1)$$
We take derivatives of each of the four equations with respect to $k_t$, $z_t$, and $\chi$, and take advantage of the equality of cross derivatives. The envelope theorem does not hold anymore (we are taking derivatives of the derivatives of the value function).

First Equation I. We have:
$$(1-\beta) c(k_t, z_t; \chi)^{-\gamma} - \beta E_t V_{1,t+1} = 0$$
Derivative with respect to $k_t$:
$$-(1-\beta)\gamma c(k_t, z_t; \chi)^{-\gamma-1} c_{1,t} - \beta E_t \left[ V_{11,t+1}\left( e^{z_t}\theta k_t^{\theta-1} + 1 - \delta - c_{1,t} \right) \right] = 0$$
In steady state:
$$\left( \beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1} \right) c_{1,ss} = \beta V_{11,ss}\left( \theta k_{ss}^{\theta-1} + 1 - \delta \right)$$
or
$$c_{1,ss} = \frac{V_{11,ss}}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}}$$
where we have used that $1 = \beta\left( \theta k_{ss}^{\theta-1} + 1 - \delta \right)$.

First Equation II. Derivative with respect to $z_t$:
$$-(1-\beta)\gamma c(k_t, z_t; \chi)^{-\gamma-1} c_{2,t} - \beta E_t \left[ V_{11,t+1}\left( e^{z_t} k_t^\theta - c_{2,t} \right) + V_{12,t+1}\lambda \right] = 0$$
In steady state:
$$\left( \beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1} \right) c_{2,ss} = \beta\left( V_{11,ss} k_{ss}^\theta + V_{12,ss}\lambda \right)$$
or
$$c_{2,ss} = \frac{\beta}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}}\left( V_{11,ss} k_{ss}^\theta + V_{12,ss}\lambda \right)$$

First Equation III. Derivative with respect to $\chi$:
$$-(1-\beta)\gamma c(k_t, z_t; \chi)^{-\gamma-1} c_{3,t} - \beta E_t \left( -V_{11,t+1} c_{3,t} + V_{12,t+1}\sigma\varepsilon_{t+1} + V_{13,t+1} \right) = 0$$
In steady state:
$$\left( \beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1} \right) c_{3,ss} = \beta V_{13,ss}$$
or
$$c_{3,ss} = \frac{\beta}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}} V_{13,ss}$$

Second Equation I. We have:
$$V_{1,t} = \beta E_t\left[ V_{1,t+1}\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) \right]$$
Derivative with respect to $k_t$:
$$V_{11,t} = \beta E_t\left[ V_{11,t+1}\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta - c_{1,t} \right)\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) + V_{1,t+1}\theta(\theta-1) e^{z_t} k_t^{\theta-2} \right]$$
In steady state:
$$V_{11,ss} = V_{11,ss}\left( \frac{1}{\beta} - c_{1,ss} \right) + \beta V_{1,ss}\theta(\theta-1) k_{ss}^{\theta-2}$$
or
$$V_{11,ss} = \frac{\beta}{1 - \frac{1}{\beta} + c_{1,ss}} V_{1,ss}\theta(\theta-1) k_{ss}^{\theta-2}$$

Second Equation II. Derivative with respect to $z_t$:
$$V_{12,t} = \beta E_t\left[ \left( V_{11,t+1}\left( e^{z_t} k_t^\theta - c_{2,t} \right) + V_{12,t+1}\lambda \right)\left( \theta e^{z_t} k_t^{\theta-1} + 1 - \delta \right) + V_{1,t+1}\theta e^{z_t} k_t^{\theta-1} \right]$$
In steady state:
$$V_{12,ss} = V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) + V_{12,ss}\lambda + \beta V_{1,ss}\theta k_{ss}^{\theta-1}$$
or
$$V_{12,ss} = \frac{1}{1-\lambda}\left[ V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) + \beta V_{1,ss}\theta k_{ss}^{\theta-1} \right]$$

Second Equation III. Derivative with respect to $\chi$:
$$V_{13,t} = \beta E_t\left[ -V_{11,t+1} c_{3,t} + V_{12,t+1}\sigma\varepsilon_{t+1} + V_{13,t+1} \right]$$
In steady state:
$$V_{13,ss} = \beta\left[ -V_{11,ss} c_{3,ss} + V_{13,ss} \right] \quad \Rightarrow \quad V_{13,ss} = \frac{\beta}{\beta - 1} V_{11,ss} c_{3,ss}$$
but since we know that
$$c_{3,ss} = \frac{\beta}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}} V_{13,ss}$$
the two equations can only hold simultaneously if $V_{13,ss} = c_{3,ss} = 0$.

Third Equation I. We have:
$$V_{2,t} = \beta E_t\left[ V_{1,t+1} e^{z_t} k_t^\theta + V_{2,t+1}\lambda \right]$$
Derivative with respect to $z_t$:
$$V_{22,t} = \beta E_t\left[ \left( V_{11,t+1}\left( e^{z_t} k_t^\theta - c_{2,t} \right) + V_{12,t+1}\lambda \right) e^{z_t} k_t^\theta + V_{1,t+1} e^{z_t} k_t^\theta + V_{21,t+1}\lambda\left( e^{z_t} k_t^\theta - c_{2,t} \right) + V_{22,t+1}\lambda^2 \right]$$
In steady state:
$$V_{22,ss} = \beta\left[ V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) k_{ss}^\theta + V_{12,ss}\lambda k_{ss}^\theta + V_{1,ss} k_{ss}^\theta + V_{21,ss}\lambda\left( k_{ss}^\theta - c_{2,ss} \right) + V_{22,ss}\lambda^2 \right]$$
so that
$$V_{22,ss} = \frac{\beta}{1-\beta\lambda^2}\left[ V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) k_{ss}^\theta + 2 V_{12,ss}\lambda k_{ss}^\theta + V_{1,ss} k_{ss}^\theta - V_{12,ss}\lambda c_{2,ss} \right]$$
where we have used $V_{12,ss} = V_{21,ss}$.

Third Equation II. Derivative with respect to $\chi$:
$$V_{23,t} = \beta E_t\left[ -V_{11,t+1} e^{z_t} k_t^\theta c_{3,t} + V_{12,t+1} e^{z_t} k_t^\theta \sigma\varepsilon_{t+1} + V_{13,t+1} e^{z_t} k_t^\theta - V_{21,t+1}\lambda c_{3,t} + V_{22,t+1}\lambda\sigma\varepsilon_{t+1} + V_{23,t+1}\lambda \right]$$
In steady state: $V_{23,ss} = 0$.

Fourth Equation. We have $V_{3,t} = \beta E_t\left[ V_{2,t+1}\sigma\varepsilon_{t+1} + V_{3,t+1} \right]$. Derivative with respect to $\chi$:
$$V_{33,t} = \beta E_t\left[ -V_{21,t+1} c_{3,t}\sigma\varepsilon_{t+1} + V_{22,t+1}\sigma^2\varepsilon_{t+1}^2 + V_{23,t+1}\sigma\varepsilon_{t+1} - V_{31,t+1} c_{3,t} + V_{32,t+1}\sigma\varepsilon_{t+1} + V_{33,t+1} \right]$$
In steady state:
$$V_{33,ss} = \frac{\beta}{1-\beta}\sigma^2 V_{22,ss}$$

System I.
$$c_{1,ss} = \frac{V_{11,ss}}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}}$$
$$c_{2,ss} = \frac{\beta}{\beta V_{11,ss} - (1-\beta)\gamma c_{ss}^{-\gamma-1}}\left( V_{11,ss} k_{ss}^\theta + V_{12,ss}\lambda \right)$$
$$V_{11,ss} = \frac{\beta}{1 - \frac{1}{\beta} + c_{1,ss}} V_{1,ss}\theta(\theta-1) k_{ss}^{\theta-2}$$
$$V_{12,ss} = \frac{1}{1-\lambda}\left[ V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) + \beta V_{1,ss}\theta k_{ss}^{\theta-1} \right]$$
$$V_{22,ss} = \frac{\beta}{1-\beta\lambda^2}\left[ V_{11,ss}\left( k_{ss}^\theta - c_{2,ss} \right) k_{ss}^\theta + 2 V_{12,ss}\lambda k_{ss}^\theta + V_{1,ss} k_{ss}^\theta - V_{12,ss}\lambda c_{2,ss} \right]$$
$$V_{33,ss} = \frac{\beta}{1-\beta}\sigma^2 V_{22,ss}$$
plus $c_{3,ss} = V_{13,ss} = V_{23,ss} = 0$.

System II. This is a system of nonlinear equations; however, it has a recursive structure. By substituting the variables that we already know, we can find $V_{11,ss}$. Then, using this result and plugging in $c_{2,ss}$, we have a system of two equations in two unknowns, $V_{12,ss}$ and $V_{22,ss}$. Once that system is solved, we can find $c_{1,ss}$, $c_{2,ss}$, and $V_{33,ss}$ directly.
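
A sketch of that recursive solution with scipy (our own scaffolding, not the slides' code); with the parameter values of the final numerical example it roughly reproduces the reported second-order coefficients:

```python
import numpy as np
from scipy.optimize import fsolve

beta, gamma, delta, theta, lam = 0.99, 2.0, 0.0294, 0.3, 0.95
k_ss = ((1 / beta - 1 + delta) / theta) ** (1 / (theta - 1))
c_ss = k_ss ** theta - delta * k_ss
V1_ss = (1 - beta) / beta * c_ss ** (-gamma)

D = lambda V11: beta * V11 - (1 - beta) * gamma * c_ss ** (-gamma - 1)

def block1(u):
    # the c_1 and V_11 equations; the initial guess selects the concave root
    c1, V11 = u
    return [c1 * D(V11) - V11,
            V11 * (1 - 1 / beta + c1)
            - beta * V1_ss * theta * (theta - 1) * k_ss ** (theta - 2)]

c1_ss, V11_ss = fsolve(block1, [0.05, -1e-4])

# c_2 and V_12 then solve a 2x2 linear system
A = np.array([[D(V11_ss), -beta * lam],
              [V11_ss, 1 - lam]])
b = np.array([beta * V11_ss * k_ss ** theta,
              V11_ss * k_ss ** theta + beta * V1_ss * theta * k_ss ** (theta - 1)])
c2_ss, V12_ss = np.linalg.solve(A, b)

V22_ss = beta / (1 - beta * lam ** 2) * (
    V11_ss * (k_ss ** theta - c2_ss) * k_ss ** theta
    + 2 * V12_ss * lam * k_ss ** theta + V1_ss * k_ss ** theta - V12_ss * lam * c2_ss)

print(c1_ss, c2_ss)                         # ~0.0422 and ~0.743
print(0.5 * V11_ss, V12_ss, 0.5 * V22_ss)   # ~-0.00007, ~-0.00225, ~-0.00985
print(0.5 * beta * V22_ss / (1 - beta))     # ~-0.975: the coefficient on sigma^2
```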

The Welfare Cost of the Business Cycle. An advantage of performing the perturbation on the value function is that an evaluation of welfare is readily available. Note that at the deterministic steady state we have:
$$V(k_{ss}, 0; \chi) \simeq V_{ss} + \tfrac{1}{2} V_{33,ss}$$
Hence $\tfrac{1}{2} V_{33,ss}$ is a measure of the welfare cost of the business cycle. This quantity is not necessarily negative: it may be positive, for example, in an RBC model with a leisure choice (Cho and Cooley, 2000).

Our Example. We know that $V_{ss} = \frac{c_{ss}^{1-\gamma}}{1-\gamma}$. We can compute the decrease in consumption $\tau$ that would make the household indifferent between consuming $(1-\tau)c_{ss}$ units per period with certainty or $c_t$ units with uncertainty. Thus:
$$\frac{c_{ss}^{1-\gamma}}{1-\gamma} + \frac{1}{2} V_{33,ss} = \frac{\left( c_{ss}(1-\tau) \right)^{1-\gamma}}{1-\gamma}$$
$$\left( (1-\tau)^{1-\gamma} - 1 \right) c_{ss}^{1-\gamma} = (1-\gamma)\frac{1}{2} V_{33,ss}$$
or
$$\tau = 1 - \left[ 1 + \frac{1-\gamma}{c_{ss}^{1-\gamma}}\frac{1}{2} V_{33,ss} \right]^{\frac{1}{1-\gamma}}$$
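
A small helper implementing this formula (the value of $\sigma$ below is illustrative and ours, since the slides do not list it):

```python
def welfare_cost(half_V33, c_ss, gamma):
    """Consumption-equivalent welfare cost tau from the formula above."""
    return 1 - (1 + (1 - gamma) / c_ss ** (1 - gamma) * half_V33) ** (1 / (1 - gamma))

c_ss, gamma = 1.85193, 2.0
sigma = 0.007                       # illustrative value, not taken from the slides
half_V33 = -0.97508 * sigma ** 2    # the sigma^2 term of the value function below
# with this sigma, tau is on the order of the 8.8475e-05 reported on the final slide
print(welfare_cost(half_V33, c_ss, gamma))
```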

A Numerical Example. We pick standard parameter values: $\beta = 0.99$, $\gamma = 2$, $\delta = 0.0294$, $\theta = 0.3$, and $\lambda = 0.95$. We get:
$$V(k_t, z_t; 1) \simeq -0.54000 + 0.00295(k_t - k_{ss}) + 0.11684 z_t - 0.00007(k_t - k_{ss})^2 - 0.00985 z_t^2 - 0.97508\sigma^2 - 0.00225(k_t - k_{ss}) z_t$$
$$c(k_t, z_t; \chi) \simeq 1.85193 + 0.04220(k_t - k_{ss}) + 0.74318 z_t$$
Dynare produces the same policy function by linearizing the equilibrium conditions of the problem. The welfare cost of the business cycle (in consumption terms) is 8.8475e-05, lower than in Lucas (1987) because of the smoothing possibilities allowed by capital. The approximation can also be used as an initial guess for VFI.