Second-order approximation of dynamic models without the use of tensors

Paul Klein, University of Western Ontario

First draft: May 17, 2005. This version: January 24, 2006.

Abstract

Several approaches to finding the second-order approximation to a dynamic model have been proposed recently. This paper differs from the existing literature in that it makes use of the Magnus and Neudecker (1999) definition of the Hessian matrix. The key result is a linear system of equations that characterizes the second-order coefficients. No use is made of multi-dimensional arrays or tensors.

Keywords: Solving dynamic models; second-order approximation

JEL classification: E0; C63

1 Introduction

Several approaches to finding the second-order approximation to a dynamic model have been proposed recently. Examples include Schmitt-Grohé and Uribe (2004) and Kim, Kim, Schaumburg, and Sims (2005). This paper differs from the existing literature, including Lombardo and Sutherland (2005), which also avoids the use of tensors, in that it makes use of the Magnus and Neudecker (1999) definition of the Hessian matrix. The key result is a linear system of equations that characterizes the second-order coefficients. No use is made of multi-dimensional arrays or tensors. Matlab code is available from my website.

I thank Audra Bowlus, Elizabeth Caucutt, Martin Gervais, Paul Gomme, Lance Lochner and Igor Livshits.

2 The model

As pointed out in Schmitt-Grohé and Uribe (2004), the coefficients of a second-order approximation to the solution of a dynamic model around its non-stochastic steady state are invariant with respect to the scale of the shocks, with the notable exception of the additive constants in the decision rules. In what follows I will take this result for granted.

The equilibrium conditions of the model are

    E_t f(x_{t+1}, y_{t+1}, x_t, y_t) = 0    (1)

where f maps R^{2n_x + 2n_y} into R^{n_x + n_y}. The solution is given by two functions g and h defined via

    y_t = g(x_t, σ)    (2)

and

    x_{t+1} = h(x_t, σ) + σ ε_{t+1}    (3)

where ε_t is exogenous white noise with variance matrix Σ and σ is a scaling variable. The approximation is computed around the non-stochastic steady state, where σ = 0. Without loss of generality, we will assume that f(0, 0, 0, 0) = 0.

3 Second-order Taylor expansions

As stated in Magnus and Neudecker (1999), the second-order Taylor expansion of a twice differentiable function f: R^n → R^m is given by

    f(x) ≈ f(x_0) + Df(x_0)(x − x_0) + (1/2)(I_m ⊗ (x − x_0)') Hf(x_0)(x − x_0)    (4)

where we define

    Df(x) = ∂f(x)/∂x' =
        [ ∂f_1(x)/∂x_1   ∂f_1(x)/∂x_2   ⋯   ∂f_1(x)/∂x_n ]
        [      ⋮                                   ⋮      ]
        [ ∂f_m(x)/∂x_1   ∂f_m(x)/∂x_2   ⋯   ∂f_m(x)/∂x_n ]

and

    Hf(x) = ∂²f(x)/(∂x ∂x') = D vec((Df(x))').

Notice that

    Df(x) = [ Df_1(x) ]
            [ Df_2(x) ]
            [    ⋮    ]
            [ Df_m(x) ]

and that

    Hf(x) = [ Hf_1(x) ]
            [ Hf_2(x) ]
            [    ⋮    ]
            [ Hf_m(x) ]

Thus the Hessian Hf(x) is of dimension mn × n and consists of m vertically concatenated symmetric n × n matrices.
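To make this layout concrete, here is a minimal numerical sketch (my own toy example, not taken from the paper) that builds the Magnus and Neudecker Jacobian and Hessian of a small function by central differences and confirms that Hf consists of m vertically stacked symmetric n × n blocks, one per component of f.

    % Toy function f: R^2 -> R^2, so m = n = 2.
    f = @(x) [x(1)*x(2); x(1)^2];
    m = 2; n = 2; h = 1e-4; x0 = [0.3; 0.7]; I = eye(n);
    Df = zeros(m, n);                  % Jacobian, m x n
    Hf = zeros(m*n, n);                % Hessian, mn x n (m stacked blocks)
    for j = 1:n
        Df(:, j) = (f(x0 + h*I(:,j)) - f(x0 - h*I(:,j))) / (2*h);
    end
    for i = 1:n
        for j = 1:n
            d2 = (f(x0 + h*I(:,i) + h*I(:,j)) - f(x0 + h*I(:,i) - h*I(:,j)) ...
                - f(x0 - h*I(:,i) + h*I(:,j)) + f(x0 - h*I(:,i) - h*I(:,j))) / (4*h^2);
            for k = 1:m                % entry (i,j) of the Hessian of f_k
                Hf((k-1)*n + i, j) = d2(k);
            end
        end
    end
    % Up to discretization error, block 1 of Hf is [0 1; 1 0] and
    % block 2 is [2 0; 0 0]; both blocks are symmetric, as claimed.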

An important property of the quadratic term in the Taylor expansion is the following. If f(x) = (1/2)(I_m ⊗ x')Ax and g(x) = (1/2)(I_m ⊗ x')Bx, then f ≡ g if and only if

    (1/2)[A + (A')^ν] = (1/2)[B + (B')^ν]    (5)

where we define the operator (·)^ν via the following recipe, taken from Magnus and Neudecker (1999).

Definition. Let A have the following structure:

    A = [ A_1   A_2   ⋯   A_m ]

where A_i is an n × n matrix for each i = 1, 2, …, m. Then

    A^ν = [ A_1 ]
          [ A_2 ]
          [  ⋮  ]
          [ A_m ]

It follows that if

    B = [ B_1 ]
        [ B_2 ]
        [  ⋮  ]
        [ B_m ]

then

    (B')^ν = [ B_1' ]
             [ B_2' ]
             [  ⋮   ]
             [ B_m' ]

In what follows, therefore, we will regard as equivalent two matrices of second derivatives A and B if (5) holds. Strictly speaking, a Hessian matrix H is column-symmetric, i.e. H = (H')^ν, but we will regard any matrix G as a perfectly good Hessian if H is the Hessian and H = (1/2)[G + (G')^ν].

4 Representation of the second-order approximation of the solution

    y_t ≈ ĝ(x_t) = k_y + F x_t + (I_{n_y} ⊗ x_t') E x_t    (6)

and

    x_{t+1} ≈ ĥ(x_t) = k_x + P x_t + (I_{n_x} ⊗ x_t') G x_t    (7)

Evidently F is n_y × n_x, E is n_x n_y × n_x, P is n_x × n_x and G is n_x² × n_x.
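The following sketch (my own variable names, not the paper's code) evaluates the decision rule (6) for given coefficients and checks the equivalence property (5) numerically: the quadratic form (I_{n_y} ⊗ x')Ex is unchanged when E is replaced by its column-symmetric equivalent (1/2)[E + (E')^ν].

    nx = 3; ny = 2;
    x  = [0.1; -0.4; 0.2];
    ky = randn(ny, 1); F = randn(ny, nx); E = randn(nx*ny, nx);
    Enu = zeros(nx*ny, nx);            % (E')^nu: transpose each nx x nx block of E
    for i = 1:ny
        rows = (i-1)*nx+1 : i*nx;
        Enu(rows, :) = E(rows, :)';
    end
    Esym = (E + Enu) / 2;              % column-symmetric, equivalent to E by (5)
    y1 = ky + F*x + kron(eye(ny), x') * E    * x;   % decision rule (6)
    y2 = ky + F*x + kron(eye(ny), x') * Esym * x;
    disp(norm(y1 - y2))                % zero up to rounding error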

5 Finding the first-order approximation

Klein (2000), King and Watson (2002) and others show how to find F and P in terms of the first derivative Df(0, 0, 0, 0). Following Klein (2000) one may proceed as follows, keeping in mind that we are after a non-explosive solution only. Suppose the linear approximation of the equilibrium conditions can be written as

    A [ k_{t+1}     ]   =   B [ k_t ] + [ ξ_{t+1} ]    (8)
      [ E_t λ_{t+1} ]         [ λ_t ]   [    0    ]

where k_0 ∈ R^{n_k} is a given deterministic vector, (ξ_t) is white noise and the conditional expectation is taken with respect to the natural filtration of (ξ_t). The matrices A and B are both n × n.

The key theorem required here is stated as Theorem 7.7.1 in Golub and van Loan (1996). It says that if there is a z ∈ C such that det(B − zA) ≠ 0, then there exist matrices Q, Z, T and S such that

1. Q and Z are unitary, i.e. Q^H Q = QQ^H = I_n and similarly for Z, where ^H denotes the Hermitian transpose (transpose followed by complex conjugation or vice versa).

2. T and S are upper triangular (all entries below the main diagonal are zero).

3. QA = SZ^H and QB = TZ^H.

4. There is no i such that s_ii = t_ii = 0. (Here we denote the row i, column j element of any matrix M by m_ij.)

Moreover, the matrices Q, Z, S and T can be chosen in such a way as to make the diagonal entries s_ii and t_ii appear in any desired order. We will choose the following ordering: the pairs (s_ii, t_ii) satisfying |s_ii| > |t_ii| appear first. We will call these pairs the stable generalized eigenvalues.

We now introduce an auxiliary sequence (y_t) that will help us in finding the solution. Define x_t by stacking k_t on top of λ_t,

    x_t = [ k_t ]
          [ λ_t ]

and define y_t via y_t = Z^H x_t. Partition y_t in the same way as x_t, introducing the following notation:

    y_t = [ s_t ]
          [ u_t ]

Now premultiply (8) by Q. This yields an equivalent system, since Q is non-singular. The result is

    S y_{t+1} = T y_t.

This is a triangular system. More explicitly, we have

    [ S_11  S_12 ] [ s_{t+1} ]   [ T_11  T_12 ] [ s_t ]
    [  0    S_22 ] [ u_{t+1} ] = [  0    T_22 ] [ u_t ]

If there are no more stable generalized eigenvalues than there are state variables, then the second block of these equations implies that any solution that does not blow up (so that the mean is unbounded unless k_0 = 0) or have a unit root (so that the variance is unbounded unless ξ_t = 0 for all t) satisfies u_t = 0 for t = 0, 1, …. But then the first block says that

    S_11 s_{t+1} = T_11 s_t.

If there are no fewer stable generalized eigenvalues than there are state variables, then S_11 is invertible and the generalized eigenvalues of (S_11, T_11) are stable. Hence

    s_{t+1} = S_11^{-1} T_11 s_t.    (9)

We have now reached the final step, which is to move back to x_t from y_t. By definition, we have

    [ k_t ]   [ Z_11  Z_12 ] [ s_t ]
    [ λ_t ] = [ Z_21  Z_22 ] [ u_t ]

Evidently λ_t = Z_21 s_t. Moreover, if Z_11 is invertible, then s_t = Z_11^{-1} k_t and consequently λ_t = Z_21 Z_11^{-1} k_t. We also have

    k_{t+1} = Z_11 s_{t+1} = Z_11 S_11^{-1} T_11 s_t = Z_11 S_11^{-1} T_11 Z_11^{-1} k_t.

Notice that Z_11 S_11^{-1} T_11 Z_11^{-1} is similar to S_11^{-1} T_11, so that the two matrices have the same eigenvalues. We conclude that if there are exactly as many state variables as there are stable generalized eigenvalues and Z_11 is invertible, then (unless k_0 = 0 or ξ_t = 0 for all t = 0, 1, …) the non-explosive solution satisfies

    λ_t = Z_21 Z_11^{-1} k_t    (10)

    k_{t+1} = Z_11 S_11^{-1} T_11 Z_11^{-1} k_t + ξ_{t+1}    (11)

The upshot of this is that, from now on, we can treat F and P as known matrices.
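A compact Matlab sketch of this first-order step follows. It is in the spirit of the code referred to in the introduction, but the function name, the variable names and the crude eigenvalue-count check are mine; a and b are the matrices A and B of equation (8), and nk is the number of predetermined variables k_t.

    function [F, P] = solve_first_order(a, b, nk)
        % QZ decomposition: Q*a*Z = S and Q*b*Z = T, with S, T upper triangular.
        [S, T, Q, Z] = qz(a, b);
        % Matlab orders the pencil eigenvalues as diag(S)./diag(T), so the
        % stable pairs |s_ii| > |t_ii| lie outside the unit disk in that
        % convention; 'udo' moves them to the top-left corner.
        [S, T, Q, Z] = ordqz(S, T, Q, Z, 'udo');
        if abs(S(nk,nk)) <= abs(T(nk,nk))   % crude check: at least nk stable pairs
            error('Too few stable generalized eigenvalues.');
        end
        Z11 = Z(1:nk, 1:nk);    Z21 = Z(nk+1:end, 1:nk);
        S11 = S(1:nk, 1:nk);    T11 = T(1:nk, 1:nk);
        F = real(Z21 / Z11);                % lambda_t = F*k_t, equation (10)
        P = real(Z11*(S11\T11)/Z11);        % k_{t+1} = P*k_t + xi_{t+1}, equation (11)
    end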

6 Finding the second-order approximation by solving a linear system of equations

6.1 Rules of differentiation

6.2 The equations characterizing the second-order coefficients

By definition of the functions g and h (defined in (2) and (3)) we have

    E_t f(h(x, σ) + σε_{t+1}, g(h(x, σ) + σε_{t+1}, σ), x, g(x, σ)) ≡ 0

where f is the function defined in (1). Define z via

    z(x, σ) = E_t f(ĥ(x, σ) + σε_{t+1}, ĝ(ĥ(x, σ) + σε_{t+1}, σ), x, ĝ(x, σ)).

The second-order approximation is characterized by (i) D_x z(0, 0) = 0, (ii) H_xx z(0, 0) = 0 and (iii) H_σσ z(0, 0) = 0. Property (i) is taken care of by choosing P and F properly, as briefly described in Section 5. Property (ii) is taken care of by choosing E and G properly (given P and F), as described in Section 6.3. Finally, property (iii) is taken care of by choosing k_x and k_y appropriately, as described in Section 6.4.

6.3 Hessians

We will adopt the following notation. Denoting the arguments of f by x_1, x_2, x_3, x_4 (in that order), we define

    f_i = ∂f(0, 0, 0, 0)/∂x_i'

and

    f_ij = ∂²f(0, 0, 0, 0)/(∂x_i ∂x_j').

Defining m = n_x + n_y, the equation H_xx z(0, 0) = 0 becomes, using Theorem 9 in Chapter 6 of Magnus and Neudecker (1999),

    (f_1 ⊗ I_{n_x})G + (f_2 ⊗ I_{n_x})[(I_{n_y} ⊗ P')EP + (F ⊗ I_{n_x})G] + (f_4 ⊗ I_{n_x})E
    + (I_m ⊗ P')f_11 P + (I_m ⊗ (P'F'))f_22 FP + f_33 + (I_m ⊗ F')f_44 F
    + 2(I_m ⊗ P')f_12 FP + 2(I_m ⊗ P')f_13 + 2(I_m ⊗ P')f_14 F
    + 2(I_m ⊗ (P'F'))f_23 + 2(I_m ⊗ (P'F'))f_24 F + 2 f_34 F = 0.

In shorthand notation, we can summarize these equations via

    A_1 + A_2 E + A_3 E P + A_4 G = 0

where A_1 collects all the terms involving only the known matrices f_ij, F and P, while A_2 = f_4 ⊗ I_{n_x}, A_3 = (f_2 ⊗ I_{n_x})(I_{n_y} ⊗ P') and A_4 = (f_1 ⊗ I_{n_x}) + (f_2 ⊗ I_{n_x})(F ⊗ I_{n_x}). Taking vecs, we get

    vec(A_1) + (I_{n_x} ⊗ A_2) vec(E) + (P' ⊗ A_3) vec(E) + (I_{n_x} ⊗ A_4) vec(G) = 0.

Thus the linear system we have to solve is given by

    [ (I_{n_x} ⊗ A_2) + (P' ⊗ A_3)    (I_{n_x} ⊗ A_4) ] [ vec(E) ]   = −vec(A_1).
                                                        [ vec(G) ]
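In Matlab this system can be assembled and solved in a few lines. The sketch below assumes the shorthand blocks A1, A2, A3, A4 and the first-order matrices F and P are already in the workspace; the variable names are mine.

    nx = size(P, 1);  ny = size(F, 1);
    LHS = [kron(eye(nx), A2) + kron(P', A3),  kron(eye(nx), A4)];
    sol = LHS \ (-A1(:));                    % A1(:) is vec(A1)
    nE  = nx*ny*nx;                          % number of entries in vec(E)
    E   = reshape(sol(1:nE), nx*ny, nx);     % E is (nx*ny) x nx
    G   = reshape(sol(nE+1:end), nx^2, nx);  % G is nx^2 x nx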

6.4 Constants

The constants k_x and k_y are proportional to σ². Setting (without loss of generality) σ = 1, we have

    f_1 k_x + f_2 k_y + f_4 k_y + f_2 F k_x + f_2 tr((I_{n_y} ⊗ Σ)E)
    + tr((I_m ⊗ (ΣF'))f_22 F) + tr((I_m ⊗ Σ)f_11) + 2 tr((I_m ⊗ Σ)f_12 F) = 0

where we define the trace of an nm × n matrix

    A = [ A_1 ]
        [ A_2 ]
        [  ⋮  ]
        [ A_m ]

as the m × 1 vector

    tr(A) = [ tr(A_1) ]
            [ tr(A_2) ]
            [    ⋮    ]
            [ tr(A_m) ]
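Since the first four terms are linear in k_x and k_y, this is an m × m linear system in the constants. A sketch of this last step follows (my helper names; it assumes f1, f2, f4, f11, f12, f22, Sigma, E and F are already in the workspace); trv implements the stacked trace just defined.

    nx = size(Sigma, 1);  ny = size(F, 1);  m = nx + ny;
    % Stacked trace of an (m*n) x n matrix: an m x 1 vector of block traces.
    trv = @(A) cellfun(@trace, ...
          mat2cell(A, repmat(size(A,2), size(A,1)/size(A,2), 1), size(A,2)));
    % Constant (k-independent) terms of the equation above.
    c = f2*trv(kron(eye(ny), Sigma)*E) + trv(kron(eye(m), Sigma*F')*f22*F) ...
      + trv(kron(eye(m), Sigma)*f11) + 2*trv(kron(eye(m), Sigma)*f12*F);
    % Solve (f1 + f2*F)*k_x + (f2 + f4)*k_y + c = 0 for the constants.
    kxy = -([f1 + f2*F, f2 + f4] \ c);
    k_x = kxy(1:nx);  k_y = kxy(nx+1:end);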

References

Golub, G., & van Loan, C. (1996). Matrix Computations (3rd ed.). Baltimore and London: The Johns Hopkins University Press.

Kim, J., Kim, S., Schaumburg, E., & Sims, C. (2005). Calculating and Using Second Order Accurate Solutions of Discrete Time Dynamic Equilibrium Models. Manuscript.

King, R. G., & Watson, M. (2002). System Reduction and Solution Algorithms for Singular Linear Difference Systems under Rational Expectations. Computational Economics, 20(1-2), 57-86.

Klein, P. (2000). Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model. Journal of Economic Dynamics and Control, 24(10), 1405-1423.

Lombardo, G., & Sutherland, A. (2005). Computing Second-Order-Accurate Solutions for Rational Expectations Models Using Linear Solution Methods. European Central Bank Working Paper Series No. 487.

Magnus, J., & Neudecker, H. (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley and Sons.

Schmitt-Grohé, S., & Uribe, M. (2004). Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function. Journal of Economic Dynamics and Control, 28, 755-775.