Additional Codes using Finite Difference Method

Benjamin Moll

1 HJB Equation for Consumption-Saving Problem Without Uncertainty


Before considering the case with stochastic income in http://www.princeton.edu/~moll/HACTproject/HACT_Numerical_Appendix.pdf, it is useful to first really understand the case without uncertainty:

$$\rho v(a) = \max_c \; u(c) + v'(a)[w + ra - c] \qquad (1)$$

with a state constraint $a \geq \underline{a}$, and we assume $r < \rho$, $w > 0$ and $\underline{a} > -w/r$. For future reference, denote by $s(a) = w + ra - c(a)$ the savings policy function, where $c(a)$ is the optimal choice in (1). The state constraint implies $s(\underline{a}) = w + r\underline{a} - c(\underline{a}) \geq 0$. Since $u'(c(a)) = v'(a)$ and since $u$ is concave, therefore

$$v'(\underline{a}) \geq u'(w + r\underline{a}) \qquad (2)$$

As above, we use a finite difference method and approximate the function $v$ at $J$ discrete points in the space dimension, $a_i$, $i = 1, \dots, J$. We use equispaced grids, denote by $\Delta a$ the distance between grid points, and use the short-hand notation $v_i \equiv v(a_i)$. Again as before, one can implement either a so-called explicit method or an implicit method. As usual, the implicit method is the preferred approach because it is both more efficient and more stable/reliable. However, the explicit method is easier to explain, so we turn to it first.

1.1 Explicit Method

Simplest Possible Algorithm. See the matlab program HJB_no_uncertainty_simple.m. Given that there is no uncertainty and $r < \rho$, we know the following properties of the solution to (1): first, savings will be negative everywhere, $s(a) \leq 0$ for all $a$; and second, the borrowing constraint will always bind, and hence (2) holds with equality. Given these properties, an extremely simple algorithm can be used. In particular, use a backward difference approximation to $v'$ everywhere:

$$v_i' = \frac{v_i - v_{i-1}}{\Delta a}, \quad i \geq 2, \qquad v_1' = u'(w + ra_1) \qquad (3)$$

and update the value function using

$$\frac{v_i^{n+1} - v_i^n}{\Delta} + \rho v_i^n = u(c_i^n) + (v_i^n)'[w + ra_i - c_i^n] \qquad (4)$$

where $c_i^n = (u')^{-1}[(v_i^n)']$. As above, $\Delta$ is the step size of the explicit scheme, which cannot be too large (CFL condition). A small enough $\Delta$ also guarantees that the Barles-Souganidis conditions are satisfied. See http://www.princeton.edu/~moll/HACT.pdf and http://www.princeton.edu/~moll/HACTproject/HACT_Numerical_Appendix.pdf for more discussion.

Summary of Algorithm. Summarizing, the algorithm for finding a solution to the HJB equation (1) is as follows. Guess $v_i^0$, $i = 1, \dots, J$, and for $n = 0, 1, 2, \dots$ follow

1. Compute $(v_i^n)'$ from (3).
2. Compute $c_i^n$ from $c_i^n = (u')^{-1}[(v_i^n)']$.
3. Find $v_i^{n+1}$ from (4).
4. If $v^{n+1}$ is close enough to $v^n$: stop. Otherwise, go to step 1.

Upwind Scheme. Note that (3) is an upwind scheme. As explained above, an upwind scheme uses a forward difference approximation whenever the drift of the state variable (here, savings $s_i^n = w + ra_i - c_i^n$) is positive and a backward difference whenever it is negative. In the special case without uncertainty, we know that savings are negative everywhere and hence that one should always use the backward difference approximation. Instead of imposing that the backward difference is always used, we could have let the upwind scheme choose the correct approximation as follows: first compute savings according to both the backward and forward difference approximations $v_{i,F}'$ and $v_{i,B}'$:

$$s_{i,F} = w + ra_i - (u')^{-1}(v_{i,F}'), \qquad s_{i,B} = w + ra_i - (u')^{-1}(v_{i,B}')$$

where we suppress $n$ superscripts for notational simplicity. Then use the following approximation for $v_i'$:

$$v_i' = v_{i,F}'\,1\{s_{i,F} > 0\} + v_{i,B}'\,1\{s_{i,B} < 0\} + \bar{v}_i'\,1\{s_{i,F} < 0 < s_{i,B}\} \qquad (5)$$

where $1\{\cdot\}$ denotes the indicator function, and where $\bar{v}_i' = u'(w + ra_i)$. This scheme would find that $1\{s_{i,B} < 0\} = 1$ for all $i \geq 2$ and hence would pick the approximation in (3) by itself. This slightly more general solution algorithm is programmed up in HJB_no_uncertainty_explicit.m.

1.2 Implicit Method

See HJB_no_uncertainty_implicit.m and also see Section 1.2 of http://www.princeton.edu/~moll/HACTproject/HACT_Numerical_Appendix.pdf for a detailed explanation in the

version with uncertainty. Relative to the explicit scheme in (4), an implicit scheme differs in how $v_i^{n+1}$ is updated. In particular, $v_i^{n+1}$ is now implicitly defined by the equation

$$\frac{v_i^{n+1} - v_i^n}{\Delta} + \rho v_i^{n+1} = u(c_i^n) + (v_{i,F}^{n+1})'[w + ra_i - c_{i,F}^n]^+ + (v_{i,B}^{n+1})'[w + ra_i - c_{i,B}^n]^- \qquad (6)$$

where $c_i^n = (u')^{-1}[(v_i^n)']$ and $(v_i^n)'$ is given by (5). For any number $x$, the notation $x^+$ means the positive part of $x$, i.e. $x^+ = \max\{x, 0\}$, and analogously $x^- = \min\{x, 0\}$; that is, $[w + ra_i - c_{i,F}^n]^+ = \max\{w + ra_i - c_{i,F}^n, 0\}$ and $[w + ra_i - c_{i,B}^n]^- = \min\{w + ra_i - c_{i,B}^n, 0\}$.

Equation (6) constitutes a system of $J$ linear equations, and it can be written in matrix notation using the following steps. Substituting the finite difference approximations to the derivatives, and defining $s_{i,F}^n = w + ra_i - c_{i,F}^n$ and similarly for $s_{i,B}^n$, (6) is

$$\frac{v_i^{n+1} - v_i^n}{\Delta} + \rho v_i^{n+1} = u(c_i^n) + \frac{v_{i+1}^{n+1} - v_i^{n+1}}{\Delta a}(s_{i,F}^n)^+ + \frac{v_i^{n+1} - v_{i-1}^{n+1}}{\Delta a}(s_{i,B}^n)^-$$

Collecting terms with the same subscripts on the right-hand side,

$$\frac{v_i^{n+1} - v_i^n}{\Delta} + \rho v_i^{n+1} = u(c_i^n) + v_{i-1}^{n+1} x_i + v_i^{n+1} y_i + v_{i+1}^{n+1} z_i$$

where

$$x_i = -\frac{(s_{i,B}^n)^-}{\Delta a}, \qquad y_i = -\frac{(s_{i,F}^n)^+}{\Delta a} + \frac{(s_{i,B}^n)^-}{\Delta a}, \qquad z_i = \frac{(s_{i,F}^n)^+}{\Delta a} \qquad (7)$$

Note that, importantly, $x_1 = z_J = 0$, so $v_0^{n+1}$ and $v_{J+1}^{n+1}$ are never used. Equation (7) is a linear system which can be written in matrix notation as

$$\frac{1}{\Delta}(v^{n+1} - v^n) + \rho v^{n+1} = u^n + A^n v^{n+1}, \qquad A^n = \begin{pmatrix} y_1 & z_1 & 0 & \cdots & 0 \\ x_2 & y_2 & z_2 & \cdots & 0 \\ 0 & x_3 & y_3 & z_3 & \cdots \\ \vdots & & \ddots & \ddots & \ddots \\ 0 & \cdots & 0 & x_J & y_J \end{pmatrix}$$

1.3 Results

Figure 1 plots the function $s(a)$.
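As a concrete illustration, the implicit scheme of Section 1.2 can be sketched in Python with NumPy/SciPy. The codes accompanying these notes are in MATLAB; this translation, along with the CRRA utility $u(c) = c^{1-\gamma}/(1-\gamma)$, the parameter values, and the grid below, is an illustrative assumption rather than a transcription of those codes.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Illustrative assumptions: CRRA utility and parameters with r < rho, w > 0.
gamma, rho, r, w = 2.0, 0.05, 0.03, 0.1
J, amin, amax = 500, 0.0, 1.0          # borrowing constraint at a = amin
a = np.linspace(amin, amax, J)
da = a[1] - a[0]
Delta = 1000.0                          # implicit scheme tolerates large steps

u = lambda c: c ** (1 - gamma) / (1 - gamma)   # utility
uprime = lambda c: c ** (-gamma)               # u'(c)
uprime_inv = lambda x: x ** (-1.0 / gamma)     # (u')^{-1}

v = u(w + r * a) / rho                  # initial guess: consume income forever
for n in range(1000):
    # forward and backward differences with boundary conditions
    dvf, dvb = np.empty(J), np.empty(J)
    dvf[:-1] = (v[1:] - v[:-1]) / da
    dvf[-1] = uprime(w + r * a[-1])     # state constraint at amax (s <= 0)
    dvb[1:] = (v[1:] - v[:-1]) / da
    dvb[0] = uprime(w + r * a[0])       # boundary condition (2) with equality
    sf = w + r * a - uprime_inv(dvf)    # drift under forward difference
    sb = w + r * a - uprime_inv(dvb)    # drift under backward difference
    # upwind choice of the derivative, equation (5)
    dv = np.where(sf > 0, dvf, np.where(sb < 0, dvb, uprime(w + r * a)))
    c = uprime_inv(dv)
    # tridiagonal coefficients of equation (7)
    x = -np.minimum(sb, 0) / da
    y = -np.maximum(sf, 0) / da + np.minimum(sb, 0) / da
    z = np.maximum(sf, 0) / da
    A = diags([x[1:], y, z[:-1]], offsets=[-1, 0, 1], format="csc")
    # implicit update: ((1/Delta + rho) I - A) v^{n+1} = u(c) + v^n / Delta
    B = (1 / Delta + rho) * identity(J, format="csc") - A
    v_new = spsolve(B, u(c) + v / Delta)
    if np.max(np.abs(v_new - v)) < 1e-6:
        v = v_new
        break
    v = v_new
s = w + r * a - c   # savings policy: zero at the constraint, negative elsewhere
```

Because the update is implicit, the step size $\Delta$ can be very large and the iteration typically converges in a few dozen steps, in line with the efficiency advantage noted above.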

Figure 1: Savings Behavior in Model Without Uncertainty

2 Solving the Neoclassical Growth Model

See the matlab codes HJB_NGM.m and HJB_NGM_implicit.m. Finally, and for completeness, let us solve the neoclassical growth model, which is the prototypical dynamic programming problem in macroeconomics. The HJB equation is

$$\rho V(k) = \max_c \; U(c) + V'(k)[F(k) - \delta k - c] \qquad (8)$$

As before, $s(k) = F(k) - \delta k - c(k)$ and $c(k) = (U')^{-1}(V'(k))$ denote optimal savings and consumption. We approximate $V$ at $I$ discrete grid points and use the short-hand notation $V_i \equiv V(k_i)$. We first implement an explicit and then an implicit method. As usual, the implicit method is preferable due to its better efficiency and stability properties.

2.1 Explicit Method

See HJB_NGM.m. The explicit method starts with a guess $V^0 = (V_1^0, \dots, V_I^0)$ and for $n = 0, 1, 2, \dots$ updates $V$ according to

$$\frac{V_i^{n+1} - V_i^n}{\Delta} + \rho V_i^n = U(c_i^n) + (V_i^n)'[F(k_i) - \delta k_i - c_i^n] \qquad (9)$$

$$c_i^n = (U')^{-1}[(V_i^n)'] \qquad (10)$$

Upwind Scheme. The derivative $V'(k_i)$ is again approximated using an upwind scheme. That is, compute savings according to both the backward and forward difference approximations $V_{i,F}'$ and $V_{i,B}'$:

$$s_{i,F} = F(k_i) - \delta k_i - (U')^{-1}(V_{i,F}'), \qquad s_{i,B} = F(k_i) - \delta k_i - (U')^{-1}(V_{i,B}')$$

and then use the following approximation for $V_i'$:

$$V_i' = V_{i,F}'\,1\{s_{i,F} > 0\} + V_{i,B}'\,1\{s_{i,B} < 0\} + \bar{V}_i'\,1\{s_{i,F} < 0 < s_{i,B}\}$$

where $\bar{V}_i' = U'(F(k_i) - \delta k_i)$. Note again that the case $s_{i,F} > s_{i,B}$ will not occur because $V$ is concave.

Remark. We know that the neoclassical growth model (8) has a steady state $k^*$ satisfying $F'(k^*) = \rho + \delta$ and that at this steady state $V'(k^*) = U'(F(k^*) - \delta k^*)$. Note that the upwind scheme in effect uses the condition on the value function at the steady state $k^*$ as a boundary condition. It then uses a backward difference approximation below the steady state, and a forward difference approximation above the steady state.

2.2 Implicit Method

See HJB_NGM_implicit.m. The algorithm is exactly the same as in Section 1.2. Also see Section 1.2 of http://www.princeton.edu/~moll/HACTproject/HACT_Numerical_Appendix.pdf.

2.3 Results

Figure 2 plots the savings policy function in the neoclassical growth model.

Figure 2: Savings Policy Function in Neoclassical Growth Model
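The explicit upwind update (9)-(10) can likewise be sketched in Python. The production function $F(k) = k^\alpha$, the CRRA utility, and all parameter and grid values below are illustrative assumptions (the notes' own code is the MATLAB file HJB_NGM.m); note the small step size $\Delta$ that the explicit scheme requires for stability.

```python
import numpy as np

# Illustrative assumptions: F(k) = k^alpha, CRRA utility U(c).
alpha, delta, rho, gamma = 0.3, 0.05, 0.05, 2.0
kstar = (alpha / (rho + delta)) ** (1 / (1 - alpha))   # F'(k*) = rho + delta
I = 150
k = np.linspace(0.3 * kstar, 1.8 * kstar, I)
dk = k[1] - k[0]
Delta = 0.02                     # explicit step: must be small (CFL condition)

F = lambda k: k ** alpha
U = lambda c: c ** (1 - gamma) / (1 - gamma)
Uprime = lambda c: c ** (-gamma)
Uprime_inv = lambda x: x ** (-1.0 / gamma)

y = F(k) - delta * k             # net output: consumption that makes drift zero
V = U(y) / rho                   # initial guess: consume net output forever
for n in range(200000):
    # forward and backward differences with state-constraint boundaries
    dVf, dVb = np.empty(I), np.empty(I)
    dVf[:-1] = (V[1:] - V[:-1]) / dk
    dVf[-1] = Uprime(y[-1])      # s <= 0 at kmax
    dVb[1:] = (V[1:] - V[:-1]) / dk
    dVb[0] = Uprime(y[0])        # s >= 0 at kmin
    sf = y - Uprime_inv(dVf)
    sb = y - Uprime_inv(dVb)
    # upwind choice; where the drift would be zero, use V' = U'(F(k) - delta*k)
    dV = np.where(sf > 0, dVf, np.where(sb < 0, dVb, Uprime(y)))
    c = Uprime_inv(dV)
    # explicit update, equations (9)-(10)
    V_new = V + Delta * (U(c) + dV * (y - c) - rho * V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
s = y - c    # savings policy: positive below k*, negative above
```

Consistent with the Remark above, the computed $s(k)$ crosses zero at the steady state $k^*$, with a backward difference selected below it and a forward difference above it; reaching convergence takes thousands of small steps, which is why the implicit variant is preferred.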