Stochastic Dynamic Programming: The One Sector Growth Model


Esteban Rossi-Hansberg
Princeton University
March 26, 2012

References

We will study stochastic dynamic programming using the application in Section 13.3 of SLP. It requires previous knowledge of Chapters 8 to 12.

Sequential Problem

The problem we will study is given in sequential form by

$$\max_{\{y_t\}_{t=0}^{\infty}} \; E\left[\sum_{t=0}^{\infty} \beta^t U(x_t - y_t)\right]$$

subject to

$$x_{t+1} = z_t f(y_t), \quad t = 0, 1, \dots$$
$$0 \le y_t \le x_t, \quad t = 0, 1, \dots$$
$$x_0 \ge 0 \text{ given.}$$

Recursive Problem

The functional equation of this problem is given by

$$v(x) = \max_{0 \le y \le x} \left[ U(x - y) + \beta \int v(f(y)z) \,\mu(dz) \right] \tag{1}$$

where $x$ is the quantity of output, $y$ is the quantity carried over as capital for next period's production, and $x - y$ is the quantity consumed. Next period, a technology shock $z$ drawn from the distribution $\mu$ is realized, and so $f(y)z$ units of output are produced tomorrow.
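The fixed point of (1) can be approximated by value function iteration on a grid, the standard numerical counterpart of the contraction arguments that follow. The functional forms below (a square-root $U$, a technology with $f(0) > 0$, uniform shock weights) are hypothetical choices for this sketch only, not the primitives in SLP, and are not guaranteed to satisfy every assumption literally.

```python
import numpy as np

# Illustrative primitives (hypothetical choices for this sketch only)
beta = 0.95
U = np.sqrt                                  # strictly increasing, strictly concave
def f(y):
    return 1.0 + np.sqrt(y)                  # f(0) > 0, increasing, concave
z_grid = np.linspace(1.0, 1.5, 5)            # discretized shock support Z = [1, zbar]
mu_w = np.full(z_grid.size, 1.0 / z_grid.size)  # uniform weights standing in for mu

x_grid = np.linspace(1e-3, 8.0, 200)         # grid for the state space X = [0, xbar]

def bellman(v):
    """One application of T: (Tv)(x) = max_{0<=y<=x} U(x-y) + beta * E[v(f(y)z)]."""
    # Expected continuation value as a function of savings y, on the same grid
    w = sum(p * np.interp(f(x_grid) * z, x_grid, v) for z, p in zip(z_grid, mu_w))
    Tv = np.empty_like(v)
    g = np.empty_like(v)
    for i, x in enumerate(x_grid):
        vals = U(x - x_grid[:i + 1]) + beta * w[:i + 1]  # y on grid points <= x
        j = np.argmax(vals)
        Tv[i], g[i] = vals[j], x_grid[j]
    return Tv, g

v = np.zeros_like(x_grid)
for it in range(600):
    v_new, g = bellman(v)
    err = np.max(np.abs(v_new - v))          # sup-norm distance, as in the CMT
    v = v_new
    if err < 1e-6:
        break
```

On this grid the iterates inherit monotonicity from $U$, and the computed $v$ comes out strictly increasing with a feasible policy, consistent with the lemmas proved in these slides.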

Utility

We impose the following assumptions on utility:

U1: $0 < \beta < 1$;
U2: $U$ is continuous;
U3: $U$ is strictly increasing;
U4: $U$ is strictly concave;
U5: $U$ is continuously differentiable.

Technology

We also impose the following assumptions on technology:

T1: $f$ is continuous;
T2: $f(0) > 0$, and for some $\bar{x} > 0$: $x \le f(x) \le \bar{x}$ for all $0 \le x \le \bar{x}$, and $f(x) < x$ for all $x > \bar{x}$. This is used to bound the state space: in particular, define the state space to be $X = [0, \bar{x}]$ with Borel sets $\mathcal{X}$;
T3: $f$ is strictly increasing;
T4: $f$ is (weakly) concave;
T5: $f$ is continuously differentiable and $\beta f'(0) > 1$.

Technology Shocks

We will impose the following assumptions on technology shocks:

Z1: $Z = [1, \bar{z}]$, where $1 < \bar{z} < +\infty$, with Borel sets $\mathcal{Z}$. This is used for the integral in the functional equation to be well defined. Remember that a Borel algebra is the smallest $\sigma$-algebra on $\mathbb{R}$ containing all open sets; any set in this $\sigma$-algebra is called a Borel set;
Z2: $\{z_t\}$ is an i.i.d. sequence of shocks, each drawn according to the probability measure $\mu$ on $(Z, \mathcal{Z})$. This is needed in order for $z$ not to be a state variable;
Z3: for some $\alpha > 0$, $\mu((a, b]) \ge \alpha(b - a)$ for all $(a, b] \subseteq Z$, so that $\mu$ assigns positive probability to all nondegenerate subintervals of $Z$.

Existence and Uniqueness of the Value Function

Lemma. There is a unique bounded continuous function $v : X \to \mathbb{R}$ satisfying

$$v(x) = \max_{0 \le y \le x} \left[ U(x - y) + \beta \int v(f(y)z) \,\mu(dz) \right].$$

Furthermore, $v$ is strictly increasing and strictly concave.

Proof: Blackwell

To prove this we want to use Theorem 9.6 in SLP, which in turn uses Theorem 4.6, an application of the Theorem of the Maximum and the Contraction Mapping Theorem. In particular, we want to prove that the Blackwell sufficient conditions for a contraction hold. A contraction (with modulus $\beta \in (0, 1)$) is an operator $T : S \to S$ on a metric space $(S, \rho)$ such that $\rho(Tx, Ty) \le \beta \rho(x, y)$ for all $x, y \in S$.

Proof: Blackwell

Let $B(X)$ be the space of bounded functions on $X$ with the sup norm, and let $T : B(X) \to B(X)$ be a functional operator. The Blackwell sufficient conditions are:

1. Monotonicity: if $f, g \in B(X)$ and $f(x) \le g(x)$ for all $x \in X$, then $(Tf)(x) \le (Tg)(x)$ for all $x \in X$.
2. Discounting: there exists some $\beta \in (0, 1)$ such that for all $f \in B(X)$, $a \ge 0$, and $x \in X$, $[T(f + a)](x) \le (Tf)(x) + \beta a$.

If these conditions are satisfied, Theorem 3.3 in SLP guarantees that the operator $T$ is a contraction.
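Both conditions can be sanity-checked numerically for a discretized Bellman operator of the growth model. The primitives below are hypothetical stand-ins (not the model's actual $U$, $f$, or $\mu$), chosen only so the operator is cheap to evaluate; note that for this operator discounting in fact holds with equality.

```python
import numpy as np

# Hypothetical primitives for a discretized Bellman operator (sketch only)
beta = 0.9
U = np.log1p                                 # U(c) = log(1 + c): increasing, concave
def f(y):
    return 1.0 + np.sqrt(y)
x_grid = np.linspace(0.0, 5.0, 80)
z_grid = np.linspace(1.0, 1.4, 4)
mu_w = np.full(z_grid.size, 1.0 / z_grid.size)

def T(v):
    """(Tv)(x) = max_{0<=y<=x} U(x-y) + beta * E[v(f(y)z)], y restricted to the grid."""
    w = sum(p * np.interp(f(x_grid) * z, x_grid, v) for z, p in zip(z_grid, mu_w))
    return np.array([np.max(U(x - x_grid[:i + 1]) + beta * w[:i + 1])
                     for i, x in enumerate(x_grid)])

rng = np.random.default_rng(0)
v = rng.normal(size=x_grid.size)
w_above = v + rng.uniform(0.0, 1.0, size=x_grid.size)   # w_above >= v pointwise

# Monotonicity: v <= w implies Tv <= Tw
mono_ok = bool(np.all(T(v) <= T(w_above) + 1e-12))

# Discounting: T(v + a) <= Tv + beta*a; for this operator it holds with equality
a = 2.0
disc_ok = bool(np.allclose(T(v + a), T(v) + beta * a))
```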

Proof: CMT

If $T$ is a contraction, the Contraction Mapping Theorem (3.2 in SLP) says that if $(B(X), \rho)$ is a complete metric space (a space with a metric satisfying positivity, symmetry, and the triangle inequality, in which every Cauchy sequence converges; $\{x_n\}$ is Cauchy if for all $\varepsilon > 0$ there exists $N(\varepsilon)$ such that $\rho(x_n, x_m) < \varepsilon$ for all $n, m > N(\varepsilon)$) and $T$ is a contraction, then $T$ has a unique fixed point $v$, and for any $v_0$,

$$\rho(T^n v_0, v) \le \beta^n \rho(v_0, v).$$

Theorem 4.6 is an application of this result to the space of bounded continuous functions.
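The $\beta^n$ bound is easy to see in a toy example: a sketch with an arbitrary affine contraction on $\mathbb{R}$ (modulus $\beta = 0.5$, chosen for illustration), where the distance to the fixed point after $n$ iterations is exactly $\beta^n$ times the initial distance.

```python
# A one-dimensional contraction: T(x) = beta*x + 1, with modulus beta < 1
beta = 0.5
def T(x):
    return beta * x + 1.0

x_star = 1.0 / (1.0 - beta)      # the unique fixed point: T(x_star) = x_star

x = 10.0                          # arbitrary starting point v0
d0 = abs(x - x_star)
for n in range(1, 40):
    x = T(x)
    # CMT bound: rho(T^n v0, v) <= beta^n * rho(v0, v)
    assert abs(x - x_star) <= beta ** n * d0 + 1e-12
```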

Proof: Theorem of the Maximum

We can use the Theorem of the Maximum to show that the policy correspondence is compact-valued and u.h.c. The Theorem of the Maximum says: define

$$h(x) = \max_{y \in \Gamma(x)} f(x, y), \qquad G(x) = \{ y \in \Gamma(x) : f(x, y) = h(x) \},$$

where $f$ is continuous and $\Gamma$ is a compact-valued and continuous correspondence. Then $h$ is continuous and $G$ is non-empty, compact-valued, and u.h.c.: for every sequence $x_n \to x$ and every sequence $\{y_n\}$ such that $y_n \in \Gamma(x_n)$ for all $n$, there exists a convergent subsequence of $\{y_n\}$ whose limit point $y$ is in $\Gamma(x)$ (no open boundaries).

Proof: CMT and Theorem of the Maximum

For our purposes we need to show that none of these arguments fails because of the integral in (1). In particular, we need to show that $\int v(f(y)z) \,\mu(dz)$ is bounded and continuous if $v$ and $f$ are bounded and continuous. This is trivially true in our case, since the conditional probability of $z'$ given $z$ is the same as the unconditional probability; that is, the transition function for the shock is such that $Q(z, dz') = \mu(dz')$. If this were not the case, we would need to guarantee that $Q$ satisfies the Feller property, namely that if $f$ is bounded and continuous, then $\int f(z') Q(z, dz')$ is also bounded and continuous, which is equivalent to $Q$ being continuous in $z$.

Proof: Increasing

To show that $v$ is strictly increasing and strictly concave, we use a corollary of the Contraction Mapping Theorem. First note that the space of bounded, continuous, weakly increasing, and weakly concave functions is a complete metric space. Given this, if the operator $T$ maps this set into the set of strictly increasing and strictly concave functions, we know that the fixed point has to be strictly increasing and strictly concave. Because of U3, the operator maps increasing functions into strictly increasing ones; note that $\Gamma(x) = [0, x]$ is also increasing in $x$. Because of U4, the same is true for concavity: $\Gamma$ is convex in the sense that for all $\theta \in [0, 1]$, if $y \in \Gamma(x)$ and $y' \in \Gamma(x')$, then $\theta y + (1 - \theta) y' \in \Gamma(\theta x + (1 - \theta) x')$. Hence $v$ is strictly increasing and strictly concave.

Characterization of the Policy Function

Lemma. The policy function $g : X \to X$ is single-valued, continuous, and strictly increasing, with $g(0) = 0$ and $0 < g(x) < x$ for all $0 < x \le \bar{x}$. Furthermore, the consumption function $c : X \to X$ defined by $c(x) = x - g(x)$ is also strictly increasing.

Proof

The first part comes from the fact that we are maximizing a continuous and strictly concave function over a convex set, so there is a unique maximum; and since a single-valued u.h.c. correspondence is a continuous function, $g$ is continuous. To show that $g$ is increasing, suppose not: then there exist $x' > x$ with $g(x') \le g(x)$, so that $x' - g(x') > x - g(x)$ and hence $U'(x' - g(x')) < U'(x - g(x))$. By the first order condition,

$$U'(x - g(x)) = \beta \int v'(f(g(x))z) f'(g(x))\, z \,\mu(dz),$$

this implies

$$\int v'(f(g(x'))z) f'(g(x'))\, z \,\mu(dz) < \int v'(f(g(x))z) f'(g(x))\, z \,\mu(dz),$$

a contradiction with $v$ strictly concave: since $g(x') \le g(x)$, concavity of $v$ and $f$ makes the left-hand side weakly larger. Hence $g$ is strictly increasing.

Proof

To show that $g(0) = 0$, notice that $\Gamma(0) = \{0\}$. To show that $0 < g(x) < x$ for $0 < x \le \bar{x}$, suppose instead that $g(x) = x$ for some such $x$. Since $g$ is strictly increasing, next period we would need $g(f(g(x))z) = g(x)$ for all $z$. But $g(f(g(x))z) > g(g(x)) = g(x)$ for some $z$: a contradiction. To show that $c(x)$ is increasing, notice that since $v$ is strictly concave, $x' > x$ implies $v'(x') < v'(x)$, and so by T3, T4, and $g$ increasing,

$$\int v'(f(g(x'))z) f'(g(x'))\, z \,\mu(dz) < \int v'(f(g(x))z) f'(g(x))\, z \,\mu(dz),$$

which implies by the first order condition that $U'(c(x')) < U'(c(x))$. So by U4, $c(x)$ is increasing.

Differentiability

Lemma. $v$ is continuously differentiable, with $v'(x) = U'(x - g(x))$.

Proof: Benveniste and Scheinkman

With assumptions U4 and U5 this is just an application of the Benveniste and Scheinkman Theorem (SLP 4.10): $v$ is differentiable at $x_0$ if there exists a concave, differentiable function $W$ defined on a neighborhood of $x_0$ with the property that $W(x) \le v(x)$ on that neighborhood, with equality at $x = x_0$. Define

$$W(x) = U(x - g(x_0)) + \beta \int v(f(g(x_0))z) \,\mu(dz).$$

Then, since $g(x_0)$ maximizes the right-hand side of (1) at $x = x_0$ but is merely feasible elsewhere, we know that $W(x) \le v(x)$ with equality at $x_0$. Clearly $W'(x) = U'(x - g(x_0))$, so $v'(x_0) = U'(x_0 - g(x_0))$.

Euler Equation

The Euler equation for this problem is therefore given by

$$U'(c(x)) = \beta \int U'\big(c(f(g(x))z)\big) f'(g(x))\, z \,\mu(dz), \tag{2}$$

and the policy function defines a Markov process on $X$ corresponding to the stochastic difference equation $x_{t+1} = f(g(x_t)) z_{t+1}$.

Transition Function

We can define the transition function $P$ using $f$, $g$, and $\mu$: let

$$P(x, A) = \mu(\{ z \in Z : f(g(x)) z \in A \}), \quad \text{all } A \in \mathcal{X}.$$

In principle we need to show that $P(x, \cdot)$ is a probability measure (see Theorem 8.9). Notice that since $f$ and $g$ are continuous, we have

$$\int_X h(x') P(x, dx') = \int_Z h(f(g(x))z) \,\mu(dz),$$

and so if $h$ is a continuous function, the Lebesgue Dominated Convergence Theorem implies that if $x_n \to x$,

$$\lim_{n \to \infty} \int_Z h(f(g(x_n))z) \,\mu(dz) = \int_Z h(f(g(x))z) \,\mu(dz).$$

Hence $P$ has the Feller property. Since $X$ is compact, we can conclude that:
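The Markov process that $P$ encodes is straightforward to simulate: draw i.i.d. shocks from $\mu$ and iterate $x_{t+1} = f(g(x_t)) z_{t+1}$. Since the true policy $g$ has no closed form, the sketch below uses a hypothetical constant-savings-rate rule and uniform shocks, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(y):
    return 1.0 + np.sqrt(y)       # hypothetical technology with f(0) > 0

def g(x):
    return 0.4 * x                # stand-in policy: save a constant fraction

def simulate(x0, T=1000):
    """Draw a sample path of x_{t+1} = f(g(x_t)) * z_{t+1}, with z ~ U[1, 1.4]."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        z = rng.uniform(1.0, 1.4)
        x[t + 1] = f(g(x[t])) * z
    return x

path = simulate(0.5)
```

The path leaves any neighborhood of zero immediately (here $f(0) > 0$) and settles into a bounded stochastic steady state, the behavior the invariant-measure results formalize.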

Invariant Measure

Lemma. $P$ has at least one invariant measure.

See Theorem 12.10 in SLP. The proof is an application of Helly's Theorem, which states: if $\{\mu_n\}$ is a sequence of probability measures on $(\mathbb{R}^l, \mathcal{B})$ with the property that, for some $a, b \in \mathbb{R}^l$, $\mu_n((-\infty, a]) = 0$ and $\mu_n((-\infty, b]) = 1$ for $n = 1, 2, \dots$, then there exists a probability measure $\mu$ with $\mu((-\infty, a]) = 0$ and $\mu((-\infty, b]) = 1$, and a subsequence of $\{\mu_n\}$ that converges weakly to it.

Weak Convergence

We now want to prove that this invariant measure $\lambda$ is unique and that the system converges weakly to it from any initial condition; that is, $T^{*n} \lambda_0 \to \lambda$. Here $T^*$ is the operator that maps distributions today, given the transition function, into distributions tomorrow. In particular, in our example,

$$(T^* \lambda_0)(A) = \int P(x, A) \,\lambda_0(dx).$$

A sequence of measures $\{\lambda_n\}$ is said to converge weakly to $\lambda$ if, for every $f \in C(S)$ (the set of bounded continuous functions),

$$\lim_{n \to \infty} \int f \,d\lambda_n = \int f \,d\lambda,$$

or, equivalently, $\lim_{n \to \infty} \lambda_n(A) = \lambda(A)$ for every $A \in \mathcal{X}$ whose boundary has $\lambda$-measure zero.
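Weak convergence from any initial condition can be illustrated by pushing two very different initial distributions forward through the transition law and comparing the resulting empirical CDFs after many periods. The sketch below reuses the hypothetical stand-in primitives from the simulation above; the closeness of the two CDFs is what uniqueness of $\lambda$ predicts.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(y):
    return 1.0 + np.sqrt(y)       # hypothetical technology
def g(x):
    return 0.4 * x                # stand-in policy

def push_forward(xs, n):
    """Apply x' = f(g(x)) z, z ~ U[1, 1.4], n times to a sample from lambda_0."""
    for _ in range(n):
        z = rng.uniform(1.0, 1.4, size=xs.size)
        xs = f(g(xs)) * z
    return xs

a = push_forward(np.full(5000, 0.1), 200)   # lambda_0 = point mass at x = 0.1
b = push_forward(np.full(5000, 4.0), 200)   # lambda_0 = point mass at x = 4.0

# Compare empirical CDFs on a grid: small gap suggests a common weak limit
grid = np.linspace(0.0, 5.0, 50)
cdf_a = (a[:, None] <= grid).mean(axis=0)
cdf_b = (b[:, None] <= grid).mean(axis=0)
gap = float(np.max(np.abs(cdf_a - cdf_b)))
```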

Unique Invariant Measure

To prove this we want to apply Theorem 12.12 in SLP. The key to this result is to show that $P$ is monotone (if $h$ is nondecreasing, then $(Th)(x) = \int h(x') P(x, dx')$ is nondecreasing), has the Feller property, and satisfies the mixing condition. The mixing condition says: there exist $c \in X$, $\varepsilon > 0$, and $N \ge 1$ such that

$$P^N(0, [c, \bar{x}]) \ge \varepsilon \quad \text{and} \quad P^N(\bar{x}, [0, c]) \ge \varepsilon.$$

Monotone: in this case the fact that $g$ is strictly increasing implies that $x \mapsto \int_Z h(f(g(x))z) \,\mu(dz)$ is increasing whenever $h$ is, and so $P$ is monotone (the associated conditional expectation operator maps monotone functions into monotone functions).

Feller: shown above.
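The monotone property can also be checked directly on a grid: for a nondecreasing test function $h$, the map $x \mapsto \int h(f(g(x))z)\,\mu(dz)$ should be nondecreasing. Again, $f$, $g$, and the shock weights below are hypothetical stand-ins.

```python
import numpy as np

def f(y):
    return 1.0 + np.sqrt(y)           # hypothetical technology
def g(x):
    return 0.4 * x                     # stand-in increasing policy
z_grid = np.linspace(1.0, 1.4, 9)      # quadrature nodes on Z
mu_w = np.full(z_grid.size, 1.0 / z_grid.size)

def Th(h, xs):
    """(Th)(x) = E[h(f(g(x)) z)], evaluated at each point of xs."""
    return np.array([sum(p * h(f(g(x)) * z) for z, p in zip(z_grid, mu_w))
                     for x in xs])

xs = np.linspace(0.0, 5.0, 60)
for h in (np.sqrt, np.tanh, lambda x: np.minimum(x, 2.0)):  # nondecreasing test functions
    vals = Th(h, xs)
    # Th inherits monotonicity from h because f(g(.)) is increasing
    assert np.all(np.diff(vals) >= 0.0)
```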

Mixing

Mixing is in general the hardest part of these problems. Let $\hat{z}$ be the mean of $\mu$, that is, $\hat{z} = \int z \,\mu(dz)$, and let $x^*$ be the unique value satisfying

$$\beta f'(g(x^*))\, \hat{z} = 1. \tag{3}$$

We want to show that the mixing condition holds at the point $c = x^*$.

Mixing

For any $x \in X$ and $z \in Z$, define the sequence $\{\phi_n(x, z)\}$ by

$$\phi_0(x, z) = x, \qquad \phi_{n+1}(x, z) = f(g(\phi_n(x, z)))\, z, \quad n = 0, 1, 2, \dots$$

Note that $\phi_n(x, z)$ is the quantity of goods available at the beginning of period $n$ if $x$ is the quantity at the beginning of period 0 and the shock is constant at $z$. Note also that since $f$ and $g$ are continuous and strictly increasing, $\phi_n$ is continuous and strictly increasing in both arguments. This implies by Z3 that, for $\delta > 0$,

$$P^n(x, (\phi_n(x, \bar{z} - \delta), \phi_n(x, \bar{z})]) \ge \big(\mu((\bar{z} - \delta, \bar{z}])\big)^n \ge \alpha^n \delta^n$$

and

$$P^n(x, (\phi_n(x, 1), \phi_n(x, 1 + \delta)]) \ge \big(\mu((1, 1 + \delta])\big)^n \ge \alpha^n \delta^n.$$

Mixing

We want to use these two inequalities to show the result. Consider the case of $x = 0$, and therefore the sequence $\{\phi_n(0, \bar{z})\}$. Since

$$\phi_0(0, \bar{z}) = 0 < f(0)\bar{z} = f(g(0))\bar{z} = \phi_1(0, \bar{z}),$$

it follows by induction, since $f$ and $g$ are monotone, that the sequence is nondecreasing. Note that assumption T5 is crucial here: if it is not satisfied, then $\{0\}$ is an ergodic set.
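The monotone sequence $\{\phi_n(0, \bar{z})\}$ can be traced numerically. With the same hypothetical $f$ and stand-in policy $g$ as in the earlier sketches, starting from $x = 0$ and holding the shock at its maximum, the iterates rise monotonically to a limit $\xi = f(g(\xi))\bar{z}$.

```python
import math

def f(y):
    return 1.0 + math.sqrt(y)         # hypothetical technology with f(0) > 0
def g(x):
    return 0.4 * x                     # stand-in increasing policy
zbar = 1.4                             # top of the shock support Z = [1, zbar]

phi = [0.0]                            # phi_0(0, zbar) = 0
for n in range(200):
    phi.append(f(g(phi[-1])) * zbar)   # phi_{n+1} = f(g(phi_n)) * zbar

xi = phi[-1]                           # numerical limit xi = f(g(xi)) * zbar
```

Here $f(0)\bar{z} = 1.4 > 0$ drives the first step, and convergence is geometric because the map $x \mapsto f(g(x))\bar{z}$ has slope below one near its fixed point.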

Mixing

Since the sequence is bounded above by $\bar{x}$, it converges to some value $\xi \in X$, where $\xi = f(g(\xi))\bar{z}$. Hence, from the first order condition in (2),

$$\begin{aligned}
U'(c(\xi)) &= \beta \int U'\big(c(f(g(\xi))z)\big) f'(g(\xi))\, z \,\mu(dz) \\
&= \beta f'(g(\xi)) \int U'\!\left(c\!\left(\frac{\xi z}{\bar{z}}\right)\right) z \,\mu(dz) \\
&> \beta f'(g(\xi))\, U'(c(\xi)) \int z \,\mu(dz) = \beta f'(g(\xi))\, U'(c(\xi))\, \hat{z},
\end{aligned}$$

where the inequality follows from $z / \bar{z} < 1$ for $z < \bar{z}$, together with U4 and $c$ increasing.

Mixing

So $1 > \beta f'(g(\xi))\, \hat{z}$, which implies by (3) and T4 (so $f'$ is decreasing and $g$ increasing) that $\xi > x^*$. Hence there exists an $N \ge 1$ such that $\phi_N(0, \bar{z}) > x^*$, and a $\delta > 0$ such that $\phi_N(0, \bar{z} - \delta) > x^*$. Hence

$$x^* < \phi_N(0, \bar{z} - \delta) < \phi_N(0, \bar{z}) \le \bar{x},$$

and so

$$P^N(0, (x^*, \bar{x}]) \ge P^N(0, (\phi_N(0, \bar{z} - \delta), \phi_N(0, \bar{z})]) \ge \alpha^N \delta^N.$$

This shows the first part of the result.

Mixing

To show the second part, consider the sequence $\{\phi_n(\bar{x}, 1)\}$, which satisfies

$$\phi_0(\bar{x}, 1) = \bar{x} \ge f(\bar{x}) > f(g(\bar{x})) = \phi_1(\bar{x}, 1).$$

Thus, by induction and $f$ and $g$ nondecreasing, the sequence is nonincreasing. Since it is bounded from below by 0, it converges to a value $\upsilon = f(g(\upsilon))$. And so

$$\begin{aligned}
U'(c(\upsilon)) &= \beta \int U'\big(c(f(g(\upsilon))z)\big) f'(g(\upsilon))\, z \,\mu(dz) \\
&= \beta f'(g(\upsilon)) \int U'(c(\upsilon z))\, z \,\mu(dz) \\
&< \beta f'(g(\upsilon))\, U'(c(\upsilon)) \int z \,\mu(dz) = \beta f'(g(\upsilon))\, U'(c(\upsilon))\, \hat{z},
\end{aligned}$$

where the inequality follows from $z \ge 1$ on $Z$, with $z > 1$ on a set of positive measure.

Mixing

Hence $1 < \beta f'(g(\upsilon))\, \hat{z}$, and by (3) and T4, $\upsilon < x^*$. As before, choose $N \ge 1$ such that $\phi_N(\bar{x}, 1) < x^*$, and a $\delta > 0$ such that $\phi_N(\bar{x}, 1 + \delta) < x^*$. Hence

$$x^* > \phi_N(\bar{x}, 1 + \delta) > \phi_N(\bar{x}, 1) > 0,$$

and so

$$P^N(\bar{x}, [0, x^*]) \ge P^N(\bar{x}, (\phi_N(\bar{x}, 1), \phi_N(\bar{x}, 1 + \delta)]) \ge \alpha^N \delta^N.$$