Lecture 5: The Bellman Equation

Florian Scheuer

1 Plan

- Prove properties of the Bellman equation (in particular, existence and uniqueness of a solution)
- Use this to prove properties of the solution
- Think about numerical approaches

2 Statement of the Problem

V(x) = sup_y F(x, y) + βV(y)   s.t.  y ∈ Γ(x)    (1)

Some terminology:

- The functional equation (1) is called a Bellman equation.
- x is called a state variable.
- G(x) = {y ∈ Γ(x) : V(x) = F(x, y) + βV(y)} is called a policy correspondence. It spells out all the values of y that attain the maximum on the RHS of (1). If G(x) is single-valued (i.e. there is a unique optimum), G is called a policy function.

Questions:

1. Does (1) have a solution?
2. Is it unique?
3. How do we find it?
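To make (1) concrete, here is a minimal numerical sketch (not part of the original notes) that instantiates the Bellman equation for a one-sector growth model on a grid. The functional forms (log utility, Cobb-Douglas production with full depreciation), the parameter values, and the names alpha, beta, k_grid and bellman_rhs are illustrative assumptions of this sketch; the later sketches reuse the same toy setup.

```python
import numpy as np

# Toy instance of (1): F(x, y) = log(x**alpha - y), Gamma(x) = (0, x**alpha).
# Functional forms, parameters and grid are assumptions made for illustration only.
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)            # discretized state space X

# F[i, j] = F(k_i, k_j); choices outside Gamma(k_i) are marked -inf
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

def bellman_rhs(V):
    """Right-hand side of (1) at every grid point, for a candidate V on the grid."""
    return np.max(F + beta * V[None, :], axis=1)

print(bellman_rhs(np.zeros(k_grid.size))[:3])   # sup over feasible y of F(x, y) + beta*0
```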

3 The Bellman Equation as a Fixed-Point Problem

Define the operator T by

T(f)(x) = sup_y F(x, y) + βf(y)   s.t.  y ∈ Γ(x)

V can be defined as a fixed point of T, i.e. a function V such that

T(V)(x) = V(x)   for all x

Does T have a fixed point? How do we find it?

Assumption 1. (Assumption 4.3 in SLP) X ⊆ R^n is convex. Γ : X → X is nonempty, compact-valued and continuous.

Assumption 2. (Assumption 4.4 in SLP) F : X × X → R is continuous and bounded, i.e. there exists F̄ such that |F(x, y)| < F̄ for all (x, y) with x ∈ X and y ∈ Γ(x).

What space is the operator T defined on? Define the metric space (S, ρ) by

S = { f : X → R : f continuous and bounded }    (2)

with the norm

‖f‖ = sup_{x ∈ X} |f(x)|

and thus the distance

ρ(f, g) = ‖f − g‖ = sup_{x ∈ X} |f(x) − g(x)|    (3)

Notice that T : S → S, i.e. if f is continuous and bounded, then T(f) is continuous and bounded. How do we know this?

1. T(f)(x) is continuous. Recall the Theorem of the Maximum:

Proposition 1. (Theorem of the Maximum) Let X ⊆ R^l and Y ⊆ R^m, let f : X × Y → R be continuous, and let Γ : X → Y be compact-valued and continuous. Then the function h : X → R defined by

h(x) = max_{y ∈ Γ(x)} f(x, y)

is continuous, and the correspondence G : X → Y defined by

G(x) = {y ∈ Γ(x) : f(x, y) = h(x)}

is non-empty, compact-valued and upper hemi-continuous.

Applied to this problem: f(x, y) becomes F(x, y) + βf(y), and h(x) becomes T(f)(x).

2. T(f)(x) is bounded. This is because F is bounded (Assumption 2) and f is bounded.

Now we want to show that T has a unique fixed point. Two steps:

1. Show that T is a contraction (Blackwell's sufficient conditions hold)
2. Appeal to the Contraction Mapping Theorem

1. Blackwell's sufficient conditions:

Proposition 2. (Blackwell's sufficient conditions) Let X ⊆ R^l and let B(X) be the space of bounded functions f : X → R, with the sup norm. Then T : B(X) → B(X) is a contraction with modulus β if:

a. [Monotonicity] f, g ∈ B(X) and f(x) ≤ g(x) for all x ∈ X imply T(f)(x) ≤ T(g)(x) for all x ∈ X;

b. [Discounting] There exists some β ∈ (0, 1) such that T(f + a)(x) ≤ T(f)(x) + βa for all f ∈ B(X), a ≥ 0, x ∈ X.

Proof. [You'll see this in class]

These conditions hold in our problem because:

(a) For any x and any y ∈ Γ(x), f ≤ g implies

F(x, y) + βf(y) ≤ F(x, y) + βg(y)
⇒ sup_{y ∈ Γ(x)} F(x, y) + βf(y) ≤ sup_{y ∈ Γ(x)} F(x, y) + βg(y)
⇒ T(f)(x) ≤ T(g)(x)

(b) T(f + a)(x) = sup_{y ∈ Γ(x)} F(x, y) + β[f(y) + a] = T(f)(x) + βa
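As a sanity check (again not part of the original notes), the sketch below verifies Blackwell's two conditions numerically on the toy growth-model grid from the earlier sketch; all functional forms, parameters and names remain illustrative assumptions.

```python
import numpy as np

# Numerical check of Blackwell's conditions on the toy growth-model grid used earlier
# (illustrative assumptions: log utility, Cobb-Douglas production, 200-point grid).
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

def T(v):
    """Bellman operator on the grid: (Tv)_i = max_j F[i, j] + beta * v_j."""
    return np.max(F + beta * v[None, :], axis=1)

rng = np.random.default_rng(0)
f = rng.normal(size=k_grid.size)
g = f + rng.uniform(0.0, 1.0, size=k_grid.size)   # g >= f pointwise

# (a) Monotonicity: f <= g implies T(f) <= T(g) at every grid point
print(np.all(T(f) <= T(g) + 1e-12))               # True

# (b) Discounting: T(f + a) = T(f) + beta*a for a constant a >= 0
a = 0.7
print(np.allclose(T(f + a), T(f) + beta * a))     # True

# Together these imply the contraction property in the sup norm:
print(np.max(np.abs(T(f) - T(g))) <= beta * np.max(np.abs(f - g)) + 1e-12)   # True
```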

2. Contraction Mapping Theorem

Proposition 3. (Contraction Mapping Theorem) If (S, ρ) is a complete metric space and T : S → S is a contraction mapping with modulus β, then:

a. T has exactly one fixed point V in S;
b. For any V_0 ∈ S, ρ(T^n V_0, V) ≤ β^n ρ(V_0, V), n = 0, 1, 2, ...

Proof. (a) Let V_n = T^n V_0. Since T is a contraction,

ρ(V_2, V_1) = ρ(TV_1, TV_0) ≤ βρ(V_1, V_0)
...
ρ(V_{n+1}, V_n) ≤ β^n ρ(V_1, V_0)

Therefore, for any m > n,

ρ(V_m, V_n) ≤ ρ(V_m, V_{m−1}) + ρ(V_{m−1}, V_{m−2}) + ... + ρ(V_{n+1}, V_n)
           ≤ [β^{m−1} + β^{m−2} + ... + β^n] ρ(V_1, V_0)
           ≤ (β^n / (1 − β)) ρ(V_1, V_0)

so {V_n} is a Cauchy sequence. Since S is complete, {V_n} must converge to some V ∈ S. We must now show that V is indeed a fixed point:

ρ(TV, V) ≤ ρ(TV, T^n V_0) + ρ(T^n V_0, V)
         ≤ βρ(V, T^{n−1} V_0) + ρ(T^n V_0, V)

but both of these terms go to zero as n → ∞, which implies that ρ(TV, V) = 0.

Finally, we need to show that there is no other fixed point. Suppose there is another fixed point V′. Then

ρ(V′, V) = ρ(TV′, TV) ≤ βρ(V′, V)

(the first step is because they are both fixed points). This implies ρ(V′, V) = 0.

(b) ρ(T^n V_0, V) = ρ(TT^{n−1} V_0, TV) ≤ βρ(T^{n−1} V_0, V) ≤ ... ≤ β^n ρ(V_0, V)
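The geometric bound in part (b) can be seen directly by iterating T on the toy grid from the earlier sketches. This is an illustrative sketch, not part of the original notes; V below is only an approximate fixed point, computed to a small sup-norm tolerance.

```python
import numpy as np

# Illustration of rho(T^n V0, V) <= beta^n rho(V0, V) on the toy growth-model grid
# (illustrative assumptions as in the earlier sketches).
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
T = lambda v: np.max(F + beta * v[None, :], axis=1)

# Approximate the fixed point V by iterating to a tight sup-norm tolerance
V = np.zeros_like(k_grid)
while np.max(np.abs(T(V) - V)) > 1e-10:
    V = T(V)

# Check the geometric bound along the first few iterates starting from V0 = 0
V_n = np.zeros_like(k_grid)
rho_0 = np.max(np.abs(V_n - V))                   # rho(V0, V)
for n in range(1, 6):
    V_n = T(V_n)
    print(n, np.max(np.abs(V_n - V)) <= beta ** n * rho_0 + 1e-8)   # True for each n
```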

The only missing step is to show that (S, ρ) defined by (2) and (3) indeed constitutes a complete metric space (SLP Theorem 3.1). Notice that if we instead used

ρ(f, g) = ∫ |f(x) − g(x)| dx

then (S, ρ) would NOT be a complete metric space (SLP Exercise 3.6.a).

Proposition 4. (SLP 4.6) If Assumptions 1 and 2 hold, then T has a unique fixed point in S, i.e. there is a unique continuous bounded function that solves (1).

Proof. From the Contraction Mapping Theorem, knowing that Blackwell's sufficient conditions are met.

Proposition 5. The policy correspondence G(x) = {y ∈ Γ(x) : V(x) = F(x, y) + βV(y)} is compact-valued and u.h.c.

Proof. From the Theorem of the Maximum.

Proposition 6. If Assumptions 1 and 2 hold, then V is the value function of the sequence problem.

Proof. V solves (1) and, because V is bounded, lim_{T→∞} β^T V(x_T) = 0 for all x ∈ Π(x_0) and all x_0 ∈ X, so the sufficient conditions for Theorem SLP 4.3 hold.

4 Proving Properties of V

Basic structure of these results: proving that the fixed point belongs to some subset S′ ⊆ S, i.e. the set of functions that meet certain criteria.

Proposition 7. If (S, ρ) is a complete metric space and S′ is a closed subset of S, then (S′, ρ) is a complete metric space.

Proof. SLP Exercise 3.6.b.

Proposition 8. (SLP Corollary 1, page 52) Let (S, ρ) be a complete metric space and T : S → S be a contraction mapping with fixed point V ∈ S.

1. If S′ is a closed subset of S and T(f) ∈ S′ for all f ∈ S′, then V ∈ S′.
2. If in addition S″ ⊆ S′ and T(f) ∈ S″ for all f ∈ S′, then V ∈ S″.

Proof.

1. Choose V_0 ∈ S′. Then T^n(V_0) is a sequence in S′ converging to V. Since S′ is closed, V ∈ S′.
2. Since V ∈ S′, then T(V) ∈ S″. But T(V) = V, so V ∈ S″.

Example:

- S: all continuous functions f : [a, b] → R
- S′: all increasing functions f : [a, b] → R
- S″: all strictly increasing functions f : [a, b] → R

Note: we require the subset S′ to be closed but not the sub-subset S″. In our example:

1. If T maps increasing functions into increasing functions, then the fixed point must be an increasing function.
2. If T maps increasing functions into strictly increasing functions, then the fixed point must be a strictly increasing function.

What the result does not say is that if T maps strictly increasing functions into strictly increasing functions, then the fixed point must be a strictly increasing function (because the set of strictly increasing functions is not closed).

4.1 V increasing

Assumption 3. (Assumption 4.5 in SLP) F(x, y) is strictly increasing in x.

Assumption 4. (Assumption 4.6 in SLP) x ≤ x′ implies Γ(x) ⊆ Γ(x′).

Do Assumptions 3 and 4 hold in the neoclassical model?

Proposition 9. (SLP 4.7) Suppose Assumptions 1-4 hold. Then V is strictly increasing.

Proof. Let x′ > x and f ∈ S. Then

T(f)(x) = max_{y ∈ Γ(x)} F(x, y) + βf(y)
        ≤ max_{y ∈ Γ(x′)} F(x, y) + βf(y)      (since Γ(x) ⊆ Γ(x′))
        < max_{y ∈ Γ(x′)} F(x′, y) + βf(y)      (since F is strictly increasing in x)
        = T(f)(x′)

This implies that T maps any continuous bounded function into a strictly increasing function. Proposition 8 gives the result.
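On the toy grid, one can check numerically that the computed fixed point inherits monotonicity, in line with Proposition 9. This is an illustrative sketch under the same assumed functional forms, not part of the original notes.

```python
import numpy as np

# Numerical check that the computed fixed point is increasing in the state
# (toy growth model: F strictly increasing in x, Gamma(x) expanding in x).
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
T = lambda v: np.max(F + beta * v[None, :], axis=1)

V = np.zeros_like(k_grid)
while np.max(np.abs(T(V) - V)) > 1e-10:
    V = T(V)

print(np.all(np.diff(V) > 0))    # True: V is strictly increasing along the grid
```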

4.2 V concave

Assumption 5. (Assumption 4.7 in SLP) F(x, y) is strictly concave.

Assumption 6. (Assumption 4.8 in SLP) Γ is convex.

Proposition 10. (SLP 4.8) Suppose Assumptions 1, 2, 5 and 6 hold. Then V is strictly concave and G is continuous and single-valued.

Proof. We want to show that T maps concave functions into strictly concave functions. Strict concavity of V then follows by Proposition 8. Let x_0 ≠ x_1 and x_θ = θx_0 + (1 − θ)x_1 for θ ∈ (0, 1). Let y_0 ∈ Γ(x_0) be such that

T(f)(x_0) = F(x_0, y_0) + βf(y_0)

and similarly y_1 ∈ Γ(x_1) be such that

T(f)(x_1) = F(x_1, y_1) + βf(y_1)

and define y_θ = θy_0 + (1 − θ)y_1. Then:

T(f)(x_θ) ≥ F(x_θ, y_θ) + βf(y_θ)      (Γ convex makes (x_θ, y_θ) feasible)
          > [θF(x_0, y_0) + (1 − θ)F(x_1, y_1)] + β[θf(y_0) + (1 − θ)f(y_1)]      (f concave and F strictly concave)
          = θ[F(x_0, y_0) + βf(y_0)] + (1 − θ)[F(x_1, y_1) + βf(y_1)]      (rearranging)
          = θT(f)(x_0) + (1 − θ)T(f)(x_1)      (by the choice of y_0 and y_1)

- G single-valued follows from strict concavity.
- G continuous follows from the Theorem of the Maximum.

5 Is V differentiable? (Benveniste & Scheinkman, 1979)

Cannot use the same proof technique:

- The space of differentiable functions is not closed.
- T does not necessarily map f into a differentiable function.

Instead, we rely on the following result.
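A quick numerical look at the toy grid is consistent with Proposition 10: the computed value function is concave up to discretization error, and the maximizer moves in small steps with the state. This is an illustrative sketch, not part of the original notes; discretizing the choice set means these properties only hold approximately.

```python
import numpy as np

# Numerical look at concavity of V and at the policy on the toy growth-model grid
# (illustrative assumptions as before; discretization makes these checks approximate).
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
T = lambda v: np.max(F + beta * v[None, :], axis=1)

V = np.zeros_like(k_grid)
while np.max(np.abs(T(V) - V)) > 1e-10:
    V = T(V)

# Second differences of V on the evenly spaced grid: negative up to discretization error
print(np.max(np.diff(V, 2)))

# The maximizing grid point at each state moves in small steps, consistent with a
# continuous, single-valued policy
g_idx = np.argmax(F + beta * V[None, :], axis=1)
print(np.max(np.abs(np.diff(g_idx))))
```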

Proposition 11. Suppose V : X → R is concave. Let x_0 be an interior point of X and D a neighborhood of x_0. Suppose there exists a concave function w : D → R such that:

1. w(x) ≤ V(x) for all x ∈ D
2. w(x_0) = V(x_0)
3. w is differentiable at x_0

Then V is differentiable at x_0.

Proof. Any subgradient p of V at x_0 must satisfy

p · (x − x_0) ≥ V(x) − V(x_0) ≥ w(x) − V(x_0) = w(x) − w(x_0)

- The first step is because a subgradient of a concave function is always (by definition) above the function.
- The second step is because w(x) ≤ V(x).
- The last step is because w(x_0) = V(x_0).

Since w is differentiable at x_0, p must equal its gradient there, so p is unique, which implies that V is differentiable at x_0. (Graph)

This result is useful to establish the following:

Proposition 12. (SLP 4.11) Suppose Assumptions 1, 2, 5 and 6 hold and F is continuously differentiable. Then V is differentiable and

V_i(x_0) = F_i(x_0, g(x_0))

Proof. Define

w(x) = F(x, g(x_0)) + βV(g(x_0))

w is concave, differentiable and satisfies

w(x) = F(x, g(x_0)) + βV(g(x_0)) ≤ max_{y ∈ Γ(x)} F(x, y) + βV(y) = V(x)

and w(x_0) = V(x_0). The result then follows from Proposition 11.
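Finally, the envelope formula in Proposition 12 can be checked numerically on the toy grid by comparing a finite-difference derivative of the computed V with F_x evaluated at the computed policy; with the assumed F(x, y) = log(x^α − y), F_x(x, y) = αx^(α−1)/(x^α − y). This is an illustrative sketch, not part of the original notes, and both sides carry small discretization error.

```python
import numpy as np

# Numerical check of V'(x0) = F_x(x0, g(x0)) on the toy growth-model grid
# (illustrative assumptions; both sides are only approximations).
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 400)
c = k_grid[:, None] ** alpha - k_grid[None, :]
F = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
T = lambda v: np.max(F + beta * v[None, :], axis=1)

V = np.zeros_like(k_grid)
while np.max(np.abs(T(V) - V)) > 1e-10:
    V = T(V)
g = k_grid[np.argmax(F + beta * V[None, :], axis=1)]          # policy on the grid

i = 200                                                       # an interior grid point x0
dV = (V[i + 1] - V[i - 1]) / (k_grid[i + 1] - k_grid[i - 1])  # centered finite difference
F_x = alpha * k_grid[i] ** (alpha - 1) / (k_grid[i] ** alpha - g[i])
print(dV, F_x)                                                # approximately equal
```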