Stochastic Problems

References: SLP chapters 9, 10, 11; L&S chapters 2 and 6

1 Examples

1.1 Neoclassical Growth Model with Stochastic Technology

- Production function $y = A f(k)$, where $A$ is random
- Let $A(s^t)$ be productivity in history $s^t$
- We could have $A$ be a function of $s_t$ only, i.e. $A(s_t)$, but allocations will still be functions of histories
- Sequence problem:

$$V(k_0, A_0) = \max_{c(s^t), k(s^t)} \sum_{t=0}^{\infty} \sum_{s^t} \beta^t \Pr(s^t)\, u(c(s^t))$$

s.t.

$$k(s^{t+1}) = A(s^t) f(k(s^t)) + (1-\delta)k(s^t) - c(s^t)$$
$$k(s^{t+1}) \geq 0, \qquad c(s^t) \geq 0, \qquad k_0, A_0 \text{ given}$$

1.2 A Model of Job Search

- Each period a worker begins by receiving a wage offer $w \in [0, \bar{w}]$
- If he accepts the offer, he gets paid $w$ for that period
- If he rejects the offer, he gets unemployment benefits $b$ instead

- The worker cannot borrow or save, so he consumes either $w$ or $b$, respectively
- If the worker wasn't working the previous period, then $w$ is drawn iid from a distribution $G$
  - $G$ could have mass at $w = 0$, which can be interpreted as not finding a job
- If the worker was working the previous period, then:
  - with iid probability $1 - \lambda$ he gets the same wage offer as the previous period (interpretation: he keeps his job)
  - with iid probability $\lambda$ he draws a new $w$ from $G$ (interpretation: he goes back to the unemployment pool)
- The exogenous state variables are given by:
  - $w$: the wage realization in case the worker draws from $G$
  - $\theta \in \{0, 1\}$: an indicator of whether the worker keeps his previous job, in case he had one
- Endogenous state variables:
  - Job status $J \in \{0, 1\}$: whether the worker was employed yesterday
  - $z \in [0, \bar{w}]$: the wage that you enter the period with (only relevant if $J = 1$)
- Decision: does the worker accept the current offer? $a \in \{0, 1\}$

Graph:

[Diagram: transition graph between "employed at wage $z$" and "unemployed". An employed worker with $\theta(s^t) = 1$ keeps wage $z$ if $a(s^t) = 1$; with $\theta(s^t) = 0$ he faces a new draw $w(s^t)$, taken if $a(s^t) = 1$; an unemployed worker takes $w(s^t)$ if $a(s^t) = 1$; rejecting ($a(s^t) = 0$) yields $b$ in every case.]

Sequence problem:

$$V(w_0, \theta_0, J_0, z_0) = \max_{a(s^t), J(s^t), c(s^t), z(s^t)} \sum_{t=0}^{\infty} \sum_{s^t} \beta^t \Pr(s^t)\, u(c(s^t))$$

s.t.

$$c(s^t) = \begin{cases} z(s^t) & \text{if } J(s^t) = 1,\ \theta(s^t) = 1,\ a(s^t) = 1 \\ w(s^t) & \text{if } [J(s^t) = 0 \text{ or } \theta(s^t) = 0] \text{ and } a(s^t) = 1 \\ b & \text{if } a(s^t) = 0 \end{cases}$$
$$J(s^{t+1}) = a(s^t)$$
$$z(s^{t+1}) = c(s^t)$$
$$w_0, \theta_0, J_0, z_0 \text{ given}$$

1.3 Consumption-Savings Under Uncertainty

- Household gets income $y(s^t)$ in history $s^t$
  - Again, we could simplify this to $y(s_t)$
- Can borrow up to a limit and save at the risk-free interest rate $R$
- Does not have access to insurance, i.e. there are no Arrow-Debreu securities he can buy
- Sequence problem:

$$V(A_0, y_0) = \max_{A(s^t), c(s^t)} \sum_{t=0}^{\infty} \sum_{s^t} \beta^t \Pr(s^t)\, u(c(s^t))$$

s.t.

$$A(s^{t+1}) = R A(s^t) + y(s^t) - c(s^t)$$
$$A(s^{t+1}) \geq -B$$
$$A_0 \text{ and } y_0 \text{ given}$$

2 Markov Processes

- Stochastic process: a sequence of random vectors $x_t \in X$
- Example: $x_t = \{A_t, k_t\}$, capital and productivity
- $x$ could be composed of exogenous variables, endogenous variables, or a combination thereof
- Markov property:
$$\Pr(x_{t+1} \mid x_t, x_{t-1}, \dots, x_{t-k}) = \Pr(x_{t+1} \mid x_t) \quad \text{for all } k$$
- The state today is a sufficient statistic for forecasting the state tomorrow
- This is not as restrictive as it seems
- To avoid going too much into measure theory, we are going to focus mostly on cases where $X$ is a finite set
  - In those cases, a Markov process can also be called a Markov chain
  - We'll briefly mention how some of the results generalize to a continuous state space
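To fix ideas, here is a minimal simulation sketch of a finite-state Markov process; the two-state transition matrix and all parameter values are illustrative assumptions, not taken from the notes. The point is that drawing $x_{t+1}$ requires only the current state $x_t$, which is exactly the Markov property.

```python
import numpy as np

# Minimal sketch: simulating a finite-state Markov process.
# The transition matrix P is an illustrative assumption;
# row i gives Pr(x_{t+1} = j | x_t = i).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def simulate(P, x0, T, seed=0):
    """Draw a path x_0, ..., x_T. Each step conditions only on the
    current state -- the history never enters."""
    rng = np.random.default_rng(seed)
    path = [x0]
    for _ in range(T):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return np.array(path)

print(simulate(P, x0=0, T=20))
```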

2.1 Markov chains

- Discrete state space $x \in \{x_1, \dots, x_N\}$
- The evolution of $x$ is governed by a matrix $P$, sometimes called a transition matrix, Markov matrix or stochastic matrix

Definition 1. A stochastic matrix is an $N \times N$ matrix $P$ such that:
$$\sum_{j=1}^{N} P_{ij} = 1 \quad \forall i$$

- $P_{i,j}$ = probability that tomorrow you'll be in state $j$ given that today you are in state $i$
- Suppose we represent the probability distribution over states in period $t$ by $\pi_t$, where $\pi_t$ is a row vector such that $\sum_{i=1}^{N} \pi_{ti} = 1$; then $\pi P$ represents the probability distribution over states in period $t+1$

Example 1:
$$\pi = \begin{pmatrix} 0 & 1 & 0 \end{pmatrix}$$
i.e. there are three possible states and we start at $x = x_2$ for sure.
$$P = \begin{pmatrix} p_{11} & p_{12} & p_{13} \\ 0.3 & 0.2 & 0.5 \\ p_{31} & p_{32} & p_{33} \end{pmatrix}$$
so
$$\pi P = \begin{pmatrix} 0.3 & 0.2 & 0.5 \end{pmatrix}$$
so the second row of $P$ tells us the probability distribution over states tomorrow given the state today.

Example 2:
$$\pi = \begin{pmatrix} 0.5 & 0.5 & 0 \end{pmatrix}, \qquad P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.2 & 0.5 \\ p_{31} & p_{32} & p_{33} \end{pmatrix}$$

πp = 0.4 0.3 0.3 so if in period t there s a fifty-fity chance of being is states x 1 or x 2, then πp gives us the probabilities of being in the three states in period t + 1 Notation note: Sometimes people use the opposite row/column convention for transition matrices: P i,j = Probability that tomorrow you ll be in state i given that today you are in state j For this you should write π as a column vector and π = P π Of course, this is all equivalent 2.2 The Markov Assumption Suppose I have the following process: x X = {Rain, Sun} 0.1 if x t = x t 1 = Rain Pr {x t+1 = Rain x t, x t 1 } 0.6 if x t = Rain and x t 1 = Sun 0.3 if x t = Sun This seems not to satisfy the Markov assumption because the weather today is not a sufficient statistic to forecast the weather tomorrow But a simple transformation restores that Markov property: X = {NewRain, OldRain, Sun} Transition Matrix P = 0 0.6 0.4 0 0.1 0.9 0.3 0 0.7 Many history-dependent processes can be rewritten this way as Markov Processes For now, we ll use the Markov assumption to have a justification for saying Pr x x and knowing that we can ignore the history of x Later we ll ask questions about Markov processes themselves: 6

Later we'll ask questions about Markov processes themselves:

- What do they look like in the long run?
- Under what conditions do they converge?
- What do we mean by "converge"?

3 Recursive Setup

Often, there is more than one way to set things up. The typical sequence of events is: stuff happens, make decisions, stuff happens, make decisions, ...

- Do I compute the value function at the time I'm about to make a decision, or when something is about to happen?
- What do I define as the state variable?

The results on the equivalence of the sequence problem and the functional equation extend to the stochastic case, subject to some technical caveats. See SLP Theorems 9.2 and 9.4.

3.1 Neoclassical Growth Model with Stochastic Technology

State variables:
- capital stock $k$ (endogenous)
- productivity $A$ (exogenous)

Bellman equation:
$$V(k, A) = \max_{c, k'} u(c) + \beta E[V(k', A') \mid A]$$
s.t.
$$k' = A f(k) + (1-\delta)k - c$$
$$k' \geq 0, \qquad c \geq 0$$

or simply
$$V(k, A) = \max_{k' \in [0,\, Af(k) + (1-\delta)k]} u(A f(k) + (1-\delta)k - k') + \beta E[V(k', A') \mid A]$$
where
$$E[V(k', A') \mid A] = \sum_{A'} \Pr(A' \mid A)\, V(k', A')$$
for a discrete state space for $A$.

FOC for $c$:
$$u'(c) - \lambda = 0$$
FOC for $k'$:
$$\beta \sum_{A'} \Pr(A' \mid A)\, V_k(k', A') - \lambda = 0$$
Envelope condition:
$$V_k(k, A) = \lambda\, [A f'(k) + 1 - \delta]$$
and therefore
$$V_k(k', A') = \lambda'\, [A' f'(k') + 1 - \delta]$$

Putting these together:
$$u'(c(k, A)) = \beta \sum_{A'} \Pr(A' \mid A)\, u'(c(k', A'))\, [A' f'(k') + 1 - \delta]$$
which is the Euler equation that we encountered at the beginning of the class:
$$u'(c_t) = \beta E[R' u'(c_{t+1})]$$
where $R' = A' f'(k') + 1 - \delta$ is the stochastic return on a unit of capital.

The solution defines a system of stochastic difference equations in $k$, $c$ and $A$:
$$u'(c_t) = \beta E[(A_{t+1} f'(k_{t+1}) + 1 - \delta)\, u'(c_{t+1})]$$
$$k_{t+1} = A_t f(k_t) + (1-\delta)k_t - c_t$$
$$A_{t+1} \sim \text{exogenous Markov process}$$

This is a Markov process for the vector $(k, c, A)$.
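To make the recursive formulation operational, here is a value function iteration sketch with $k$ restricted to a grid (anticipating the caveat below). The functional forms, log utility with $f(k) = k^\alpha$, the two-state productivity process, and all parameter values are illustrative assumptions, not taken from the notes.

```python
import numpy as np

# Value function iteration sketch for V(k, A), with k on a grid.
# Log utility, f(k) = k^alpha, and a two-state chain for A are assumptions.
alpha, beta, delta = 0.3, 0.95, 0.1
A_vals = np.array([0.9, 1.1])
Pr_A = np.array([[0.8, 0.2],     # Pr_A[i, j] = Pr(A' = A_j | A = A_i)
                 [0.2, 0.8]])

k_grid = np.linspace(0.5, 8.0, 200)
V = np.zeros((len(k_grid), len(A_vals)))      # V[i, j] = V(k_i, A_j)

for _ in range(2000):
    # resources[i, j] = A_j f(k_i) + (1 - delta) k_i
    resources = A_vals * k_grid[:, None] ** alpha + (1 - delta) * k_grid[:, None]
    EV = V @ Pr_A.T                           # EV[m, j] = E[V(k'_m, A') | A_j]
    c = resources[:, :, None] - k_grid[None, None, :]    # c over (k, A, k')
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = (u + beta * EV.T[None, :, :]).max(axis=2)    # maximize over k'
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```

The maximizing $k'$ at each $(k, A)$ pair gives the policy function; restricting $k$ to the grid is precisely what turns the resulting $(k, A)$ process into a finite-state Markov chain.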

Unfortunately, unless we restrict $k$ and $c$ to grids, this Markov process lives in a continuous state space, even if $A$ lives in a discrete state space.

Questions:
- How does this system behave over time?
- Does it tend to a steady state? In what sense?
- Are there values that it will never reach / will reach repeatedly?

3.2 Job search model

A not-so-useful way to do it. State variables:
- Exogenous: $w$, the wage drawn from $G$ this period
- Exogenous: $\theta$, whether the worker keeps his job this period, if he had one
- Endogenous: $J$, the worker's incoming job status
- Endogenous: $z$, the worker's incoming wage if employed

Bellman equation:
$$V(w, \theta, J, z) = \max_{a, J', z', c} u(c) + \beta E[V(w', \theta', J', z')]$$
s.t.
$$c = \begin{cases} z & \text{if } J = 1,\ \theta = 1,\ a = 1 \\ w & \text{if } [J = 0 \text{ or } \theta = 0] \text{ and } a = 1 \\ b & \text{if } a = 0 \end{cases}$$
$$J' = a$$
$$z' = c$$
where $w' \sim G$ and $\Pr(\theta' = 1) = 1 - \lambda$.

Alternatively, notice that at the point the worker gets a wage offer, that's the only thing that matters to him. Things that don't matter:

- Whether the wage was offered by his old job or a new one
- Whether he is drawing from $G$ because he got fired or because he was unemployed

Define $x$ as today's job offer:
$$V(x) = \max_{a} u(xa + b(1-a)) + \beta E[V(x') \mid x, a]$$
where
$$x' \mid x, a = \begin{cases} x & \text{with probability } 1 - \lambda \text{ if } a = 1 \\ \text{drawn from } G & \text{with probability } \lambda \text{ if } a = 1 \\ \text{drawn from } G & \text{if } a = 0 \end{cases}$$

Note: if we didn't assume that new wage offers were iid, then $x$ would not be a sufficient state variable.

Alternatively, compute separately the value function for a worker who has a job (but might get fired or quit) and for a worker who doesn't have a job. Let $z$ be the incoming wage for a worker who has a job; $V^J(z)$ is the value of entering with a job that pays $z$, and $V^U$ is the value of entering the period unemployed:

$$V^J(z) = \lambda V^U + (1-\lambda) \max_{a} \left[ a(z + \beta V^J(z)) + (1-a)(b + \beta V^U) \right]$$
$$V^U = \max_{a(w)} E\left[ a(w)(w + \beta V^J(w)) + (1-a(w))(b + \beta V^U) \right] = \max_{a(w)} \int \left[ a(w)(w + \beta V^J(w)) + (1-a(w))(b + \beta V^U) \right] dG(w)$$

Let's work with this last formulation.

- Unemployed workers have a reservation wage $w^*$ which satisfies
$$w^* + \beta V^J(w^*) = b + \beta V^U$$
- Workers never quit: if they chose to take the job when unemployed, the wage $z$ they currently have satisfied $z \geq w^*$, which is the same condition for not quitting. Hence
$$V^J(z) = \lambda V^U + (1-\lambda)\left[ z + \beta V^J(z) \right] \tag{1}$$

Simplify $V^U$ to
$$V^U = \int \max\left\{ w + \beta V^J(w),\ b + \beta V^U \right\} dG(w) \tag{2}$$

Special case: $\lambda = 0$ (you won't get fired). Then (1) simplifies to
$$V^J(z) = z + \beta V^J(z) \implies V^J(z) = \frac{z}{1-\beta}$$
Replace in (2):
$$V^U = \int \max\left\{ \frac{w}{1-\beta},\ b + \beta V^U \right\} dG(w) = G(w^*)\left[ b + \beta V^U \right] + \int_{w^*}^{\bar{w}} \frac{w}{1-\beta}\, dG(w) = \frac{1}{1 - \beta G(w^*)}\left[ G(w^*)\, b + \int_{w^*}^{\bar{w}} \frac{w}{1-\beta}\, dG(w) \right]$$
with the reservation wage defined by
$$\frac{w^*}{1-\beta} = b + \beta V^U$$

Exercise 1.
1. Show that the reservation wage increases if $G$ becomes more dispersed in the SOSD sense.
2. Show that $w^*$ is decreasing in $\lambda$.
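As a numerical companion to the $\lambda = 0$ case above, here is a sketch that iterates on equation (2) with $V^J(w) = w/(1-\beta)$ substituted in, then backs out $w^*$ from $w^*/(1-\beta) = b + \beta V^U$. The uniform wage distribution on $[0, 1]$ and the values of $\beta$ and $b$ are illustrative assumptions.

```python
import numpy as np

# Reservation wage in the lambda = 0 case: iterate on equation (2)
# using V^J(w) = w / (1 - beta), then recover w*.
# beta, b and the uniform G on [0, 1] are illustrative assumptions.
beta, b = 0.95, 0.5
w_grid = np.linspace(0.0, 1.0, 1001)           # support [0, w_bar], w_bar = 1
g = np.full(w_grid.size, 1.0 / w_grid.size)    # discretized uniform G

V_U = 0.0
for _ in range(10_000):
    V_U_new = np.sum(g * np.maximum(w_grid / (1 - beta), b + beta * V_U))
    if abs(V_U_new - V_U) < 1e-12:
        break
    V_U = V_U_new

w_star = (1 - beta) * (b + beta * V_U)         # w*/(1-beta) = b + beta*V_U
print(f"reservation wage w* = {w_star:.4f}")
```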

3.3 Consumption-Savings

State variables:
- $A$: level of assets
- $y$: today's income

$$V(A, y) = \max_{c, A'} u(c) + \beta \sum_{y'} \Pr(y' \mid y)\, V(A', y') \tag{3}$$
s.t.
$$A' = y - c + RA$$
$$A' \geq -b$$

Alternatively, let $x \equiv RA + y$ ("cash on hand"):
$$V(x, y) = \max_{c, x'(y')} u(c) + \beta \sum_{y'} \Pr(y' \mid y)\, V(x'(y'), y') \tag{4}$$
s.t.
$$x'(y') = R[x - c] + y'$$
$$c \leq x + b$$

This relabeling is especially useful if $y$ is iid: it reduces the problem to a single state variable, because with $(x, y)$ as the state variables, $y$ only matters through $\Pr(y' \mid y)$.

FOC:
$$u'(c) - \sum_{y'} \lambda(y') R - \mu = 0$$
$$\beta \Pr(y' \mid y)\, V_x(x'(y'), y') - \lambda(y') = 0$$
so
$$u'(c) \geq \beta R \sum_{y'} \Pr(y' \mid y)\, V_x(x'(y'), y'), \quad \text{with equality if } c < x + b$$

Envelope condition:
$$V_x(x, y) = u'(c(x, y))$$

Euler equation:
$$u'(c) \geq \beta R \sum_{y'} \Pr(y' \mid y)\, u'(c(x'(y'), y')), \quad \text{with equality if } c < x + b$$
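Here is a sketch of solving the cash-on-hand formulation (4) by value function iteration when $y$ is iid, so that $x$ is the single state. Log utility, the two-point income distribution, the grid bounds and all parameter values are illustrative assumptions; the borrowing limit is set to $b = 0$, so $c \leq x + b$ becomes $s = x - c \geq 0$.

```python
import numpy as np

# VFI sketch for the cash-on-hand problem (4) with iid income:
# V(x) = max_{0 <= s <= x} u(x - s) + beta * E_y' V(R s + y').
# All parameter values and the income process are illustrative assumptions.
beta, R = 0.95, 1.02
y_vals = np.array([0.7, 1.3])                  # iid two-point income
y_prob = np.array([0.5, 0.5])

x_grid = np.linspace(0.1, 10.0, 300)           # cash on hand
s_grid = np.linspace(0.0, x_grid[-1], 300)     # savings s = x - c >= 0 (b = 0)
V = np.zeros(x_grid.size)                      # iid income: V depends on x only

for _ in range(2000):
    # EV[m] = sum_y' Pr(y') V(R s_m + y'), interpolating V off the grid
    EV = sum(p * np.interp(R * s_grid + y, x_grid, V)
             for y, p in zip(y_vals, y_prob))
    c = x_grid[:, None] - s_grid[None, :]      # consumption for each (x, s)
    val = np.where(c > 0, np.log(np.maximum(c, 1e-12)) + beta * EV, -np.inf)
    V_new = val.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```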

4 Convergence of Markov Processes

Starting from $\pi_0$ (which could be degenerate), a Markov chain will evolve according to
$$\pi_t = \pi_0 P^t$$
$\pi$ belongs to the set
$$\Delta^N = \left\{ \pi \in \mathbb{R}^N_+ : \sum_{i=1}^{N} \pi_i = 1 \right\}$$

Define the following norm on $\Delta^N$:
$$\|\pi\| = \sum_{i=1}^{N} |\pi_i|$$

We say a Markov chain converges to $\pi^*$ if
$$\lim_{t \to \infty} \|\pi_t - \pi^*\| = 0$$
In this case we call $\pi^*$ an invariant distribution, and the support of $\pi^*$ an ergodic set.

Define the operator $T : \Delta^N \to \Delta^N$ as $T\pi = \pi P$.

Proposition 1 (SLP Lemma 11.3). Let $\epsilon_j = \min_i P_{ij}$ and $\epsilon = \sum_{j=1}^{N} \epsilon_j$. If $\epsilon > 0$, then $T$ is a contraction of modulus $1 - \epsilon$.

Example: here
$$P = \begin{pmatrix} 0.2 & 0.5 & 0.3 \\ 0 & 0.3 & 0.7 \\ 0.7 & 0.1 & 0.2 \end{pmatrix}, \qquad \epsilon_j = \begin{pmatrix} 0 & 0.1 & 0.2 \end{pmatrix}, \qquad \epsilon = 0.3$$
Interpretation: there is at least one value of $x$ that you reach with positive probability from any other state.

Proof. The distance between $T\pi$ and $T\mu$ is:
$$\rho(T\pi, T\mu) = \|T\pi - T\mu\| = \|(\pi - \mu)P\| = \sum_j \left| \sum_i (\pi_i - \mu_i) P_{ij} \right|$$

Example: take
$$\pi = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}, \qquad \mu = \begin{pmatrix} 0 & 1 & 0 \end{pmatrix}, \qquad \pi - \mu = \begin{pmatrix} 1 & -1 & 0 \end{pmatrix}$$
Then
$$\left( \sum_i (\pi_i - \mu_i) P_{ij} \right)_j = \begin{pmatrix} 0.2 - 0 & 0.5 - 0.3 & 0.3 - 0.7 \end{pmatrix}$$
$$\sum_j \left| \sum_i (\pi_i - \mu_i) P_{ij} \right| = 0.2 + 0.2 + 0.4 = 0.8 \leq (1 - 0.3) \cdot 2$$

Continuing the proof, use the facts that $\sum_i (\pi_i - \mu_i) = 0$ (so subtracting $\epsilon_j \sum_i (\pi_i - \mu_i) = 0$ inside each term changes nothing) and that $P_{ij} - \epsilon_j \geq 0$:
$$\sum_j \left| \sum_i (\pi_i - \mu_i) P_{ij} \right| = \sum_j \left| \sum_i (\pi_i - \mu_i)(P_{ij} - \epsilon_j) \right| \leq \sum_j \sum_i |\pi_i - \mu_i| (P_{ij} - \epsilon_j) = \sum_i |\pi_i - \mu_i| \sum_j (P_{ij} - \epsilon_j) = \sum_i |\pi_i - \mu_i| (1 - \epsilon) = \rho(\pi, \mu)(1 - \epsilon)$$

Proposition 1 immediately implies:
- There is a unique fixed point, i.e. a unique solution to $\pi = T\pi$; denote it $\pi^*$
- Starting from any $\pi_0$, $\lim_{n \to \infty} T^n \pi_0 = \pi^*$
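A quick numerical sanity check of Proposition 1 on the example matrix above: random pairs of distributions should satisfy $\|T\pi - T\mu\| \leq (1-\epsilon)\|\pi - \mu\|$ with $\epsilon = 0.3$. This is only a spot check on random draws, not a proof.

```python
import numpy as np

# Spot-checking the contraction bound ||(pi - mu) P|| <= (1 - eps) ||pi - mu||
# for the example matrix P above.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.3, 0.7],
              [0.7, 0.1, 0.2]])
eps = P.min(axis=0).sum()                      # column minima: 0 + 0.1 + 0.2

rng = np.random.default_rng(0)
for _ in range(1000):
    pi = rng.dirichlet(np.ones(3))             # random points in the simplex
    mu = rng.dirichlet(np.ones(3))
    lhs = np.abs((pi - mu) @ P).sum()
    rhs = (1 - eps) * np.abs(pi - mu).sum()
    assert lhs <= rhs + 1e-12
print(f"bound with modulus 1 - eps = {1 - eps:.1f} held on all draws")
```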

You can find $\pi^*$ by solving
$$\pi^* = \pi^* P \iff \pi^*(I - P) = 0$$
i.e. $\pi^*$ is the (left) eigenvector corresponding to the eigenvalue 1 of the matrix $P$, scaled so that $\sum_{i=1}^{N} \pi^*_i = 1$.

Generalizations:

- If the condition $\epsilon > 0$ holds for the matrix $P^n$, then $T^n$ is a contraction mapping, so the limiting condition holds too
  - Interpretation: there is a state that you reach with positive probability in $n$ steps starting from anywhere
- For state spaces that are not finite, the analogous requirement is Condition M:

Definition 2. A Markov process satisfies Condition M if there exist $\epsilon > 0$ and $n \geq 1$ such that for any set $A \subseteq X$, either:
1. $\Pr(x_{t+n} \in A \mid x_t) \geq \epsilon$ for all $x_t \in X$, or
2. $\Pr(x_{t+n} \in A^C \mid x_t) \geq \epsilon$ for all $x_t \in X$

Example: suppose there is a recurrent element $x_R \in X$ such that $\Pr(x_{t+n} = x_R \mid x_t) \geq \epsilon$ for all $x_t$. Then Condition M is satisfied, because for any $A$, either $x_R \in A$ or $x_R \in A^C$.

Proposition 2 (SLP Lemma 11.11). If a Markov process satisfies Condition M, then $T^n$ is a contraction of modulus $1 - \epsilon$.
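To close, here is a sketch computing the invariant distribution of the example matrix both ways: as the left eigenvector of $P$ for eigenvalue 1 (rescaled to sum to one), and by iterating $T$ from a degenerate $\pi_0$; the two answers should coincide.

```python
import numpy as np

# Invariant distribution pi* of the example P, computed two ways.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.3, 0.7],
              [0.7, 0.1, 0.2]])

# (1) Left eigenvector: pi (I - P) = 0  <=>  P' pi' = pi'
vals, vecs = np.linalg.eig(P.T)
pi_star = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi_star /= pi_star.sum()

# (2) Iteration: pi_t = pi_0 P^t from a degenerate starting distribution
pi = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pi = pi @ P

print(pi_star)
print(pi)
```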