Lecture Notes - Dynamic Moral Hazard

Simon Board and Moritz Meyer-ter-Vehn
October 27, 2011

1 Marginal Cost of Providing Utility is a Martingale (Rogerson 85)

1.1 Setup

- Two periods, no discounting
- Actions $a_t \in A$
- Output $q_t$
- Time-separable and stationary:
  - Production $q_t \sim f(q_t \mid a_t)$: no technological link
  - Agent utility $\sum_t (u(w_t) - g(a_t))$: no preference link
  - Principal payoff $R = \sum_t (q_t - w_t)$

1.2 Principal's Problem

Let $a^* = (a_1^*, a_2^*(q_1))$ be the agent's action plan. The principal chooses $a^*, w_1(q_1), w_2(q_1, q_2)$ to maximize

  $E[q_1 - w_1(q_1) + q_2 - w_2(q_1, q_2) \mid a^*]$

subject to

  $E[u(w_1(q_1)) - g(a_1^*) + u(w_2(q_1, q_2)) - g(a_2^*) \mid a^*] \ge E[\dots \mid \tilde a]$   (IC)
  $E[u(w_1(q_1)) - g(a_1^*) + u(w_2(q_1, q_2)) - g(a_2^*) \mid a^*] \ge 2\bar u$   (IR)

Note: the agent can't save or borrow.

1.3 Result

Proposition 1. The optimal long-term contract satisfies

  $\frac{1}{u'(w_1(q_1))} = E\left[\frac{1}{u'(w_2(q_1, q_2))} \,\Big|\, q_1, a^*\right]$   (*)

for all $q_1$.

Idea:
- The LHS is the marginal cost of providing utility today
- The RHS is the expected marginal cost of providing utility tomorrow
- The agent is indifferent between receiving utility today or tomorrow
- If LHS < RHS, the principal could profit by front-loading utility

Proof. Let $w_1(q_1), w_2(q_1, q_2)$ be the optimal contract. Fix $q_1$ and shift $\varepsilon \gtrless 0$ utils to period 1:

  $u(\hat w_1(q_1)) = u(w_1(q_1)) + \varepsilon$
  $u(\hat w_2(q_1, q_2)) = u(w_2(q_1, q_2)) - \varepsilon$

This does not affect the agent's IC or IR constraint. By a first-order Taylor approximation,

  $\hat w_1(q_1) = w_1(q_1) + \frac{\varepsilon}{u'(w_1(q_1))}$
  $\hat w_2(q_1, q_2) = w_2(q_1, q_2) - \frac{\varepsilon}{u'(w_2(q_1, q_2))} \quad \forall q_2$

Thus, the effect on revenue is

  $\hat R - R = -\varepsilon \left( \frac{1}{u'(w_1(q_1))} - E\left[ \frac{1}{u'(w_2(q_1, q_2))} \,\Big|\, q_1, a^* \right] \right)$

As $\varepsilon$ can be chosen positive or negative, optimality requires that the term in parentheses vanishes.
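A minimal numerical sketch of this perturbation argument, assuming log utility (so $1/u'(w) = w$) and a hypothetical two-point wage schedule; neither is from the notes:

```python
import math

# Perturbation behind (*) under log utility: shift eps utils into period 1
# and measure the change in the principal's expected wage bill.
def cost_change(w1, w2, probs, eps):
    """Expected wage-cost change from shifting eps utils into period 1."""
    new_w1 = math.exp(math.log(w1) + eps)               # u(w1_hat) = u(w1) + eps
    new_w2 = [math.exp(math.log(w) - eps) for w in w2]  # u(w2_hat) = u(w2) - eps
    old = w1 + sum(p * w for p, w in zip(probs, w2))
    new = new_w1 + sum(p * w for p, w in zip(probs, new_w2))
    return new - old

# A schedule violating (*): 1/u'(w1) = 1 but E[1/u'(w2)] = 1.25.
w1, w2, probs = 1.0, [0.5, 2.0], [0.5, 0.5]
eps = 1e-6
deriv = cost_change(w1, w2, probs, eps) / eps
gap = w1 - sum(p * w for p, w in zip(probs, w2))        # 1/u'(w1) - E[1/u'(w2)]
print(deriv, gap)   # the cost derivative matches the gap in (*)
```

Since the derivative is negative here, front-loading utility ($\varepsilon > 0$) lowers the wage bill, exactly as the proposition's "if LHS < RHS" remark suggests.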

1.4 Discussion

- The optimal long-term contract has memory: unless $w_1$ is independent of $q_1$, the LHS of (*) depends on $q_1$; so does the RHS, and in particular $w_2 \ne w_2(q_2)$.
- The optimal long-term contract is complex.
- The agent would like to save, not borrow. Apply Jensen's inequality to (*): $f(x) = 1/x$ is a convex function, thus $f(E[x]) \le E[f(x)]$ and so
  $u'(w_1(q_1)) = \frac{1}{E[1/u'(w_2(q_1, q_2))]} \le E[u'(w_2(q_1, q_2))]$
  Intuition?

2 Asymptotic Efficiency

2.1 Setup

- $\infty$ periods, common discount factor $\delta$
- Output $q_t \in [\underline q, \bar q]$
- Actions $a_t \in A$
- First-best action $a^*$ and quantity $q^* = E[q \mid a^*]$
- Time-separable and stationary

2.2 Result

Proposition 2. If everybody is patient, the first-best is almost achievable: $\forall \varepsilon, \exists \bar\delta, \forall \delta \ge \bar\delta$ there is a contract generating agent utility greater than $u(q^*) - g(a^*) - \varepsilon$ (and yielding at least 0 to the principal).

- The statement assumes that the agent proposes the contract and has to satisfy the principal's IR constraint
- If the principal proposes, he can also get the first-best

Idea:
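The Jensen step can be checked numerically; log utility and a two-point distribution are illustrative assumptions:

```python
# If (*) holds, i.e. 1/u'(w1) = E[1/u'(w2)], then under log utility w1 = E[w2]
# and Jensen gives u'(w1) <= E[u'(w2)]: the agent would like to save.
w2, probs = [0.5, 2.0], [0.5, 0.5]
w1 = sum(p * w for p, w in zip(probs, w2))            # condition (*) for u = log
mu_today = 1.0 / w1                                    # u'(w1) = 1/w1
mu_tomorrow = sum(p / w for p, w in zip(probs, w2))    # E[u'(w2)] = E[1/w2]
print(mu_today, mu_tomorrow)   # 0.8 < 1.25
```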

- Make the agent the residual claimant
- He can build up savings and then smooth his consumption

Proof. Let $w_t$ denote the agent's wealth.

- If wealth is high, $w_t \ge (\bar q - \underline q)/\delta$, consume $c_t = q^* + (1 - \delta) w_t - \tilde\varepsilon$: earnings $\approx q^*$, interest $(1 - \delta) w_t$, save a little $\tilde\varepsilon \in (0, (1 - \delta)(\bar q - \underline q)]$
- If wealth is low, $w_t \le (\bar q - \underline q)/\delta$, consume $c_t = \underline q + (1 - \delta) w_t$: minimal earnings $\underline q$, interest $(1 - \delta) w_t$

This is pretty arbitrary. The point is that wealth grows:

  $E[w_{t+1}] - w_t \ge \min\{(\bar q - \underline q)/\delta,\ \tilde\varepsilon\} > 0$

Thus, wealth is a submartingale with bounded increments, and the probability that it exceeds any threshold $x$, e.g. $x = (\bar q - \underline q)/\delta$, after time $t$ approaches 1 as $t \to \infty$:

  $\lim_{t \to \infty} p_t = 1$, where $p_t = \Pr(w_s \ge x \text{ for all } s \ge t)$

Omitting non-negative terms gives a lower bound on the agent's utility:

  $(1 - \delta) \sum_{s=0}^\infty \delta^s (u(c_s) - g(a^*)) \ge \delta^t p_t\,(u(q^*) - g(a^*)) \ge u(q^*) - g(a^*) - \varepsilon$

when we choose $\delta$ and $p_t$ close enough to 1.
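The submartingale step can be illustrated by simulation; the increment distribution, horizon, and threshold below are all assumed numbers, chosen only to show bounded increments with strictly positive drift:

```python
import random

# A bounded-increment process with positive drift ends up above any fixed
# threshold with probability close to one, as in the proof of Proposition 2.
random.seed(0)

def terminal_wealth(T=2000):
    w = 0.0
    for _ in range(T):
        w += random.choice([-1.0, -1.0, 0.0, 2.0, 2.0])  # bounded, mean +0.4
    return w

threshold = 100.0
paths = [terminal_wealth() for _ in range(500)]
share_above = sum(w >= threshold for w in paths) / len(paths)
print(share_above)   # close to 1
```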

3 Short-term Contracts

3.1 Setup

- 2 periods, no discounting
- Time-separable technology and preferences
- The agent can save, but the principal can monitor this
  - Funny assumption, but necessary for tractability and the result
  - Maybe reasonable in the third world, when saving is through the landlord
- Outside utility $\bar u = u(\underline q) - g(\underline a)$

3.2 Principal's Problem

The principal chooses $a^*, s(q_1), w_1(q_1), w_2(q_1, q_2)$ to maximize

  $E[q_1 - w_1(q_1) + q_2 - w_2(q_1, q_2) \mid a^*]$

subject to

  $E[u(w_1(q_1) - s(q_1)) - g(a_1^*) + u(w_2(q_1, q_2) + s(q_1)) - g(a_2^*) \mid a^*] \ge E[\dots \mid (\tilde a, \tilde s)]$   (IC)
  $E[u(w_1(q_1) - s(q_1)) - g(a_1^*) + u(w_2(q_1, q_2) + s(q_1)) - g(a_2^*) \mid a^*] \ge 2\bar u$   (IR)

We can choose $s(q_1) = 0$ because the principal can save for the agent by adjusting $w$.

3.3 Renegotiation and Spot Contracts

After period 1, the principal could offer the agent to change the contract. Optimally, he offers a contract $\hat a_2, \hat w_2(q_2)$ to maximize

  $E[q_2 - \hat w_2(q_2) \mid \hat a_2]$   (Seq-Eff)

subject to

  $E[u(\hat w_2(q_2)) - g(\hat a_2) \mid \hat a_2] \ge E[\dots \mid \tilde a]$   (IC')
  $E[u(\hat w_2(q_2)) - g(\hat a_2) \mid \hat a_2] \ge E[u(w_2(q_1, q_2)) - g(a_2^*) \mid a_2^*]$   (IR')

where the last line captures the idea that the agent can insist on the original long-term contract. Of course, $\hat a_2, \hat w_2(q_2)$ implicitly depend on $q_1$ through (IR').

Call a contract sequentially efficient, or renegotiation-proof, if there is no such mutually beneficial deviation after any realization of $q_1$, and thus $\hat a_2 = a_2^*$ and $\hat w_2(q_2) = w_2^*(q_1, q_2)$.

The long-term contract $a^*, w^*$ can be implemented via spot contracts if there is a savings strategy $s(q_1)$ for the agent such that the second-period spot contract $a_2, w_2(q_2)$ that maximizes

  $E[q_2 - w_2(q_2) \mid a_2]$   (Spot)

subject to

  $E[u(w_2(q_2) + s(q_1)) - g(a_2) \mid a_2] \ge E[\dots \mid \tilde a]$   (IC-spot)
  $E[u(w_2(q_2) + s(q_1)) - g(a_2) \mid a_2] \ge u(\underline q + s(q_1)) - g(a_2)$   (IR-spot)

yields the same actions $a_2 = a_2^*$ and wages $w_2(q_2) + s(q_1) = w_2^*(q_1, q_2)$ as the original contract.

3.4 Result

Proposition 3.
1. The optimal long-term contract is renegotiation-proof.
2. A renegotiation-proof contract can be implemented by spot contracts.

Proof sketch:
- If there were a profitable deviation after $q_1$, there would be a weakly more profitable deviation where (IR') binds. The original contract could then be improved by substituting the deviation into the original contract. This proves 1.
- For 2, set $s(q_1)$ so that $u(\underline q + s(q_1)) = E[u(w_2(q_1, q_2)) \mid a_2^*]$. Then if $\hat a_2, \hat w_2(q_2)$ solves (Seq-Eff), $a_2 = \hat a_2$, $w_2(q_2) = \hat w_2(q_2) - s(q_1)$ solves (Spot).

3.5 Discussion

Rationale for short-term contracting:
- Separates incentive provision from consumption smoothing
- Yields the recursive structure of the optimal long-term contract: the memory of the contract can be captured by one state variable, savings
- Generalizes to $T$ periods, and to preferences where $a_1$ does not affect the trade-off between $a_2$ and $c_2$
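The savings level in part 2 of the proof is easy to compute explicitly; the log utility, minimal output, and wage schedule below are hypothetical values, not from the notes:

```python
import math

# Construct s(q1) with u(q_low + s(q1)) = E[u(w2(q1, q2)) | a2*], for u = log:
# s(q1) = exp(E[log w2]) - q_low.
q_low = 0.2                      # minimal output (assumed value)
w2 = [1.0, 3.0]                  # hypothetical second-period wages w2(q1, .)
probs = [0.5, 0.5]

target = sum(p * math.log(w) for p, w in zip(probs, w2))   # E[u(w2)]
s = math.exp(target) - q_low                               # invert u = log
print(s)
```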

4 Optimal Linear Contracts (Holmstrom, Milgrom 87)

4.1 Setup

- 2 periods, no discounting
- Time-separable technology and preferences
- "Funny" utility function
  $u(w_1, w_2, a_1, a_2) = -\exp(-r(w_1 + w_2 - g(a_1) - g(a_2)))$
  - Consumption at the end (so no role for savings)
  - Monetary costs of effort
  - CARA: no wealth effects
- Outside wage $\bar w$ per period
- Optimal static contract $a^s, w^s$

4.2 Result

Proposition 4.
1. The optimal long-term contract repeats the optimal static contract: $w_1(q_1) = w^s(q_1)$ and $w_2(q_1, q_2) = w^s(q_2)$.
2. If $q$ is binary, or Brownian, the optimal contract is linear in output: $w(q_1, q_2) = \alpha + \beta(q_1 + q_2)$.

Idea: CARA makes everything separable.

Proof. The principal chooses $a^*, w$ to maximize

  $E[q_1 - w_1(q_1) + q_2 - w_2(q_1, q_2) \mid a^*]$

subject to

  $E[-\exp(-r(w_1(q_1) + w_2(q_1, q_2) - g(a_1^*) - g(a_2^*))) \mid a^*] \ge E[-\exp(\dots) \mid \tilde a]$   (IC)
  $E[-\exp(-r(w_1(q_1) + w_2(q_1, q_2) - g(a_1^*) - g(a_2^*))) \mid a^*] \ge u(2\bar w)$   (IR)

We can choose $w_2^*(q_1, q_2)$ so that $E[-\exp(-r(w_2^*(q_1, q_2) - g(a_2^*))) \mid a_2^*] = u(\bar w)$ for all $q_1$:
- Add $\delta(q_1)$ to all $w_2^*(q_1, q_2)$
- Subtract $\delta(q_1)$ from $w_1^*(q_1)$

This does not affect $w_1^*(q_1) + w_2^*(q_1, q_2)$ for any realization $(q_1, q_2)$, and principal and agent only care about this sum.

Sequential efficiency implies that in the second period, after the realization of $q_1$, the principal chooses $\hat a_2, \hat w_2$ to maximize

  $E[q_2 - \hat w_2(q_2) \mid \hat a_2]$

subject to

  $-\exp(-r(w_1(q_1) - g(a_1^*)))\,E[\exp(-r(\hat w_2(q_2) - g(\hat a_2))) \mid \hat a_2] \ge -\exp(\dots)\,E[\exp(\dots) \mid \tilde a_2]$   (IC2)
  $-\exp(-r(w_1(q_1) - g(a_1^*)))\,E[\exp(-r(\hat w_2(q_2) - g(\hat a_2))) \mid \hat a_2] \ge \exp(-r(w_1(q_1) - g(a_1^*)))\,u(\bar w)$   (IR2)

As period one factors out (this is because there are no wealth effects), the optimal second-period contract $\hat a_2, \hat w_2$ is the optimal short-term contract: $\hat w_2(q_2) = w^s(q_2)$, independent of $q_1$.

Taking $\hat a_2, \hat w_2$ as given, the principal chooses $\hat a_1, \hat w_1$ to maximize

  $E[q_1 - w_1(q_1) \mid a_1]$

subject to

  $E[-\exp(-r(\hat w_1(q_1) - g(\hat a_1))) \mid \hat a_1]\,E[\exp(-r(\hat w_2(q_2) - g(\hat a_2))) \mid \hat a_2] \ge E[\dots \mid \tilde a_1]\,E[\dots \mid \hat a_2]$   (IC1)
  $E[-\exp(-r(\hat w_1(q_1) - g(\hat a_1))) \mid \hat a_1]\,E[\exp(-r(\hat w_2(q_2) - g(\hat a_2))) \mid \hat a_2] \ge u(2\bar w)$   (IR1)

This is again the static problem, proving (1). (2) follows because every function of a binary $q$ is linear, and a Brownian motion is approximated by a binary process.

4.3 Discussion

- Not very general, but extends to any number of periods
- Stationarity is not so surprising: technology independent across periods, no consumption smoothing, no wealth effects, hence no benefits from long-term contracting
- The agent benefits from the ability to adjust his actions according to realized output
- Consider the generalization with $t \in [0, T]$ and $dq_t = a\,dt + \sigma\,dW_t$, so that $q_T \sim N(aT, \sigma^2 T)$
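The "period one factors out" step rests on exponential utility turning sums into products; a small exact check, with hypothetical discrete distributions for the two period payoffs:

```python
import math, itertools

# With u(x) = -exp(-r*x) and independent period payoffs X = w1 - g(a1),
# Y = w2 - g(a2), expected utility multiplies across periods, so the
# period-1 term is a common factor in both sides of (IC2) and (IR2).
r = 0.5                           # CARA coefficient (assumed value)
X = [(1.0, 0.3), (2.0, 0.7)]      # (value, probability), hypothetical
Y = [(0.5, 0.6), (1.5, 0.4)]      # (value, probability), hypothetical

joint = sum(px * py * -math.exp(-r * (x + y))
            for (x, px), (y, py) in itertools.product(X, Y))
factored = (sum(px * -math.exp(-r * x) for x, px in X)
            * sum(py * math.exp(-r * y) for y, py in Y))
print(joint, factored)   # identical
```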

- If the agent cannot adjust his action, the principal can implement the first-best via a tail test and appropriate surplus
- The tail test does not work if the agent can adjust effort: he can slack at first, and only start working if $q_t$ drifts down too far
- More generally, with any (say) concave reward function $w(q_T)$, the agent will work in the steep region (after bad realizations) and shirk in the flat region (after good realizations)
- Providing stationary incentives that always induce the static optimum $a$ is better

5 Continuous Time (Sannikov 2008)

5.1 Setup

- Continuous time $t \in [0, \infty)$, discount rate $r$
- Think about time as tiny discrete increments $dt$, and remember $r\,dt \approx 1 - e^{-r\,dt}$
- Time-separable technology $dX_t = a_t\,dt + \sigma\,dZ_t$
- Brownian motion $Z_t$ (also called a Wiener process), characterized by:
  - Sample paths $Z_t$ continuous almost surely
  - Increments independent and stationary with distribution $Z_{t+\Delta} - Z_t \sim N(0, \Delta)$
- Wealth of the agent
  $w = r \int_0^\infty e^{-rt} (u(c_t) - g(a_t))\,dt$
  (the $r$ annuitizes the value of the agent and renders it comparable to $u$ and $g$)
- Cost function $g$ with $g(0) = 0$, $g' > 0$, $g'' > 0$
- Consumption utility $u$ with $u(0) = 0$, $u' > 0$, $\lim_{x \to \infty} u'(x) = 0$
- Consumption = wage; no hidden savings
- Revenue of the firm
  $\Pi = r\,E\left[\int_0^\infty e^{-rt}\,(dX_t - c_t\,dt)\right] = r\,E\left[\int_0^\infty e^{-rt}\,(a_t - c_t)\,dt\right]$
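The output process can be simulated by an Euler discretization to check the stated distribution of the terminal value; the parameter values below are illustrative assumptions:

```python
import random, statistics

# Discretize dX_t = a*dt + sigma*dZ_t; the terminal value X_T should be
# approximately N(a*T, sigma^2 * T).
random.seed(1)
a, sigma, T = 0.5, 1.0, 2.0
n_steps, n_paths = 200, 4000
dt = T / n_steps

def terminal_output():
    x = 0.0
    for _ in range(n_steps):
        x += a * dt + sigma * random.gauss(0.0, dt ** 0.5)
    return x

xs = [terminal_output() for _ in range(n_paths)]
print(statistics.mean(xs), statistics.variance(xs))   # roughly a*T and sigma^2*T
```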

5.2 Firm's Problem

Choose $a_t, c_t$ as functions of the output history $X_s, s \le t$, to maximize $\Pi$ subject to (IC) and (IR).

Recursive approach: let $w_t$ be the continuation wealth of the agent (in utils),

  $w_t = r \int_t^\infty e^{-r(s-t)} (u(c_s) - g(a_s))\,ds$

The principal chooses $c_t, a_t, w_{t+dt}$ to maximize $\Pi_t$ subject to (IC), (IR) and

  $w_t = r\,dt\,(u(c_t) - g(a_t)) + (1 - r\,dt)\,E[w_{t+dt} \mid a_t]$   (PK)

The promise-keeping constraint is not a proper constraint, but just an accounting identity that ensures that $w_t$ is actually the continuation value of the agent:
- Current payoff $u(c_t) - g(a_t)$
- Continuation wealth $E[w_{t+dt}]$

Example: retiring an agent with wealth $w_t$.
- Instruct the agent not to take any effort, $a_t = 0$
- Pay out wealth as an annuity, $u(c_t) = w_t$
- Firm profit from this contract: $\Pi_0(u(c)) = -c$
- Draw a picture of $\Pi_0$ and $\Pi$

Decompose the principal's NPV into current profits and continuation value:

  $\Pi_t = r\,(a_t - c_t)\,dt + (1 - r\,dt)\,E[\Pi_{t+dt}]$
  $\Rightarrow\ r\,\Pi_t\,dt = r\,(a_t - c_t)\,dt + E[d\Pi_t]$

The principal's expected profit $\Pi_t$ is a function of the state variable, i.e. of the agent's wealth: $\Pi_t = \Pi(w_t)$. The expected value of the increment $E[d\Pi_t]$ can be calculated with Ito's Lemma.

Lemma 5 (Ito). Consider a stochastic process $w_t$ governed by

  $dw_t = \mu(w_t)\,dt + \sigma\,dZ_t$

and a process $\Pi_t = \Pi(w_t)$ that is a function of this original process. Then the expected increment of $\Pi_t$ is given by

  $E[d\Pi(w)] = \left( \mu(w)\,\Pi'(w) + \tfrac{1}{2}\,\sigma^2\,\Pi''(w) \right) dt$

Proof. By Taylor expansion,

  $\Pi(w_{t+dt}) - \Pi(w_t) = \Pi(w_t + \mu(w_t)\,dt + \sigma\,dZ_t) - \Pi(w_t)$
  $= (\mu(w_t)\,dt + \sigma\,dZ_t)\,\Pi'(w_t) + \tfrac{1}{2}\,(\mu(w_t)\,dt + \sigma\,dZ_t)^2\,\Pi''(w_t) + o(dt)$
  $= (\mu(w_t)\,dt + \sigma\,dZ_t)\,\Pi'(w_t) + \tfrac{1}{2}\,\sigma^2\,dZ_t^2\,\Pi''(w_t) + o(dt)$

so

  $E[\Pi(w_{t+dt}) - \Pi(w_t)] = \mu(w_t)\,dt\,\Pi'(w_t) + \tfrac{1}{2}\,\sigma^2\,dt\,\Pi''(w_t) + o(dt)$

The reason the Ito term $\tfrac{1}{2}\sigma^2\Pi''(w_t)$ comes in is that $w_t$ oscillates so strongly, with standard deviation $\sigma\sqrt{dt}$ in every $dt$.

5.3 Solving the Agent's Problem

5.3.1 Evolution of Wealth

Subtracting $(1 - r\,dt)\,w_t$ from (PK),

  $r\,dt\,w_t = r\,dt\,(u(c_t) - g(a_t)) + (1 - r\,dt)\,(E[w_{t+dt}] - w_t)$
  $\approx r\,dt\,(u(c_t) - g(a_t)) + E[dw_t]$

so the agent's value is a function of
- his current consumption $u(c_t)$
- his current effort $g(a_t)$
- the expected drift of his value

Reversely,

  $E[dw_t] = r\,(w_t - (u(c_t) - g(a_t)))\,dt$

To get at the actual dynamics of $w_t$, assume that value increments $dw_t$ are linear in output increments
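For a quadratic test function the Ito correction can be verified exactly, since $E[dz] = 0$ and $E[dz^2] = dt$ make the expectation computable in closed form; the parameter values are assumed for illustration:

```python
# Check Lemma 5 for pi(w) = w^2, where dw = mu*dt + sigma*dz, dz ~ N(0, dt).
# Exactly: E[pi(w + dw)] - pi(w) = E[2*w*dw + dw^2]
#                                = 2*w*mu*dt + mu^2*dt^2 + sigma^2*dt,
# while the lemma predicts (mu*pi'(w) + 0.5*sigma^2*pi''(w))*dt.
mu, sigma, w, dt = 0.3, 0.8, 1.5, 1e-4

exact = 2 * w * mu * dt + mu ** 2 * dt ** 2 + sigma ** 2 * dt
ito = (mu * 2 * w + 0.5 * sigma ** 2 * 2) * dt   # pi' = 2w, pi'' = 2
print(exact - ito)   # mu^2 * dt^2, vanishing faster than dt
```

The discrepancy is exactly the $o(dt)$ term of the Taylor expansion, which is why the lemma is an equality in the limit.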

$dX_t$, with wealth-dependent sensitivity $r\,b(w_t)$:

  $dw_t - E[dw_t] = r\,b(w_t)\,(dX_t - E[dX_t])$, where $E[dX_t] = E[a_t\,dt + \sigma\,dZ_t] = a_t\,dt$

Therefore, the actual increment of wealth is governed by

  $dw_t = E[dw_t] + (dw_t - E[dw_t])$
  $= r\,(w_t - (u(c_t) - g(a_t)))\,dt + r\,b(w_t)\,(dX_t - a_t\,dt)$
  $= r\,(w_t - (u(c_t) - g(a_t)))\,dt + r\,b(w_t)\,\sigma\,dZ_t$

Wealth is thus drifting
- up when wealth, and hence interest $r w_t$, is high
- down when consumption $c_t$ is high
- up when effort $a_t$ is high

and wiggling
- up when production exceeds expectations, $dX_t > a_t\,dt$
- down when production falls short of expectations, $dX_t < a_t\,dt$

5.3.2 Agent IC

The agent's effort $a_t$ affects his value $r w_t$ through
- the current marginal cost $r g'(a)$
- the marginal continuation value $r b(w_t)$

Then, if an agent with wealth $w$ is instructed to exert effort $a(w)$, his FOC becomes

  $r\,g'(a(w)) = r\,b(w)$   (IC)

So if the principal wants to incentivize effort $a = a(w)$, he needs to link the evolution of wealth $w_t$ to output $dX_t$ via $b(w)$. By (IC) we can write $b$ as a function of $a$: $\gamma(a) = g'(a)$.
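A small sketch of the FOC: with sensitivity $b$, the agent's local effort trade-off is $\max_a\, b\,a - g(a)$. The quadratic cost function is an assumption for illustration, not from the notes:

```python
# With g(a) = a^2/2 (assumed), g'(a) = a, so setting b = gamma(a*) = g'(a*)
# makes the target effort the agent's best response to b*a - g(a).
def g(a):
    return a * a / 2.0

a_star = 0.7
b = a_star                                   # b = gamma(a*) = g'(a*)
grid = [i / 1000.0 for i in range(2001)]     # candidate efforts in [0, 2]
best = max(grid, key=lambda a: b * a - g(a))
print(best)   # 0.7
```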

5.4 The Firm's Problem, Continued

In the case at hand, the wealth process has drift $\mu(w_t) = r\,(w_t - (u(c_t) - g(a_t)))$ and volatility $r\,\gamma(a_t)\,\sigma$, and we get the HJB equation

  $r\,\Pi(w) = \max_{a, c}\ r\,(a - c) + r\,(w - (u(c) - g(a)))\,\Pi'(w) + \tfrac{1}{2}\,r^2\,\gamma(a)^2\,\sigma^2\,\Pi''(w)$   (*)

So the principal chooses plans $a = a(w) > 0$ and $c = c(w)$ to maximize the RHS of (*).

The agent has to retire at some point $w_r$:
- Marginal utility of consumption $u'(c) \to 0$
- Marginal cost of effort $g'(a) \ge \varepsilon > 0$

Boundary conditions:
- $\Pi(0) = 0$: if the agent's wealth is 0, he can achieve this by setting future effort $a_t = 0$, yielding 0 to the firm
- $\Pi(w_r) = \Pi_0(w_r) = -u^{-1}(w_r)$: at some retirement wealth $w_r$, the agent retires
- $\Pi'(w_r) = \Pi_0'(w_r)$: smooth pasting; the profit function is smooth and equals $\Pi_0$ above $w_r$

Theorem 6. There is a unique concave function $\Pi \ge \Pi_0$ maximizing (*) under the above boundary conditions. The action and consumption profiles $a(w), c(w)$ constitute an optimal contract.

5.5 Properties of the Solution

5.5.1 Properties of $\Pi$

- $\Pi(0) = 0$, $\Pi'(0) > 0$: terminating the contract at $w = 0$ is inefficient, and $w > 0$ serves as insurance against termination
- $\Pi(w) < 0$ for large $w$, e.g. $w = w_r$, because the agent has been promised a lot of continuation utility

5.5.2 Properties of Optimal Effort $a$

Optimal effort maximizes

  $r\,a + r\,g(a)\,\Pi'(w) + \tfrac{1}{2}\,r^2\,\gamma(a)^2\,\sigma^2\,\Pi''(w)$

- Increased output $r\,a$
- Compensating the agent for effort through continuation wealth, $r\,g(a)\,\Pi'(w)$ (yes, this is positive for $w \approx 0$)
- Compensating the agent for income risk, $\tfrac{1}{2}\,r^2\,\gamma(a)^2\,\sigma^2\,\Pi''(w)$ (how so?)

Monotonicity of $a(w)$ is unclear:
- $\Pi'(w)$ is decreasing
- $\Pi''(w)$ could be increasing or decreasing
- As $r \to 0$, $a(w)$ is decreasing

5.5.3 Properties of Optimal Consumption $c$

Optimal consumption maximizes

  $-r\,c - r\,u(c)\,\Pi'(w)$

and thus

  $\frac{1}{u'(c^*)} = -\Pi'(w)$  or  $c^* = 0$

- $1/u'(c^*)$: cost of current consumption utility
- $-\Pi'(w)$: cost of continuation utility

Thus the agent does not consume as long as $w$ is small.

5.6 Extensions: Career Paths

- Performance-based compensation $c(w_t)$ serves as a short-term incentive
- Now incorporate long-term incentives into the model
- In the baseline model, the principal's outside option was retirement, $\Pi_0(u(c)) = -c$
- We can model quitting, replacement or promotion by different outside options $\tilde\Pi_0$
- This only changes the boundary conditions, but not the differential equation determining $\Pi$
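The consumption FOC can be worked out in closed form for a concrete utility function; $u(c) = \sqrt{c}$ and the value of $\Pi'(w)$ below are assumptions for illustration only:

```python
# FOC: 1/u'(c*) = -pi'(w), or c* = 0. With u(c) = sqrt(c) (assumed),
# u'(c) = 1/(2*sqrt(c)), so 1/u'(c) = 2*sqrt(c) and c* = pi'(w)^2 / 4
# whenever -pi'(w) > 0; on the increasing part of pi (small w), c* = 0.
def c_star(pi_prime):
    if pi_prime >= 0.0:          # -pi'(w) <= 0: paying in continuation utility is cheaper
        return 0.0
    return pi_prime ** 2 / 4.0

pi_prime = -1.0                  # hypothetical slope in the region where pi is decreasing
c = c_star(pi_prime)
print(c, 2 * c ** 0.5)           # FOC check: 2*sqrt(c*) equals -pi'(w)
```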

- If the agent can quit at any time with outside utility $\tilde w$, then
  $\tilde\Pi_0(u(c)) = \begin{cases} -c & \text{if } u(c) > \tilde w \\ 0 & \text{if } u(c) = \tilde w \end{cases}$
- If the agent can be replaced at profit $D$ to the firm, then $\tilde\Pi_0(u(c)) = D - c$
- If the agent can be promoted at cost $K$, resulting in a new value function $\Pi_p$, then $\tilde\Pi_0(w) = \max\{\Pi_0(w),\ \Pi_p(w) - K\}$
- Find that quitting < benchmark < replacement, promotion

5.7 Stuff

Change in the agent's value:

  $E[dw_t] = E[w_{t+dt}] - w_t$
  $dw_t = w_{t+dt} - w_t = r\,dt\,(w_{t+dt} - [u(c_t) - g(a_t)]) + (1 - r\,dt)\,(w_{t+dt} - E[w_{t+dt}])$
  $\approx r\,dt\,(w_t - [u(c_t) - g(a_t)]) + (w_{t+dt} - E[w_{t+dt}])$

Drift in the agent's value:
- Increasing via interest at rate $r$ on current wealth $w_t$
- Decreasing in current consumption $u(c_t)$ (if the agent eats today, he has less tomorrow)
- Increasing in current effort $g(a_t)$

Assume that value increments are linear in output increments:

  $dw_t = b(w_t)\,dX_t = b(w_t)\,a_t\,dt + b(w_t)\,\sigma\,dZ_t$
  $E[dw_t] = b(w_t)\,E[dX_t] = b(w_t)\,a_t\,dt$

Then,

  $w_{t+dt} - E[w_{t+dt}] = dw_t - E[dw_t] = b(w_t)\,dX_t - E[b(w_t)\,dX_t] = b(w_t)\,\sigma\,dZ_t$

Summing up, wealth moves through
- interest on the annuity of accumulated wealth $w_t$, minus
- current utility $u(c_t) - g(a_t)$ (wealth will increase, $dw_t > 0$, if the agent eats less or works more today, i.e. $c_t$ lower or $a_t$ higher), plus
- random shocks