Game Theory and Economics of Contracts
Lecture 5: Static Single-Agent Moral Hazard Model
Yu (Larry) Chen, School of Economics, Nanjing University, Fall 2015

Principal-Agent Relationship

The principal-agent relationship is a common economic occurrence:
- Two parties (roles), principal(s) (P for short) and agent(s) (A for short), interact in some context.
- P needs A's participation in a certain economic activity to achieve an economic goal of hers.
- But A normally holds some asymmetric information, either a hidden action or a hidden type, which is observable only to himself. He usually has objectives that differ from P's, so he is inclined to use his informational advantage to benefit himself.
- P normally has full bargaining power, so she needs to design an incentive scheme for A that motivates the informed agent to behave in her best interest. => Contracting!

Overview: Moral Hazard

- From today on we address the first type of information asymmetry: moral hazard.
- In moral hazard models, we highlight situations in which A commits to taking certain actions for P.
- Not surprisingly, P will want to influence A's actions. This influence will often take the form of a contract under which P compensates A contingent on either his actions or the consequences of his actions.
- But such actions are normally observable only to A himself and unobservable to P, so they are also not contractible; that is, P cannot make the contract directly contingent on the actions. The two parties have different objectives, so A is inclined to pick the action that serves his own interest rather than P's. A's action thus imposes a (negative) externality on P: moral hazard (from A's hidden actions).

Overview: Moral Hazard

- "Moral hazard" is a term from the early insurance literature, but it is used much more generally nowadays.
- Someone who insures an asset might then fail to maintain the asset properly (e.g., park his car in a bad neighborhood). Typically, such behavior was either unobservable by the insurance company or too difficult to contract against directly; hence, the insurance contract could not be made directly contingent on such behavior.
- Because this behavior imposes an externality on the insurance company, it became known as moral hazard, and insurance companies were eager to develop contracts that guarded against it.

Overview: Moral Hazard

- E.g., many insurance contracts have deductibles: the first k dollars of damage must be paid by the insured rather than the insurance company. Because the insured now has k dollars at risk, he will think twice about parking in a bad neighborhood. That is, the insurance contract is designed to mitigate the externality that A (the insured) imposes on P (the insurance company).
- Although principal-agent analysis is more general than this, the name "moral hazard" has stuck, and so the types of problems considered here are often referred to as moral-hazard problems.

More Examples of Moral Hazard Problems

Single-Agent Moral Hazard Model: Setting

- Two players are in an economic relationship characterized by the following two features:
  1. the actions of one player, A, affect the well-being of the other player, P;
  2. P and A can agree ex ante to a reward schedule by which P pays A. The reward schedule represents an enforceable contract (i.e., if there is a dispute about whether a player has lived up to the terms of the contract, then a court or similar body can adjudicate the dispute).
- A's action is hidden; that is, he knows what action he has taken, but P does not directly observe his action. Moreover, A has complete discretion in choosing his action from some set of feasible actions.

Single-Agent Moral Hazard Model: Setting

- The actions determine, usually stochastically, some performance measures (or outcomes, signals). The contract is a function of (at least some of) these performance variables. In particular, the contract can be a function of the observable, verifiable performance measures. Information is verifiable if it can be observed perfectly (i.e., without error) by third parties, who might be called upon to adjudicate a dispute between P and A.
- The structure of the situation is common knowledge between the players.

A Typical Example

- Consider a salesperson who has discretion over the amount of effort he expends promoting his company's products.
- Many of these actions are unobservable by his company. The company can, however, measure in a verifiable way the number of orders or the revenue he generates.
- Because these measures are presumably correlated with his actions (i.e., the harder he works, the more sales he generates on average), it may make sense for the company to base his pay on his sales, i.e., to put him on commission, to induce him to expend the appropriate level of effort.

Additional Assumptions for the Standard Static P-A MH Model

- The players are symmetrically informed at the time they agree to a reward schedule.
- Bargaining is take-it-or-leave-it: having full bargaining power, P proposes a contract (reward schedule), which A either accepts or rejects. If he rejects it, the game ends and the players receive their reservation utilities (their expected utilities from pursuing their next best alternatives). If he accepts, then both parties are bound by the contract. Contracts cannot be renegotiated.
- Once the contract has been agreed to, the only player to take further actions is A. The game is played once. In particular, there is only one period in which A takes actions, and A completes his actions before any performance measures are realized.

Standard Static P-A MH Model

- One P makes a take-it-or-leave-it contract offer to a single A with outside reservation utility $r \in \mathbb{R}$ under conditions of symmetric information.
- If the contract is accepted, A then chooses an action, $a \in \mathcal{A}$, which has a (usually stochastic) effect on an outcome, $x \in X$, which P cares about and which is typically informative about A's action. P may observe some additional signal, $s \in S$, which may also be informative about A's action. The simplest version of this model casts $x$ as monetary profit and sets $s = \emptyset$; we focus on this simple model for now, ignoring information besides $x$.
- $x$ is observable and verifiable, so enforceable contracts can be written on the variable $x$. P's contract offer takes the form of a wage schedule, $w : X \to \mathbb{R}$, according to which A is rewarded. P has residual claim rights, i.e., P keeps $x - w(x)$ as her profit.
- We also assume that P has full commitment and will not alter the contract $w(x)$ later.

Standard Static P-A MH Model

- We will also need to assume $w \in \mathcal{F}$, where $\mathcal{F}$ is the feasible contract set; e.g., $\mathcal{F}$ could be the set of all linear functions from $X$ to $\mathbb{R}$.
- A takes a hidden action $a \in \mathcal{A}$ which yields a monetary return according to a conditional probability measure $P(x \mid a)$ (a probability distribution in the continuous case, or probability point masses in the discrete case). The action has the effect of stochastically improving $x$.
- But A's action entails a monetary disutility $c(a)$, which is continuously differentiable, increasing, and strictly convex. P's monetary utility is $V(x - w(x))$, where $V' > 0 \geq V''$. A's net utility is separable in the cost of effort and money:
  $$U(w(x), a) \equiv u(w(x)) - c(a), \quad \text{where } u' > 0 \geq u''.$$

In the Language of Game Theory

- P and A play a sequential (Stackelberg) game, normally with exogenous uncertainty. P moves first, and A follows.
- A's strategy specifies his action in response to any contract offer by P. P's strategy is a contract offer and, technically, an action recommendation (to break ties when A has several best responses to a given contract offer).
- We then look for the subgame perfect equilibrium.
- Game tree.

General Static P-A Problem with MH

- P solves the following program:
  $$\max_{w \in \mathcal{F},\, a \in \mathcal{A}} \int_X V(x - w(x))\, P(dx \mid a)$$
  $$\text{s.t. } \int_X u(w(x))\, P(dx \mid a) - c(a) \geq r, \quad \text{(IR)}$$
  $$\phantom{\text{s.t. }} a \in \arg\max_{a' \in \mathcal{A}} \int_X u(w(x))\, P(dx \mid a') - c(a'), \quad \text{(IC)}$$
  where the first constraint is A's participation or individual rationality (IR) constraint, and the second is A's incentive compatibility (IC) constraint. Any behavior is motivated by economic interests!
- Interpretation: why is $a$ also a choice variable controlled by P?
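To make the program concrete, here is a minimal numerical sketch (my own illustration, not from the lecture) of a discretized version: two actions, two outcomes, $u(w) = \sqrt{w}$, a risk-neutral P who wants to implement the high action, and a brute-force search over wage schedules satisfying IR and IC. All parameter values are illustrative assumptions.

import itertools
import numpy as np

# Illustrative primitives (assumed, not lecture data):
# two outcomes x_L < x_H, outcome probabilities under the low/high action,
# effort costs, reservation utility r, and u(w) = sqrt(w).
x = np.array([0.0, 10.0])
p_low  = np.array([0.7, 0.3])     # f(x | a_L)
p_high = np.array([0.3, 0.7])     # f(x | a_H)
c = {"L": 0.0, "H": 0.8}
r = 1.0
u = np.sqrt

def agent_utility(w, p, cost):
    return p @ u(w) - cost

# Search only contracts that implement the high action (cf. the two-action case later).
best = None
grid = np.linspace(0.0, 10.0, 201)            # coarse wage grid; refine for more accuracy
for w_l, w_h in itertools.product(grid, grid):
    w = np.array([w_l, w_h])
    ic = agent_utility(w, p_high, c["H"]) >= agent_utility(w, p_low, c["L"])
    ir = agent_utility(w, p_high, c["H"]) >= r
    if ic and ir:
        profit = p_high @ (x - w)             # risk-neutral principal keeps x - w(x)
        if best is None or profit > best[0]:
            best = (profit, w)

print("approximate second-best wages (w(x_L), w(x_H)):", best[1], "profit:", best[0])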

Environment with Differentiability

- It is very useful to consider instead the density and distribution induced over $x$ for a given action; this is referred to as the parameterized distribution characterization.
- Let $X = [\underline{x}, \bar{x}]$ be the support of the outcome, and let $P(x \mid a)$ be associated with the cumulative distribution function $F(x \mid a)$ and density $f(x \mid a) > 0$ for all $x \in X$, both conditional on $a$.
- $f_a$ and $f_{aa}$ exist and are continuous.
- $F_a(x \mid a) < 0$ for all $x \in (\underline{x}, \bar{x})$, i.e., the action produces a first-order stochastically dominant shift on $X$.
- Since our support is fixed, $F_a(\underline{x} \mid a) = F_a(\bar{x} \mid a) = 0$ for any $a$.
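A concrete family satisfying all of these conditions (my illustration, not from the slides): take $X = [0, 1]$, $\mathcal{A} = [0, 1)$, and
$$f(x \mid a) = 1 + a(2x - 1), \qquad F(x \mid a) = x - a\,x(1 - x).$$
Then $f(x \mid a) > 0$ on $X$, $f_a = 2x - 1$ and $f_{aa} = 0$ exist and are continuous, $F_a(x \mid a) = -x(1 - x) < 0$ on $(0, 1)$, and $F_a(\underline{x} \mid a) = F_a(\bar{x} \mid a) = 0$, so higher effort indeed shifts the outcome distribution up in the first-order stochastic dominance sense.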

Full Information Benchmark (First Best)

- Let's begin with the full-information outcome, where effort is observable and verifiable.
- P chooses $a$ and $w$ to solve
  $$\max_{w \in \mathcal{F},\, a \in \mathcal{A}} \int_{\underline{x}}^{\bar{x}} V(x - w(x)) f(x \mid a)\, dx$$
  $$\text{s.t. } \int_{\underline{x}}^{\bar{x}} u(w(x)) f(x \mid a)\, dx - c(a) \geq r.$$
  Only the IR constraint is needed, due to full information.
- The Lagrangian is
  $$\mathcal{L} = \int_{\underline{x}}^{\bar{x}} \bigl[V(x - w(x)) + \lambda u(w(x))\bigr] f(x \mid a)\, dx - \lambda\bigl(c(a) + r\bigr),$$
  where $\lambda$ is the Lagrange multiplier associated with the IR constraint; it represents the shadow price of income to A in each state.

Full Information Benchmark (First Best)

- Assuming an interior solution, the first-order conditions, after simplification, are
  $$\frac{V'(x - w(x))}{u'(w(x))} = \lambda \quad \forall x \in X,$$
  $$\int_{\underline{x}}^{\bar{x}} \bigl[V(x - w(x)) + \lambda u(w(x))\bigr] f_a(x \mid a)\, dx = \lambda c'(a),$$
  and the IR constraint is binding (Holmstrom 1979).
- The first condition is known as the Borch rule: the ratios of marginal utilities of income are equated across states under an optimal contract. Note that it holds for every $x$, not just in expectation. The second condition is the choice-of-effort condition.
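A standard special case worth noting (consistent with the Borch rule above, though not spelled out on the slide): if P is risk neutral, then $V' \equiv 1$ and the first-best contract fully insures the risk-averse A,
$$u'(w(x)) = \frac{1}{\lambda} \ \ \forall x \in X \;\Rightarrow\; w(x) \equiv w^{FB}, \qquad u(w^{FB}) = r + c(a^{FB}),$$
with the constant wage pinned down by the binding IR constraint.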

Hidden Action Case (Second Best)

- We now suppose that the level of the action cannot be contracted upon and that A is risk averse: $u'' < 0$.
- P solves the following program:
  $$\max_{w \in \mathcal{F},\, a \in \mathcal{A}} \int_{\underline{x}}^{\bar{x}} V(x - w(x)) f(x \mid a)\, dx$$
  $$\text{s.t. } \int_{\underline{x}}^{\bar{x}} u(w(x)) f(x \mid a)\, dx - c(a) \geq r \quad \text{(IR)}$$
  $$\phantom{\text{s.t. }} a \in \arg\max_{a' \in \mathcal{A}} \int_{\underline{x}}^{\bar{x}} u(w(x)) f(x \mid a')\, dx - c(a') \quad \text{(IC)}$$
- The IC constraint implies (assuming an interior optimum) that
  $$\int_{\underline{x}}^{\bar{x}} u(w(x)) f_a(x \mid a)\, dx - c'(a) = 0,$$
  $$\int_{\underline{x}}^{\bar{x}} u(w(x)) f_{aa}(x \mid a)\, dx - c''(a) \leq 0,$$
  which are the local first- and second-order conditions for a maximum.

First-Order Approach (FOA) to Incentive Contracts

- The FOA to incentive contracts is to maximize subject to the first-order condition rather than the full IC constraint, and then check ex post whether the solution indeed satisfies IC. Let's ignore questions of the validity of this procedure for now.
- Using $\mu$ as the multiplier on the first-order condition that replaces IC, the Lagrangian of the FOA program is
  $$\mathcal{L} = \int_{\underline{x}}^{\bar{x}} \bigl[V(x - w(x)) + \lambda u(w(x))\bigr] f(x \mid a)\, dx - \lambda\bigl(c(a) + r\bigr) + \mu\left(\int_{\underline{x}}^{\bar{x}} u(w(x)) f_a(x \mid a)\, dx - c'(a)\right).$$
- The first-order condition with respect to $w(x)$ is
  $$\frac{V'(x - w(x))}{u'(w(x))} = \lambda + \mu \frac{f_a(x \mid a)}{f(x \mid a)} \quad \forall x \in X.$$
- Modified Borch rule: the marginal rates of substitution may vary across states when $\mu > 0$, so as to account for the incentive effect of $w(x)$. Thus risk sharing will generally be inefficient compared with the first best.
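To see the modified Borch rule deliver an explicit wage shape (my illustration, not from the slides), take a risk-neutral P and $u(w) = 2\sqrt{w}$, so that $1/u'(w) = \sqrt{w}$. Wherever the right-hand side is positive,
$$\sqrt{w(x)} = \lambda + \mu \frac{f_a(x \mid a)}{f(x \mid a)} \;\Rightarrow\; w(x) = \left(\lambda + \mu \frac{f_a(x \mid a)}{f(x \mid a)}\right)^{2},$$
so the wage rises exactly with the likelihood ratio: outcomes that are stronger evidence of high effort are paid more.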

First-Order Approach (FOA) to Incentive Contracts

- Consider a simple two-action (finite) case in which P wishes to induce the high action: $\mathcal{A} = \{a_L, a_H\}$. Then the IC constraint is the inequality
  $$\int_{\underline{x}}^{\bar{x}} u(w(x)) \bigl[f(x \mid a_H) - f(x \mid a_L)\bigr]\, dx \geq c(a_H) - c(a_L).$$
- The first-order condition of the associated Lagrangian is
  $$\frac{V'(x - w(x))}{u'(w(x))} = \lambda + \mu \frac{f(x \mid a_H) - f(x \mid a_L)}{f(x \mid a_H)} \quad \forall x \in X.$$
- In both cases, provided $\mu > 0$, A is rewarded for outcomes that have higher relative frequency under the high action.
- Theorem 1 (Holmstrom, 1979). Assume that the FOA program is valid. Then at the optimum of the FOA program, $\mu > 0$. The proof of the theorem relies upon first-order stochastic dominance, $F_a(x \mid a) < 0$, and risk aversion, $u'' < 0$.
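Continuing the illustrative two-action, two-outcome numbers from the earlier sketch (assumptions, not lecture data), one can solve the binding IR and IC conditions for the wages and then back out the multipliers from the first-order condition above, confirming $\mu > 0$ as Theorem 1 predicts:

import numpy as np

# Same illustrative primitives as before: u(w) = sqrt(w), risk-neutral principal.
p_low  = np.array([0.7, 0.3])          # f(x | a_L)
p_high = np.array([0.3, 0.7])          # f(x | a_H)
c_low, c_high, r = 0.0, 0.8, 1.0

# With both IR and IC binding, solve for the agent's utility levels u_i = sqrt(w_i):
#   p_high @ u - c_high = r                       (IR)
#   (p_high - p_low) @ u = c_high - c_low         (IC)
A = np.vstack([p_high, p_high - p_low])
b = np.array([r + c_high, c_high - c_low])
u_levels = np.linalg.solve(A, b)
w = u_levels ** 2
print("second-best wages:", w)          # ~ (0.16, 5.76), matching the brute-force search

# Modified Borch rule with V' = 1 and u'(w) = 1/(2 sqrt(w)):
#   2 sqrt(w_i) = lambda + mu * (f(x_i|a_H) - f(x_i|a_L)) / f(x_i|a_H)
likelihood_term = 1.0 - p_low / p_high
M = np.column_stack([np.ones(2), likelihood_term])
lam, mu = np.linalg.solve(M, 2.0 * np.sqrt(w))
print("lambda =", lam, "mu =", mu, "(Theorem 1: mu > 0)")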

Monotonicity of the Optimal Contracts

- The monotone likelihood ratio property (MLRP) is satisfied by a distribution $F$ and its density $f$ iff
  $$\frac{f_a(x \mid a)}{f(x \mid a)} \text{ is increasing in } x.$$
- When the action can take only two values, so that $f$ is non-differentiable in $a$, the analogous MLRP condition is that
  $$\frac{f(x \mid a_H) - f(x \mid a_L)}{f(x \mid a_H)} \text{ is increasing in } x.$$
- Intuition: the higher the observed value $x$, the more likely it is to have been drawn from a distribution $F(x \mid a)$ with a higher $a$.
- Note that MLRP implies $F_a(x \mid a) < 0$ for all $x \in (\underline{x}, \bar{x})$ (i.e., first-order stochastic dominance):
  $$F_a(x \mid a) = \int_{\underline{x}}^{x} \frac{f_a(\tilde{x} \mid a)}{f(\tilde{x} \mid a)} f(\tilde{x} \mid a)\, d\tilde{x} < 0,$$
  since $\int_{\underline{x}}^{\bar{x}} f_a(\tilde{x} \mid a)\, d\tilde{x} = 0$ and MLRP makes $f_a$ negative for low $x$ and positive for high $x$, so every partial integral is negative.
- Theorem (Holmstrom, 1979). Under the FOA, if $F$ satisfies the MLRP, then the wage contract $w(x)$ is increasing in $x$.
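The illustrative family introduced in the differentiability slide also satisfies the MLRP (again my example, not the lecture's):
$$\frac{f_a(x \mid a)}{f(x \mid a)} = \frac{2x - 1}{1 + a(2x - 1)}, \qquad \frac{\partial}{\partial x}\left[\frac{2x - 1}{1 + a(2x - 1)}\right] = \frac{2}{\bigl(1 + a(2x - 1)\bigr)^{2}} > 0,$$
consistent with the first-order stochastic dominance $F_a(x \mid a) = -x(1 - x) < 0$ that the implication above requires.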

Value of Information

- Now assume that P and A can enlarge their contract to include other information, such as an observable and verifiable signal, $s$. When should $w$ depend upon $s$?
- $x$ is sufficient for $\{x, s\}$ with respect to $a \in \mathcal{A}$ iff $f$ is multiplicatively separable in $s$ and $a$, i.e.,
  $$f(x, s \mid a) = y(x \mid a)\, z(x, s).$$
  We say that $s$ is informative about $a \in \mathcal{A}$ whenever $x$ is not sufficient for $\{x, s\}$ with respect to $a \in \mathcal{A}$.
- Theorem 3 (Holmstrom [1979], Shavell [1979]). Assume that the FOA program is valid and yields $w(x)$ as a solution. Then there exists a new contract, $w(x, s)$, that strictly Pareto dominates $w(x)$ iff $s$ is informative about $a \in \mathcal{A}$.

Value of Information

- Proof: Using the FOA program, but allowing $w(\cdot)$ to depend upon $s$ as well as $x$, the first-order condition determining $w$ is
  $$\frac{V'(x - w(x, s))}{u'(w(x, s))} = \lambda + \mu \frac{f_a(x, s \mid a)}{f(x, s \mid a)} \quad \forall (x, s),$$
  which is independent of $s$ iff $s$ is not informative about $a$.
- Implication: P can restrict attention to wage contracts that depend only upon a set of sufficient statistics for A's action, since we normally assume there is no cost to monitoring one more signal.
- Moreover, any informative signal about A's action should be included in the optimal contract! (But if the monitoring cost is considerable, this no longer holds.)
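A quick illustration of (un)informativeness (my example, not from the lecture): pure measurement noise added to the outcome is uninformative, whereas a noisy reading of the action itself is informative. If $s = x + \eta$ with $\eta$ independent of $(a, x)$, then
$$f(x, s \mid a) = f(x \mid a)\, g(s - x),$$
so the factorization holds, $f_a/f$ does not depend on $s$, and the optimal wage ignores $s$; by contrast, a signal such as $s = a + \eta$ generically breaks this factorization and should enter the contract.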

Validity of the First-Order Approach

- A distribution satisfies the Convexity of the Distribution Function Condition (CDFC) iff
  $$F\bigl(x \mid \gamma a + (1 - \gamma)a'\bigr) \leq \gamma F(x \mid a) + (1 - \gamma) F(x \mid a')$$
  for all $a, a' \in \mathcal{A}$ and $\gamma \in [0, 1]$ (i.e., $F_{aa}(x \mid a) \geq 0$).
- A useful special case: the linear distribution function condition,
  $$f(x \mid a) = a \bar{f}(x) + (1 - a) \underline{f}(x),$$
  where the distribution associated with $\bar{f}$ first-order stochastically dominates the one associated with $\underline{f}$.
- Theorem (Rogerson, 1985). The first-order approach is valid if $F(x \mid a)$ satisfies the MLRP and CDFC conditions.
- Therefore, MLRP and CDFC together guarantee that the FOA program is valid and yields a monotonic wage schedule.
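The illustrative family used earlier is exactly a linear distribution function condition, so Rogerson's theorem applies to it (my example, not the lecture's):
$$f(x \mid a) = a \underbrace{(2x)}_{\bar{f}(x)} + (1 - a) \underbrace{1}_{\underline{f}(x)}, \qquad F_{aa}(x \mid a) = 0,$$
where $\bar{f}$ (with CDF $x^2$) first-order stochastically dominates the uniform $\underline{f}$; together with the MLRP verified above, the FOA is valid and the optimal wage is monotone for this family.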

A Natural Case: Linear Contracts with Normally Distributed Performance and Exponential Utility

- Performance is $x = a + \varepsilon$, where $\varepsilon \sim N(0, \sigma^2)$. P is risk neutral, while A has the utility function
  $$u(w, a) = -e^{-\gamma(w - c(a))},$$
  where $\gamma$ is the constant coefficient of absolute risk aversion ($\gamma = -U''/U'$) and $c(a) = \frac{1}{2} k a^2$.
- Linear contracts: $w = \alpha x + \beta$.
- P will solve
  $$\max_{a, \alpha, \beta} \ \mathbb{E}_\varepsilon(x - w)$$
  $$\text{s.t. } \mathbb{E}_\varepsilon\bigl(-e^{-\gamma(w - c(a))}\bigr) \geq r,$$
  $$\phantom{\text{s.t. }} a \in \arg\max_{a'} \mathbb{E}_\varepsilon\bigl(-e^{-\gamma(w - c(a'))}\bigr).$$
- Let $r = U(\underline{w})$ be A's default utility level; $\underline{w}$ is thus its certain monetary equivalent.
- Transform the original problem into one stated in certainty equivalents.
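As a sketch of where the certainty-equivalent transformation leads (standard CARA-normal algebra under the stated assumptions, not part of the transcribed slides): with $w = \alpha x + \beta$, A's certainty equivalent is $\alpha a + \beta - \frac{1}{2} k a^2 - \frac{1}{2} \gamma \alpha^2 \sigma^2$, so he chooses $a = \alpha / k$; with the IR constraint binding, P effectively maximizes total surplus net of the risk premium, which yields the textbook piece rate $\alpha^* = 1/(1 + \gamma k \sigma^2)$. A small numerical check with illustrative parameters:

import numpy as np

# Illustrative parameters (assumptions): CARA risk aversion, effort cost curvature,
# output noise variance, and the agent's outside certainty equivalent.
gamma, k, sigma2, w_bar = 2.0, 1.0, 0.25, 0.0

def principal_value(alpha):
    # Agent's best response to w = alpha*x + beta: maximize alpha*a - 0.5*k*a**2, so a = alpha/k.
    a = alpha / k
    # With IR binding in certainty-equivalent terms, beta just transfers surplus, so P's payoff
    # equals expected output minus effort cost minus the agent's risk premium minus w_bar.
    return a - 0.5 * k * a**2 - 0.5 * gamma * alpha**2 * sigma2 - w_bar

alphas = np.linspace(0.0, 1.0, 100001)
alpha_numeric = alphas[np.argmax(principal_value(alphas))]
alpha_closed_form = 1.0 / (1.0 + gamma * k * sigma2)    # textbook CARA-normal piece rate
print(alpha_numeric, alpha_closed_form)                  # both approximately 0.6667 here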