Bayesian Persuasion Online Appendix


Emir Kamenica and Matthew Gentzkow
University of Chicago

June 2010

1 Persuasion mechanisms

In this paper we study a particular game where Sender chooses a signal $\pi$ whose realization is observed by Receiver, who then takes her action. In Subsection 2.3 we made two important observations about this game. The first observation is that, as long as Receiver knows which $\pi$ Sender chose, it is not important to assume that she directly observes the realization $s$ from $\pi$. Specifically, if we instead assume that Sender observes $s$ privately and then sends a verifiable message about $s$, the same information is revealed in equilibrium and we obtain the same outcomes. The second observation is that Sender's gain from persuasion is weakly greater in the game that we study than in any other communication game, including Spence (1973), Crawford and Sobel (1982), or Milgrom (1981). In this section of the Online Appendix we provide a formal statement and proof of these two claims. To do so, we introduce the notion of a persuasion mechanism.

As before, Receiver has a continuous utility function $u(a, \omega)$ that depends on her action $a \in A$ and the state of the world $\omega \in \Omega$. Sender has a continuous utility function $v(a, \omega)$ that depends on Receiver's action and the state of the world. Sender and Receiver share a prior $\mu_0 \in \operatorname{int}(\Delta(\Omega))$. Let $a^*(\mu)$ denote the set of actions that maximize Receiver's expected utility given that her belief is $\mu$. We assume that there are at least two actions in $A$ and that for any action $a$ there exists a $\mu$ s.t. $a^*(\mu) = \{a\}$. The action space $A$ is compact and the state space $\Omega$ is finite. We will relax the latter assumption in the next section.

A persuasion mechanism $(\pi, c)$ is a combination of a signal and a message technology. Sender's private signal $\pi$ consists of a finite realization space $S$ and a family of distributions $\{\pi(\cdot \mid \omega)\}_{\omega \in \Omega}$ over $S$.
A message technology $c$ consists of a finite message space $M$ and a family of functions $\{c(\cdot \mid s) : M \to \overline{\mathbb{R}}_+\}_{s \in S}$; $c(m \mid s)$ denotes the cost to Sender of sending message $m$ after receiving signal realization $s$.[1] The assumptions that $S$ and $M$ are finite are without loss of generality (cf. Proposition 3) and are used solely for notational convenience.

A persuasion mechanism defines a game. The timing is as follows. First, nature selects $\omega$ from $\Omega$ according to $\mu_0$. Neither Sender nor Receiver observes nature's move. Then, Sender privately observes a realization $s \in S$ from $\pi(\cdot \mid \omega)$ and chooses a message $m \in M$. Finally, Receiver observes $m$ and chooses an action $a \in A$. Sender's payoff is $v(a, \omega) - c(m \mid s)$ and Receiver's payoff is $u(a, \omega)$. We represent Sender's and Receiver's (possibly stochastic) strategies by $\sigma$ and $\rho$, respectively. We use $\mu(\omega \mid m)$ to denote Receiver's posterior belief that the state is $\omega$ after observing $m$.

[1] $\overline{\mathbb{R}}_+$ denotes the affinely extended non-negative real numbers: $\overline{\mathbb{R}}_+ = \mathbb{R}_+ \cup \{\infty\}$. Allowing $c$ to take on the value of $\infty$ is useful for characterizing the cases where Sender cannot lie and cases where he must reveal all his information.
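To make the timing concrete, the following sketch simulates the Sender-preferred play of one honest persuasion mechanism in a hypothetical two-state example; the prior of 3/10, the signal probabilities, and the prosecutor/judge payoffs are illustrative assumptions, not part of the formal development.

```python
from fractions import Fraction as F

# Illustrative two-state example (all numbers are assumptions for illustration):
# omega in {innocent, guilty}, prior on guilty = 3/10. Receiver (a judge) wants
# to match the state; Sender (a prosecutor) wants conviction regardless of it.
mu0 = F(3, 10)
states = ["innocent", "guilty"]
actions = ["acquit", "convict"]

def u(a, w):  # Receiver's utility: 1 iff the action matches the state
    return 1 if (a == "convict") == (w == "guilty") else 0

def v(a, w):  # Sender's utility: depends only on Receiver's action here
    return 1 if a == "convict" else 0

# An honest mechanism (M = S, truthful messages cost k = 0) built on the signal
# pi("g" | guilty) = 1 and pi("g" | innocent) = 3/7.
pi = {"guilty": {"g": F(1), "i": F(0)}, "innocent": {"g": F(3, 7), "i": F(4, 7)}}

def posterior(m):  # Receiver's posterior that omega = guilty after message m
    num = mu0 * pi["guilty"][m]
    return num / (num + (1 - mu0) * pi["innocent"][m])

def best_response(mu):
    # Ties are broken in Sender's favor, as under Sender-preferred equilibria.
    eu = {a: mu * u(a, "guilty") + (1 - mu) * u(a, "innocent") for a in actions}
    top = max(eu.values())
    return max((a for a in actions if eu[a] == top), key=lambda a: v(a, "guilty"))

# Sender's value: the equilibrium expectation of v(a, omega); c is zero here.
value = sum((mu0 if w == "guilty" else 1 - mu0) * pi[w][m]
            * v(best_response(posterior(m)), w)
            for w in states for m in ["g", "i"])
print(value)  # 3/5 -- versus 3/10 under full disclosure and 0 with no information
```

With this signal, Receiver's posterior after "g" is exactly 1/2, so Sender-preferred tie-breaking yields conviction with total probability 3/5.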

A few examples help clarify the variety of games captured by the definition of a persuasion mechanism. If $\pi$ is perfectly informative, $M = \mathbb{R}_+$, and $c(m \mid s) = m/s$, the mechanism is Spence's (1973) education signalling game. If $\pi$ is perfectly informative and $c$ is constant, the mechanism is a cheap talk game as in Crawford and Sobel (1982). If $\pi$ is arbitrary and $c$ is constant, the mechanism coincides with the information-transmission game of Green and Stokey (2007). If $\pi$ is perfectly informative and $c(m \mid s) = (m - s)^2$, the mechanism is a communication game with lying costs as developed in Kartik (2009). If $\pi$ is perfectly informative, $M = \mathcal{P}(\Omega)$, and

$$c(m \mid s) = \begin{cases} 0 & \text{if } s \in m \\ \infty & \text{if } s \notin m, \end{cases}$$

the mechanism is a persuasion game as in Grossman (1981) and Milgrom (1981).[2]

A perfect Bayesian equilibrium of a persuasion mechanism is a triplet $(\sigma, \rho, \mu)$ satisfying the usual conditions. We also apply an additional equilibrium selection criterion: we focus on Sender-preferred equilibria, i.e., equilibria where the expectation of $v(a, \omega) - c(m \mid s)$ is greatest. For the remainder of this appendix, we use the term equilibrium to mean a Sender-preferred perfect Bayesian equilibrium of a persuasion mechanism. We define the value of a mechanism to be the equilibrium expectation of $v(a, \omega) - c(m \mid s)$. The gain from a mechanism is the difference between its value and the equilibrium expectation of $v(a, \omega)$ when Receiver obtains no information. Sender benefits from persuasion if there is a mechanism with a strictly positive gain. A mechanism is optimal if no other mechanism has a higher value.

The game we study in the paper is closely related to persuasion mechanisms where $M = S$ and

$$c(m \mid s) = \begin{cases} k & \text{if } s = m \\ \infty & \text{if } s \neq m \end{cases}$$

for some $k \in \mathbb{R}_+$. We call such mechanisms honest. Specifically, the game we study in the paper is one where Sender chooses among the set of all honest mechanisms. We will refer to mechanisms where $M = \mathcal{P}(S)$ and

$$c(m \mid s) = \begin{cases} 0 & \text{if } s \in m \\ \infty & \text{if } s \notin m \end{cases}$$

as verifiable.
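The cost technologies in these examples are easy to write down explicitly. A minimal sketch follows; the function names and realization labels are ours, purely for illustration, with infinite cost standing in for messages Sender cannot send.

```python
import math

def spence_cost(m, s):
    # Spence (1973): message m (education) is cheaper for higher types s.
    return m / s

def kartik_cost(m, s):
    # Kartik (2009): quadratic cost of lying, zero when reporting truthfully.
    return (m - s) ** 2

def honest_cost(m, s, k=0.0):
    # Honest mechanism: M = S; a truthful report costs k, lying is impossible.
    return k if m == s else math.inf

def verifiable_cost(m, s):
    # Verifiable mechanism: M = P(S); a message is a set that must contain s.
    return 0.0 if s in m else math.inf

print(honest_cost("s1", "s1"))                         # 0.0
print(honest_cost("s2", "s1"))                         # inf
print(verifiable_cost(frozenset({"s1", "s2"}), "s1"))  # 0.0
```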
With these definitions in hand, we can formally state and prove the observations from Subsection 2.1.

Proposition 1 The following are equivalent:
1. There exists a persuasion mechanism with value $v^*$;
2. There exists an honest mechanism with value $v^*$;
3. There exists a verifiable mechanism with value $v^*$.

Proof. It is immediate that (2) implies (1) and that (3) implies (1). We first show that (1) implies (2). Say that a mechanism is straightforward if it is honest, $S \subseteq A$, and Receiver's equilibrium action equals the message. It will suffice to show that given a mechanism with value $v^*$ there exists a straightforward mechanism with value $v^*$. To establish that, we closely follow the proof of Proposition 1 in the paper, taking care to account for any messaging costs $c$. Consider an equilibrium $\varepsilon^o = (\sigma^o, \rho^o, \mu^o)$ of some mechanism $(\pi^o, c^o)$ with value $v^*$. For any $a$, let $M_a$ be the set of messages which induce $a$ in this equilibrium: $M_a = \{m \mid \hat{a}(\mu^o_m) = a\}$, where $\mu^o_m$ denotes Receiver's equilibrium belief upon observing $m$. Now, consider a straightforward mechanism with $S = A$ and

$$\pi(a \mid \omega) = \sum_{m \in M_a} \sum_{s \in S} \sigma^o(m \mid s)\, \pi^o(s \mid \omega).$$

Since $a$ was an optimal response to each $m \in M_a$ in $\varepsilon^o$, it must also be an optimal response to the message $a$ in the proposed straightforward mechanism. Hence, the distribution of Receiver's actions

[2] $\mathcal{P}(X)$ denotes the set of all subsets of $X$.

conditional on the state is the same as in $\varepsilon^o$. Therefore, if we set $k$ equal to the expected messaging costs in $\varepsilon^o$, i.e.,

$$k = \sum_{m} \sum_{s} \sum_{\omega} c^o(m \mid s)\, \sigma^o(m \mid s)\, \pi^o(s \mid \omega)\, \mu_0(\omega),$$

the value of the straightforward mechanism is exactly $v^*$. Hence, (1) implies (2).

We next show that (2) implies (3). It will suffice to show that if there exists an optimal honest mechanism with value $v^*$, then there is a verifiable mechanism with value $v^*$; to generate the same value as a suboptimal honest mechanism one can choose an appropriate $k$. Suppose that an optimal honest mechanism with signal $\pi$ has value $v^*$. Let $S$ be the set of signal realizations that occur with positive probability under $\pi$. For any $s \in S$, let $\mu_s$ denote the posterior induced by realization of $s$ from $\pi$. Let $M$ be the set of messages Sender conveys with positive probability. For any $m \in M$, let $\mu_m$ be Receiver's equilibrium belief upon observing $m$ and let $S_m = \{s \mid \sigma(m \mid s) > 0\}$. It will suffice to show that for every $m \in M$ we must have $\hat{v}(\mu_m) = \hat{v}(\mu_s)$ for all $s \in S_m$. Suppose to the contrary that there is an $m \in M$ and $s' \in S_m$ s.t. $\hat{v}(\mu_m) \neq \hat{v}(\mu_{s'})$. Since (i) $\mu_m$ is a convex combination of $\{\mu_s \mid s \in S_m\}$, (ii) for all $s \in S$ we have $V(\mu_s) = \hat{v}(\mu_s)$, and (iii) $V$ is concave, this means there is an $s'' \in S_m$ (possibly equal to $s'$) s.t. $\hat{v}(\mu_m) < \hat{v}(\mu_{s''})$. But this means that Sender has a profitable deviation, which is to always send the message $\{s''\}$ when $s''$ is realized from $\pi$. Hence, we reach a contradiction.

The fact that (1) implies (2) means that Sender's gain from persuasion is weakly greater in the game that we study than in any other persuasion mechanism. The fact that (2) implies (3) means that it is not important to assume that Receiver observes the realization $s$ from $\pi$.

2 Relaxing the assumption that Ω is finite

In the paper, we assumed that Ω is finite. We also claimed this assumption was made primarily for expositional convenience. In this section, we show that the approach used in the paper extends to the case where Ω is a compact metric space.[3]
As before, Receiver has a continuous utility function $u(a, \omega)$ that depends on her action $a \in A$ and the state of the world $\omega \in \Omega$. Sender has a continuous utility function $v(a, \omega)$ that depends on Receiver's action and the state of the world. The action space $A$ is assumed to be compact and the state space $\Omega$ is assumed to be a compact metric space. Let $\Delta(\Omega)$ denote the set of Borel probabilities on $\Omega$, a compact metric space in the weak* topology. Sender and Receiver share a prior $\mu_0 \in \Delta(\Omega)$.

A persuasion mechanism is a combination of a signal and a message technology. A signal $(\pi, S)$ consists of a compact metric realization space $S$ and a measurable function $\pi : [0,1] \to \Omega \times S$, $x \mapsto (\pi_1(x), \pi_2(x))$. Note that we define $\pi$ to be a measurable function whose second component (i.e., the signal realization) is correlated with $\omega$. We assume that $x$ is uniformly distributed on $[0,1]$ and that Sender observes $\pi_2(x)$. We denote a realization of $\pi_2(x)$ by $s$. Note that since $S$ is a compact metric space (hence, complete and separable), there exists a regular conditional probability (i.e., a posterior probability) obtained by conditioning on $\pi_2(x) = s$ (Shiryaev 1996, p. 230). A message technology $c$ consists of a message space $M$ and a family of functions $\{c(\cdot \mid s) : M \to \overline{\mathbb{R}}_+\}_{s \in S}$. As before, a mechanism is honest if $M = S$ and

$$c(m \mid s) = \begin{cases} k & \text{if } s = m \\ \infty & \text{if } s \neq m \end{cases}$$

for some $k \in \mathbb{R}_+$.

A persuasion mechanism defines a game just as before. Perfect Bayesian equilibrium is still the solution concept and we still select Sender-preferred equilibria. The definitions of value and gain are the same as before. Let $a^*(\mu)$ denote the set of actions optimal for Receiver given that her belief is $\mu \in \Delta(\Omega)$:

$$a^*(\mu) \equiv \arg\max_{a \in A} \int u(a, \omega)\, d\mu(\omega).$$

[3] We are very grateful to Max Stinchcombe for help with this extension.
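As a hedged illustration of this $[0,1]$-representation, the sketch below encodes a hypothetical binary-state signal in a single uniform draw $x$ (the thresholds, prior 0.3, and labels are assumed numbers, not from the formal construction) and recovers the conditional probability by simulation.

```python
import random

# Sketch: one uniform draw x determines both the state pi_1(x) and the
# realization pi_2(x); posteriors arise from conditioning on pi_2(x) = s.
def pi1(x):  # state component: prior 0.3 on "guilty" (assumed number)
    return "guilty" if x < 0.3 else "innocent"

def pi2(x):  # realization: "g" always if guilty, with prob 3/7 if innocent
    return "g" if x < 0.3 + 0.7 * (3 / 7) else "i"

random.seed(0)
draws = [(pi1(x), pi2(x)) for x in (random.random() for _ in range(100000))]
n_g = sum(1 for _, s in draws if s == "g")
post_g = sum(1 for w, s in draws if (w, s) == ("guilty", "g")) / n_g
print(post_g)  # close to the regular conditional probability 1/2
```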

Note that $a^*(\cdot)$ is an upper hemicontinuous, non-empty-valued, compact-valued correspondence from $\Delta(\Omega)$ to $A$. Let $\hat{v}(\mu)$ denote the maximum expected value of $v$ if Receiver takes an action in $a^*(\mu)$:

$$\hat{v}(\mu) \equiv \max_{a \in a^*(\mu)} \int v(a, \omega)\, d\mu(\omega).$$

Since $a^*(\mu)$ is non-empty and compact and $\int v(a, \omega)\, d\mu(\omega)$ is continuous in $a$, $\hat{v}$ is well defined. We first show that the main ingredient for the existence of an optimal mechanism, namely the upper semicontinuity of $\hat{v}$, remains true in this setting.

Lemma 1 $\hat{v}$ is upper semicontinuous.

Proof. Given any $a$, the random variable $v(a, \omega)$ is dominated by the constant random variable $\max_\omega v(a, \omega)$ (since $v$ is continuous in $\omega$ and $\Omega$ is compact, the maximum is attained). Hence, by Lebesgue's Dominated Convergence Theorem, $\int v(a, \omega)\, d\mu(\omega)$ is continuous in $\mu$ for any given $a$. Now, suppose that $\hat{v}$ is discontinuous at some $\mu$. Since $u$ is continuous, by Berge's Maximum Theorem this means that Receiver must be indifferent between a set of actions at $\mu$, i.e., $a^*(\mu)$ is not a singleton. By definition, however, $\hat{v}(\mu) = \max_{a \in a^*(\mu)} \int v(a, \omega)\, d\mu(\omega)$, so at any such point $\hat{v}$ attains the largest of the relevant limiting values. Hence, $\hat{v}$ is upper semicontinuous.

Now, a distribution of posteriors, denoted by $\tau$, is an element of the set $\Delta(\Delta(\Omega))$, the set of Borel probabilities on the compact metric space $\Delta(\Omega)$. We say a distribution of posteriors $\tau$ is Bayes-plausible if $\int \mu\, d\tau(\mu) = \mu_0$. We say that $\pi$ induces $\tau$ if conditioning on $\pi_2(x) = s$ gives posterior $\kappa_s$ and the distribution of $\kappa_{\pi_2(x)}$ is $\tau$ given that $x$ is uniformly distributed. Since $\Omega$ is a compact metric space, for any Bayes-plausible $\tau$ there exists a $\pi$ that induces it.[4] Hence, the problem of finding an optimal mechanism is equivalent to solving

$$\max_{\tau \in \Delta(\Delta(\Omega))} \int \hat{v}(\mu)\, d\tau(\mu) \quad \text{s.t.} \quad \int \mu\, d\tau(\mu) = \mu_0.$$

Now, let

$$V(\mu) \equiv \sup \{z \mid (\mu, z) \in \operatorname{co}(\operatorname{hyp}(\hat{v}))\},$$

where $\operatorname{co}(\cdot)$ denotes the convex hull and $\operatorname{hyp}(\cdot)$ denotes the hypograph. Recall that given a subset $K$ of an arbitrary vector space, $\operatorname{co}(K)$ is defined as $\bigcap \{C \mid K \subseteq C,\ C \text{ convex}\}$.
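For intuition, this maximization can be carried out numerically in a simple case. The sketch below uses an assumed two-state example, where $\hat v(\mu) = 1$ iff the posterior $\mu$ on one state is at least $1/2$, and approximates the concave closure by searching over pairs of posteriors, which suffices when there are only two states.

```python
import numpy as np

# v_hat for an assumed two-state illustration: Sender's payoff is 1 iff
# Receiver's posterior mu reaches 1/2 (Sender-preferred tie-breaking).
def v_hat(mu):
    return 1.0 if mu >= 0.5 else 0.0

grid = np.linspace(0.0, 1.0, 201)

def concave_closure(mu0):
    # V(mu0) = sup { z : (mu0, z) in co(hyp(v_hat)) }. With two states it is
    # enough to split mu0 between two posteriors lo <= mu0 <= hi.
    best = v_hat(mu0)
    for lo in grid[grid <= mu0]:
        for hi in grid[grid >= mu0]:
            if hi > lo:
                w = (mu0 - lo) / (hi - lo)  # Bayes-plausibility pins the weight
                best = max(best, (1 - w) * v_hat(lo) + w * v_hat(hi))
    return best

for mu in (0.2, 0.3, 0.7):
    print(mu, concave_closure(mu))  # matches V(mu) = min(2 * mu, 1) here
```

On this example the search recovers $V(\mu) = \min(2\mu, 1)$: the concave closure replaces the step in $\hat v$ with the chord from $(0, 0)$ to $(1/2, 1)$.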
Let $g(\mu_0)$ denote the subset of $\Delta(\Delta(\Omega))$ that generates the point $(\mu_0, V(\mu_0))$, i.e.,

$$g(\mu_0) \equiv \left\{ \tau \in \Delta(\Delta(\Omega)) \,\middle|\, \int \mu\, d\tau(\mu) = \mu_0,\ \int \hat{v}(\mu)\, d\tau(\mu) = V(\mu_0) \right\}.$$

Note that we still have not established that $g(\mu_0)$ is non-empty. That is the primary task of the proof of our main proposition.

Proposition 2 An optimal mechanism exists. The value of an optimal mechanism is $V(\mu_0)$. Sender benefits from persuasion iff $V(\mu_0) > \hat{v}(\mu_0)$. An honest mechanism with a signal that induces an element of $g(\mu_0)$ is optimal.

[4] Personal communication with Max Stinchcombe. Detailed proof available upon request.

Proof. By construction of $V$, there can be no mechanism with value strictly greater than $V(\mu_0)$. We need to show there exists a mechanism with value equal to $V(\mu_0)$, or equivalently, that $g(\mu_0)$ is not empty. Without loss of generality, suppose the range of $v$ is $[0, 1]$. Consider the set $H = \{(\mu, z) \in \operatorname{hyp}(V) \mid z \geq 0\}$. Since $\hat{v}$ is upper semicontinuous, $H$ is compact. By construction of $V$, $H$ is convex. Therefore, by Choquet's Theorem (e.g., Phelps 2001), for any $(\mu^*, z^*) \in H$ there exists a probability measure $\eta$ s.t. $(\mu^*, z^*) = \int_H (\mu, z)\, d\eta(\mu, z)$ with $\eta$ supported by extreme points of $H$. In particular, there exists $\eta$ s.t. $(\mu_0, V(\mu_0)) = \int_H (\mu, z)\, d\eta(\mu, z)$ with $\eta$ supported by extreme points of $H$. Now, note that if $(\mu, z)$ is an extreme point of $H$, then $V(\mu) = \hat{v}(\mu)$; moreover, if $z > 0$, then $z = V(\mu) = \hat{v}(\mu)$. Hence, we can find an $\eta$ s.t. $(\mu_0, V(\mu_0)) = \int_H (\mu, z)\, d\eta(\mu, z)$ with the support of $\eta$ entirely within $\{(\mu, \hat{v}(\mu)) \mid \mu \in \Delta(\Omega)\}$. Therefore, there exists a $\tau \in g(\mu_0)$.

3 Bound on the size of S

In footnote 7, we claimed that $|S|$ need not exceed $\min\{|A|, |\Omega|\}$. Proposition 1 implies that there exists an optimal signal with $|S| \leq |A|$. Hence, we only need to establish the following:

Proposition 3 There exists an optimal mechanism with $|S| \leq |\Omega|$.

Proof. Since $\hat{v}$ is bounded, $\operatorname{hyp}(\hat{v})$ is path-connected. Therefore, it is connected. The Fenchel–Bunt Theorem (Hiriart-Urruty and Lemaréchal 2004, Thm 1.3.7) states that if $S \subseteq \mathbb{R}^n$ has no more than $n$ connected components (in particular, if $S$ is connected), then any $x \in \operatorname{co}(S)$ can be expressed as a convex combination of $n$ elements of $S$. Hence, since $\operatorname{hyp}(\hat{v}) \subseteq \mathbb{R}^{|\Omega|}$, any element of $\operatorname{co}(\operatorname{hyp}(\hat{v}))$ can be expressed as a convex combination of $|\Omega|$ elements of $\operatorname{hyp}(\hat{v})$. In particular, $(\mu_0, V(\mu_0)) \in \operatorname{co}(\operatorname{hyp}(\hat{v}))$ can be expressed as a convex combination of $|\Omega|$ elements of $\operatorname{hyp}(\hat{v})$. This further implies that $(\mu_0, V(\mu_0))$ can be expressed as a convex combination of $|\Omega|$ elements of the graph of $\hat{v}$.
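As a concrete check of this bound, consider an assumed two-state example in which $\hat v(\mu) = 1$ iff the posterior $\mu$ is at least $1/2$ and the prior is $3/10$ (numbers chosen for illustration). There, $(\mu_0, V(\mu_0))$ is generated by exactly $|\Omega| = 2$ points on the graph of $\hat v$:

```python
from fractions import Fraction as F

# Explicit graph-of-v_hat decomposition for an assumed two-state example:
# (mu_0, V(mu_0)) = (2/5) * (0, v_hat(0)) + (3/5) * (1/2, v_hat(1/2)),
# i.e. two posteriors -- no more than |Omega| = 2 -- carry the optimum.
v_hat = lambda mu: 1 if mu >= F(1, 2) else 0
posteriors = [F(0), F(1, 2)]
weights = [F(2, 5), F(3, 5)]

mu0 = sum(w * mu for w, mu in zip(weights, posteriors))         # Bayes-plausible mean
val = sum(w * v_hat(mu) for w, mu in zip(weights, posteriors))  # attained value
print(mu0, val)  # 3/10 3/5
```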
Hence, there exists an optimal straightforward mechanism which induces a distribution of posteriors whose support has no more than $|\Omega|$ elements.

4 Monotonicity of $\hat{v}$ and optimality of worst beliefs

In Section 5.1, we observed that when Sender's payoffs are monotonic in Receiver's beliefs, there is a sense in which Sender always induces the worst belief consistent with a given action. Here we formalize that claim. Say that $\hat{v}$ is monotonic if for any $\mu, \mu'$, $\hat{v}(\alpha\mu + (1-\alpha)\mu')$ is monotonic in $\alpha$. When $\hat{v}$ is monotonic in $\mu$, it is meaningful to think about beliefs that are better or worse from Sender's perspective. The simplest definition would be that $\mu$ is worse than $\mu'$ if $\hat{v}(\mu) \leq \hat{v}(\mu')$. Note, however, that because $v(a, \omega)$ depends on $\omega$ directly, whether $\mu$ is worse in this sense depends both on how Receiver's action changes at $\mu$ and on how $\mu$ affects Sender's expected utility directly. It turns out that for our result we need a definition of worse that isolates the way beliefs affect Receiver's actions. When $\hat{v}$ is monotonic, there is a rational relation $\succeq$ on $A$ defined by $a \succeq a'$ if $\hat{v}(\mu) \geq \hat{v}(\mu')$ whenever $a = \hat{a}(\mu)$ and $a' = \hat{a}(\mu')$. This relation on $A$ implies a partial order on $\Delta(\Omega)$: say that $\mu \succ \mu'$ if

$$\mathbb{E}_\mu u(a, \omega) - \mathbb{E}_\mu u(a', \omega) > \mathbb{E}_{\mu'} u(a, \omega) - \mathbb{E}_{\mu'} u(a', \omega)$$

for any $a \succ a'$. In other words, a belief is higher in this partial order if it makes better actions (from Sender's perspective) more desirable for Receiver. The order is partial since a belief might

make both a better and a worse action more desirable for Receiver. We say that $\mu$ is a worst belief inducing $\hat{a}(\mu)$ if there is no $\mu' \prec \mu$ s.t. $\hat{a}(\mu) = \hat{a}(\mu')$. We then have the following:

Proposition 4 Suppose Assumption 1 holds. If $\hat{v}$ is monotonic, $A$ is finite, and Sender benefits from persuasion, then for any interior belief $\mu$ induced by an optimal mechanism either: (i) $\mu$ is a worst belief inducing $\hat{a}(\mu)$, or (ii) both Sender and Receiver are indifferent between two actions at $\mu$.

Proof. Suppose Assumption 1 holds, $\hat{v}$ is monotonic, $A$ is finite, and Sender benefits from persuasion. Now, suppose $\mu$ is an interior belief induced by an optimal mechanism. Since Assumption 1 holds and Sender benefits from persuasion, Receiver's preference at $\mu$ is not discrete by Proposition 6. Therefore, Lemma 1 tells us there is an $a \neq \hat{a}(\mu)$ such that $\mathbb{E}_\mu u(\hat{a}(\mu), \omega) = \mathbb{E}_\mu u(a, \omega)$. If (ii) does not hold, we know $\mathbb{E}_\mu v(\hat{a}(\mu), \omega) > \mathbb{E}_\mu v(a, \omega)$. Therefore, $\hat{a}(\mu) \succ a$. Hence, given any $\mu' \prec \mu$,

$$0 = \mathbb{E}_\mu u(\hat{a}(\mu), \omega) - \mathbb{E}_\mu u(a, \omega) > \mathbb{E}_{\mu'} u(\hat{a}(\mu), \omega) - \mathbb{E}_{\mu'} u(a, \omega).$$

Since $\mathbb{E}_{\mu'} u(a, \omega) > \mathbb{E}_{\mu'} u(\hat{a}(\mu), \omega)$, we know that $\hat{a}(\mu)$ is not Receiver's optimal action when her beliefs are $\mu'$. Hence, for any $\mu' \prec \mu$, $\hat{a}(\mu') \neq \hat{a}(\mu)$, which means that $\mu$ is a worst belief inducing $\hat{a}(\mu)$.
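To illustrate with an assumed two-state example (Receiver convicts iff her posterior $\mu$ is at least $1/2$, and Sender prefers conviction; numbers are ours), the set of beliefs inducing the Sender-preferred action is $[1/2, 1]$, and its worst element leaves Receiver exactly indifferent:

```python
import numpy as np

# In this assumed example v_hat is monotone and "convict" beats "acquit" for
# Sender. "convict" is Receiver-optimal exactly on mu >= 1/2, and the worst
# such belief -- leaving Receiver just indifferent -- is 1/2, precisely the
# posterior an optimal signal attaches to its "convict" message.
def convict_margin(mu):  # E_mu u(convict) - E_mu u(acquit) = 2 * mu - 1
    return 2 * mu - 1

beliefs = np.linspace(0, 1, 101)
inducing_convict = [mu for mu in beliefs if convict_margin(mu) >= 0]
worst = min(inducing_convict, key=convict_margin)
print(worst)  # 0.5
```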

References

Crawford, Vincent, & Sobel, Joel. 1982. Strategic information transmission. Econometrica, 50(6), 1431–1451.

Green, Jerry R., & Stokey, Nancy L. 2007. A two-person game of information transmission. Journal of Economic Theory, 135(1), 90–104.

Grossman, Sanford J. 1981. The informational role of warranties and private disclosure about product quality. Journal of Law and Economics, 24(3), 461–483.

Hiriart-Urruty, Jean-Baptiste, & Lemaréchal, Claude. 2004. Fundamentals of Convex Analysis. Springer.

Kartik, Navin. 2009. Strategic communication with lying costs. Review of Economic Studies, 76(4), 1359–1395.

Milgrom, Paul. 1981. Good news and bad news: Representation theorems and applications. The Bell Journal of Economics, 12(2), 380–391.

Phelps, Robert R. 2001. Lectures on Choquet's Theorem. Springer.

Shiryaev, A. N. 1996. Probability. Springer.

Spence, A. Michael. 1973. Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.