ONLINE APPENDICES. Paternalism, Libertarianism, and the Nature of Disagreement Uliana Loginova and Petra Persson


B Appendix: Omitted Proofs

Proof of Proposition 2. We consider each type of disagreement in turn.

Preference disagreement. The proof consists of two steps, each described by a corresponding lemma.

Lemma 1. For any cost of coercion $q \geq 0$ and preference distribution $f(b)$ there exists a threshold $\varphi_{TT}(q, \bar b, \overline{b^2})$, s.t. a FRE exists if and only if $\varphi \geq \varphi_{TT}(q, \bar b, \overline{b^2})$.

Proof. Assume that the authority gets the signal $s$. If the authority coerces, she chooses the action $a$ that maximizes her expected utility, given by
\[
E(U_A(a, \theta) \mid s) = -p(s) \int_{-\infty}^{+\infty} \left[ (a-1)^2 + \varphi (a - 1 - b_i)^2 \right] f(b_i)\, db_i
- (1 - p(s)) \int_{-\infty}^{+\infty} \left[ a^2 + \varphi (a - b_i)^2 \right] f(b_i)\, db_i - q.
\]
Thus, the authority optimally imposes the action $a_A(s) = p(s) + \frac{\varphi}{1+\varphi} \bar b$. Her expected utility from coercion is
\begin{align*}
E(U_A(a_A(s), \theta) \mid s)
&= -p(s) \left[ (1+\varphi)(a_A(s) - 1)^2 - 2\varphi \bar b (a_A(s) - 1) + \varphi \overline{b^2} \right] \\
&\quad - (1 - p(s)) \left[ (1+\varphi)(a_A(s))^2 - 2\varphi \bar b\, a_A(s) + \varphi \overline{b^2} \right] - q \\
&= -(1+\varphi) E\left[ (a_A(s) - \theta)^2 \mid s \right] + 2\varphi \bar b \left[ a_A(s) - p(s) \right] - \varphi \overline{b^2} - q \\
&= -(1+\varphi) \left[ \left( \tfrac{\varphi}{1+\varphi} \bar b \right)^2 + p(s)(1 - p(s)) \right] + 2 \tfrac{\varphi^2}{1+\varphi} \bar b^2 - \varphi \overline{b^2} - q \\
&= \tfrac{\varphi^2}{1+\varphi} \bar b^2 - (1+\varphi) p(s)(1 - p(s)) - \varphi \overline{b^2} - q.
\end{align*}
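The closed form for the coerced action above can be sanity-checked numerically. The sketch below is our own illustration (not the authors' code): it compares $a_A(s) = p(s) + \frac{\varphi}{1+\varphi}\bar b$ against a brute-force grid search for a made-up two-point bias distribution; all parameter values are arbitrary.

```python
# Illustrative check (our own sketch): the coerced action maximizing the
# authority's expected utility should equal a_A(s) = p(s) + phi/(1+phi)*b_bar.
import numpy as np

def expected_coercion_utility(a, p, phi, biases, weights, q):
    """E[U_A(a, theta) | s] when the authority imposes action a at cost q."""
    loss1 = (a - 1) ** 2 + phi * np.dot(weights, (a - 1 - biases) ** 2)
    loss0 = a ** 2 + phi * np.dot(weights, (a - biases) ** 2)
    return -(p * loss1 + (1 - p) * loss0) - q

p, phi, q = 0.7, 1.5, 0.1
biases = np.array([0.1, 0.5])      # two equally likely bias types (made up)
weights = np.array([0.5, 0.5])
b_bar = float(np.dot(weights, biases))

closed_form = p + phi / (1 + phi) * b_bar
grid = np.linspace(0.0, 2.0, 20001)
utils = np.array([expected_coercion_utility(a, p, phi, biases, weights, q)
                  for a in grid])
brute_force = float(grid[utils.argmax()])

assert abs(closed_form - brute_force) < 1e-3
```

The check only confirms the first-order condition numerically; the cost $q$ shifts the utility level but not the optimizer.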

If instead the authority communicates some message $m$ to the population, then the population's posterior becomes $\hat p = p(m)$ (by symmetry, in any equilibrium all individuals hold the same posteriors after observing the authority's message). Hence, each individual $i$ picks his preferred action $a_i(m) = \hat p + b_i$, which generates the following expected payoff to the authority:
\begin{align*}
E(U_A(a(m), \theta) \mid s)
&= -p(s) \left[ \int_{-\infty}^{+\infty} (\hat p + b_i - 1)^2 f(b_i)\, db_i + \varphi (\hat p - 1)^2 \right] \\
&\quad - (1 - p(s)) \left[ \int_{-\infty}^{+\infty} (\hat p + b_i)^2 f(b_i)\, db_i + \varphi \hat p^2 \right] \\
&= -p(s) \left[ (1+\varphi)(\hat p - 1)^2 + 2\bar b (\hat p - 1) + \overline{b^2} \right]
- (1 - p(s)) \left[ (1+\varphi)\hat p^2 + 2\bar b \hat p + \overline{b^2} \right] \\
&= -(1+\varphi)\left( \hat p^2 - 2\hat p\, p(s) + p(s) \right) - 2\bar b (\hat p - p(s)) - \overline{b^2} \\
&= -(1+\varphi)(\hat p - p(s))^2 - (1+\varphi) p(s)(1 - p(s)) - 2\bar b (\hat p - p(s)) - \overline{b^2}.
\end{align*}
Communication generates a greater expected payoff to the authority than coercion if
\[
\frac{\varphi^2}{1+\varphi} \bar b^2 - \varphi \overline{b^2} - q \leq -(1+\varphi)(\hat p - p(s))^2 - 2\bar b (\hat p - p(s)) - \overline{b^2}. \tag{B.1}
\]
In the particular case of truthful advising this reduces to
\[
\frac{\varphi^2}{1+\varphi} \bar b^2 - \varphi \overline{b^2} + \overline{b^2} - q \leq 0,
\qquad \text{i.e.,} \qquad
-\varphi^2 \left( \overline{b^2} - \bar b^2 \right) - \varphi q + \overline{b^2} - q \leq 0. \tag{B.2}
\]
The parabola on the LHS is concave and achieves its maximum at some $\varphi \leq 0$. This implies that the LHS decreases in $\varphi$ for $\varphi \geq 0$. In particular, the authority prefers to act as a truth-telling advisor if and only if the level of altruism is sufficiently large: $\varphi \geq \varphi_{TT}(q, \bar b, \overline{b^2})$. The threshold satisfies $\varphi_{TT}(q, \bar b, \overline{b^2}) \leq 1$ if the average bias is sufficiently small: $\bar b^2 - 2q \leq 0$. If coercion is costless, $q = 0$, then truthful advising is preferred for sufficiently large $\varphi$ for any non-degenerate distribution ($\overline{b^2} \neq \bar b^2$), i.e., $\varphi_{TT}(0, \bar b, \overline{b^2}) < +\infty$; while coercion is always preferred for any degenerate distribution ($\overline{b^2} = \bar b^2$), i.e., $\varphi_{TT}(0, \bar b, \overline{b^2}) = +\infty$. Clearly, $\varphi_{TT}(q, \bar b, \overline{b^2})$ strictly exceeds 0 and decreases with $q$ for $q < \overline{b^2}$, and $\varphi_{TT}(q, \bar b, \overline{b^2}) = 0$ when $q \geq \overline{b^2}$.
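Condition (B.2) pins down $\varphi_{TT}$ as the positive root of a concave quadratic. The small numerical sketch below is our own illustration (arbitrary parameter values); it computes this root and checks the comparative statics just stated.

```python
# phi_TT solves  -(Eb2 - b_bar^2)*phi^2 - q*phi + Eb2 - q = 0  (condition B.2)
# at its positive root, and equals 0 once q >= Eb2.
import math

def phi_TT(q, b_bar, Eb2):
    var = Eb2 - b_bar ** 2          # variance of f(b); > 0 if non-degenerate
    if q >= Eb2:
        return 0.0                  # coercion is never worth its cost
    disc = q ** 2 + 4 * var * (Eb2 - q)
    return (-q + math.sqrt(disc)) / (2 * var)

b_bar, Eb2 = 0.2, 0.1               # Eb2 > b_bar^2 = 0.04: non-degenerate f

roots = [phi_TT(q, b_bar, Eb2) for q in (0.01, 0.05, 0.09)]
assert roots[0] > roots[1] > roots[2] > 0        # threshold decreasing in q
assert phi_TT(0.15, b_bar, Eb2) == 0.0           # q >= Eb2 kills coercion

# the root indeed satisfies (B.2) with equality
q = 0.05
var = Eb2 - b_bar ** 2
phi = phi_TT(q, b_bar, Eb2)
assert abs(-var * phi ** 2 - q * phi + Eb2 - q) < 1e-12
```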

Lemma 2. For every $f(b)$ and $q < q_C(\bar b, \overline{b^2})$ there exists a threshold $\varphi_C(q, \bar b, \overline{b^2}) > 0$, s.t. for any $\varphi < \varphi_C(q, \bar b, \overline{b^2})$ every equilibrium outcome involves the authority imposing her preferred action with non-zero probability. Moreover, for every $\bar b$ and $q < q_{CC}(\bar b, \overline{b^2})$ there exists a threshold $\varphi_{CC}(q, \bar b, \overline{b^2}) > 0$, s.t. for any $\varphi < \varphi_{CC}(q, \bar b, \overline{b^2})$ there is a unique equilibrium, in which the authority coerces the individual for each signal $s \in \{0, 1\}$.

Proof. As regards the first part of the Lemma, we show that sufficiently low cost $q$ and altruism $\varphi$ preclude the existence of communication equilibria. We study two different types of communication equilibria: (i) only one posterior belief $\hat p$ is induced, and (ii) two different posterior beliefs $\hat p_0 < \hat p_1$ are induced.

In case (i), the authority's communication yields the same posterior belief in the population for both signals. Thus, no informative communication takes place and the posterior belief necessarily coincides with the prior, $\hat p = \pi$. This means that each individual $i$ picks an action $a_i = \pi + b_i$. Such a communication equilibrium fails to exist when, after some signal $s$, the authority would rather exercise her power to choose her preferred action, i.e., when the inequality (B.1) holds with the opposite sign:
\[
\frac{\varphi^2}{1+\varphi} \bar b^2 - \varphi \overline{b^2} - q > -(1+\varphi)(\pi - p(s))^2 - 2\bar b (\pi - p(s)) - \overline{b^2}.
\]
Assume that the authority does not incorporate the individual's payoff into her utility: $\varphi = 0$. Then, for a sufficiently low cost $q < \max_s \left[ (\pi - p(s))^2 + 2\bar b (\pi - p(s)) + \overline{b^2} \right]$, the authority coerces the individual for at least one signal $s$.

Now consider case (ii), in which the induced posterior beliefs of the population are different for different signals, $\hat p_0 < \hat p_1$. The authority cannot be indifferent between these beliefs after both signal realizations. Thus, for these actions to be induced in equilibrium, it must be the case that the authority mixes between messages for no more than one signal. This implies that $\hat p_s = p(s)$ for at least one signal $s$. Hence, a non-altruistic authority who obtained the signal $s$ will choose to impose her preferred action whenever the inequality (B.2) breaks down, i.e., when $q < \overline{b^2}$. Clearly, she will still do so for sufficiently low levels of altruism.

Together, cases (i) and (ii) imply that for a sufficiently low coercion cost, $q < q_C(\bar b, \overline{b^2})$, and altruism level, $\varphi < \varphi_C(q, \bar b, \overline{b^2})$, each equilibrium involves coercion with a non-zero probability. Clearly, $\varphi_C(q, \bar b, \overline{b^2})$ decreases with $q$ for $q < q_C(\bar b, \overline{b^2})$.

As regards the second part of the Lemma, we argue that for sufficiently low $q$ and $\varphi$ no communication is sustainable in equilibrium. Below we consider the two possibilities of how communication can arise.

First, assume that the authority communicates with positive probability for both signal realizations $s = 0, 1$. If only one population posterior $\hat p \in [p(0), p(1)]$ is induced in equilibrium, then a non-altruistic authority would strictly prefer to coerce for at least one signal when
\[
q < \bar q(\hat p, \bar b, \overline{b^2}) = \max_s \left[ (\hat p - p(s))^2 + 2\bar b (\hat p - p(s)) + \overline{b^2} \right]. \tag{B.3}
\]
Clearly, $\bar q(\hat p, \bar b, \overline{b^2}) > 0$ for every $\hat p$. Because $\bar q(\hat p, \bar b, \overline{b^2})$ is a continuous function of $\hat p$, there exists $0 < \underline q(\bar b, \overline{b^2}) = \min_{\hat p \in [p(0), p(1)]} \bar q(\hat p, \bar b, \overline{b^2})$. The inequality (B.3) is strict for any $q < \underline q(\bar b, \overline{b^2})$ and $\hat p \in [p(0), p(1)]$; hence, the authority still prefers to coerce whenever the level of altruism is below a threshold $\bar\varphi(q, \hat p, \bar b, \overline{b^2}) > 0$.^{32} Because $\bar\varphi(q, \hat p, \bar b, \overline{b^2})$ is continuous in $\hat p \in [p(0), p(1)]$, there exists $0 < \underline\varphi(q, \bar b, \overline{b^2}) = \min_{\hat p \in [p(0), p(1)]} \bar\varphi(q, \hat p, \bar b, \overline{b^2})$. As a result, for $q < \underline q(\bar b, \overline{b^2})$ and $\varphi < \underline\varphi(q, \bar b, \overline{b^2})$, there exists no equilibrium with non-zero communication after both signals and only one induced posterior belief.

Next, if two different posterior beliefs $\hat p_0 < \hat p_1$ are induced in equilibrium, then the authority strictly prefers inducing one action over the other for at least one signal, meaning that $\hat p_s = p(s)$ for some $s \in \{0, 1\}$. Clearly, such an equilibrium cannot exist for $q$ lower than $\overline{b^2}$ and sufficiently low $\varphi$.

Second, suppose that the authority communicates with a non-zero probability for one signal and coerces with certainty for the other signal. For example, assume that the authority sometimes communicates after $s = 1$ and always coerces after $s = 0$. In this case, the mere fact that the authority didn't impose an action reveals to the population that the signal is $s = 1$. As a result, the induced population posterior $\hat p$ necessarily equals $p(1)$. Such an equilibrium fails to exist for $q$ lower than $\overline{b^2}$ and sufficiently low $\varphi$. The case when the authority sometimes communicates the low signal $s = 0$ and always coerces when $s = 1$ is analogous. It follows that for every $q < \overline{b^2}$, communicating after only one signal cannot be an equilibrium if $\varphi$ is sufficiently low.

Combining the results of the two cases, when the coercion cost $q$ and altruism level $\varphi$ are below the respective bounds $q_{CC}(\bar b, \overline{b^2})$ and $\varphi_{CC}(q, \bar b, \overline{b^2})$, the unique equilibrium is the one where the authority always coerces.

^{32} Assuming that the maximum in (B.3) is achieved at $\hat s$, the threshold $\bar\varphi(q, \hat p, \bar b, \overline{b^2})$ is determined as the positive root closest to 0 of $\frac{\varphi^2}{1+\varphi} \bar b^2 - \varphi \overline{b^2} - q = -(1+\varphi)(\hat p - p(\hat s))^2 - 2\bar b (\hat p - p(\hat s)) - \overline{b^2}$.
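Several comparisons in this appendix, including the comparison of a FRE with uninformative advising in the next step of the proof, use the martingale property of Bayesian posteriors: averaging $p(s)$ over the signal distribution returns the prior $\pi$, so linear terms of the form $\sum_s \Pr(s)\, 2\bar b (\pi - p(s))$ vanish. A quick numerical check (our own sketch; illustrative values only):

```python
# Posterior after signal s with precision gamma, and the martingale identity
#   Pr(s=1)*p(1) + Pr(s=0)*p(0) = pi.
def posterior(pi, gamma, s):
    like1 = gamma if s == 1 else 1 - gamma      # Pr(s | theta = 1)
    like0 = 1 - gamma if s == 1 else gamma      # Pr(s | theta = 0)
    return pi * like1 / (pi * like1 + (1 - pi) * like0)

pi, gamma = 0.4, 0.8
pr1 = pi * gamma + (1 - pi) * (1 - gamma)       # Pr(s = 1)
pr0 = 1 - pr1

mean_posterior = pr1 * posterior(pi, gamma, 1) + pr0 * posterior(pi, gamma, 0)
assert abs(mean_posterior - pi) < 1e-12
```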

Denoting $\bar q = q_{CC}(\bar b, \overline{b^2})$, Lemma 1 and Lemma 2 imply the preference disagreement result of Proposition 2. Evidently, $\varphi_{CC}(q, \bar b, \overline{b^2})$, $\varphi_C(q, \bar b, \overline{b^2})$, and $\varphi_{TT}(q, \bar b, \overline{b^2})$ are decreasing in $q$.

Clearly, a FRE is preferred by the individuals to any other (pure or mixed strategies) PBE. Now we show that a FRE is preferred by the authority to any other pure strategies PBE when $\varphi$ exceeds $\varphi_{TT}(q, \bar b, \overline{b^2})$. First, the FRE is always preferred to purely uninformative advising, where the same message is sent after both signals. Indeed, the FRE generates a greater expected utility to the advisor whenever
\[
(\pi\gamma + (1-\pi)(1-\gamma)) \left[ (1+\varphi)(\pi - p(1))^2 + 2\bar b (\pi - p(1)) \right]
+ (\pi(1-\gamma) + (1-\pi)\gamma) \left[ (1+\varphi)(\pi - p(0))^2 + 2\bar b (\pi - p(0)) \right] \geq 0.
\]
Because the posteriors average back to the prior, the linear terms cancel, so this inequality can be rewritten as
\[
(1+\varphi) \left[ (\pi\gamma + (1-\pi)(1-\gamma))(\pi - p(1))^2 + (\pi(1-\gamma) + (1-\pi)\gamma)(\pi - p(0))^2 \right] \geq 0,
\]
which is satisfied for all $\varphi$. Second, the FRE is preferred to equilibria where the authority coerces after some or both signals. Indeed, as the previous analysis illustrates, after any signal $s$ the authority prefers truth-telling over coercion for $\varphi \geq \varphi_{TT}(q, \bar b, \overline{b^2})$.

Opinion disagreement. The proof proceeds in two steps; each step is formulated as a corresponding lemma.

Lemma 3. For any opinion distribution $g \notin G$, there exists a threshold $\varphi_{TT}(q, g)$, s.t. a FRE exists if and only if $\varphi \leq \varphi_{TT}(q, g)$.

Proof. Consider some opinion distribution $g(\pi) \notin G$. That is, truthful reporting to the population described by $g(\pi)$ is incentive compatible when $c = 0$, given that the individuals believe the reported message. Truth-telling is preferred to coercion after signal $s$ if
\[
-\varphi \int_0^1 \left[ p_A (p_i - 1)^2 + (1 - p_A)(p_i)^2 \right] g(\pi_i)\, d\pi_i
\geq -q - \varphi \int_0^1 \left[ p_A (p_A - 1)^2 + (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i,
\]
where $p_A = p_A(s)$ and $p_i = p_i(s) = a_i(s)$ are the authority's and individual $i$'s posterior beliefs that $\theta = 1$ after the signal $s$. This condition can be rewritten as
\[
\varphi \int_{\pi_i \neq 0.5} \left[ p_A (p_i - 1)^2 + (1 - p_A)(p_i)^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \leq q.
\]

The integral on the LHS is strictly positive because $p_A (p_i - 1)^2 + (1 - p_A)(p_i)^2 > p_A (p_A - 1)^2 + (1 - p_A)(p_A)^2$ for any $p_i \neq p_A$ (i.e., $\pi_i \neq 0.5$) and $\int_{\pi_i \neq 0.5} g(\pi_i)\, d\pi_i > 0$. Thus, a FRE exists if and only if $\varphi$ is below a threshold $\varphi_{TT}(q, g)$. Note that $\varphi_{TT}(q, g) > 0$ when $q > 0$ and $\varphi_{TT}(0, g) = 0$.

Lemma 4. For any opinion distribution $g(\pi)$ and $q \geq 0$, there exist thresholds $\varphi_C(q, g) \leq \varphi_{CC}(q, g)$. For $\varphi > \varphi_{CC}(q, g)$, there exists a unique equilibrium in which the authority coerces with probability one after each signal $s$. For $\varphi > \varphi_C(q, g)$, every equilibrium involves the authority coercing with strictly positive probability.

Proof. First, we prove that all equilibria with communication after both signal realizations break down for a sufficiently high level of altruism $\varphi > \varphi_C(q, g)$. To show this, we distinguish between (i) communication equilibria where each individual $i$ takes only one action $\hat a_i$, and (ii) communication equilibria where each individual $i$ takes two different actions $\hat a_{i,0} < \hat a_{i,1}$ with positive probabilities.

In case (i), communication is necessarily uninformative, and hence, each individual $i$ chooses an action equal to his prior belief, $\hat a_i = \pi_i$. Coercion after some signal $s$ is preferred to such communication whenever
\[
\varphi \int_{\pi_i \neq p_A} \left[ p_A (\pi_i - 1)^2 + (1 - p_A)(\pi_i)^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \geq q,
\]
where $p_A = p_A(s)$. The integral on the LHS is strictly positive, because $p_A (\pi_i - 1)^2 + (1 - p_A)(\pi_i)^2 > p_A (p_A - 1)^2 + (1 - p_A)(p_A)^2$ for any $\pi_i \neq p_A$ and $\int_{\pi_i \neq p_A} g(\pi_i)\, d\pi_i > 0$. Thus, coercion is strictly preferred for sufficiently large $\varphi$, so such a communication equilibrium breaks down.

In case (ii), the communicating authority induces the individuals to take different actions $\hat a_{i,0} < \hat a_{i,1}$. Clearly, rational behavior on the part of each individual $i$ ensures that $\hat a_{i,0}, \hat a_{i,1} \in [p_i(0), p_i(1)]$. Because $\int_{\pi \neq 0.5} g(\pi)\, d\pi > 0$ and $\pi_A = 0.5$ is the median of $g(\pi)$, w.l.o.g. we assume that $\int_{\pi > 0.5 + \varepsilon} g(\pi)\, d\pi > 0$ for some $\varepsilon > 0$.
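The positivity claims above follow from the fact that $x \mapsto p_A(x-1)^2 + (1-p_A)x^2$ is a strictly convex parabola minimized at $x = p_A$; the excess expected loss of acting on belief $x$ is exactly $(x - p_A)^2$. A one-line numerical confirmation (our own sketch):

```python
# Excess expected loss of acting on belief x when the authority's posterior
# is p_A:  f(x) - f(p_A) = (x - p_A)^2  >  0  for any x != p_A.
def f(x, p_A):
    return p_A * (x - 1) ** 2 + (1 - p_A) * x ** 2

for p_A in (0.2, 0.5, 0.9):
    for x in (0.0, 0.3, 0.7, 1.0):
        assert abs(f(x, p_A) - f(p_A, p_A) - (x - p_A) ** 2) < 1e-12
```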
Coercion after signal $s = 0$ is preferred to such communication whenever
\[
\max_{m \in \{0, 1\}} \varphi \int_0^1 \left[ p_A (\hat a_{i,m} - 1)^2 + (1 - p_A)(\hat a_{i,m})^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \geq q,
\]
where $p_A = p_A(0)$. Now we show that the integral on the LHS has a lower

bound $\delta > 0$ that is independent of the particular actions $\hat a_{i,0}$ and $\hat a_{i,1}$. Indeed,
\begin{align*}
\max_{m \in \{0, 1\}} \int_0^1 & \left[ p_A (\hat a_{i,m} - 1)^2 + (1 - p_A)(\hat a_{i,m})^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \\
\geq \int_{\pi_i > 0.5 + \varepsilon} & \left[ p_A (\hat a_{i,0} - 1)^2 + (1 - p_A)(\hat a_{i,0})^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \\
\geq \int_{\pi_i > 0.5 + \varepsilon} & \left[ p_A (p_{0.5+\varepsilon}(0) - 1)^2 + (1 - p_A)(p_{0.5+\varepsilon}(0))^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i = \delta > 0,
\end{align*}
where $p_{0.5+\varepsilon}(0)$ is the posterior of the individual with the prior $0.5 + \varepsilon$ after getting the signal $s = 0$.^{33} Here the first inequality holds because the expression inside the integral is always positive and for every $i$ the action $\hat a_{i,0}$ is closer to the advisor's preferred action $p_A$ than $\hat a_{i,1}$ is. The second inequality holds because $p_A(0) < p_{0.5+\varepsilon}(0) < \hat a_{i,0}$ for any communication equilibrium of this type. As a result, the authority prefers coercion over communication when her altruism level exceeds some threshold (which is independent of the particular actions $\hat a_{i,0}$ and $\hat a_{i,1}$).

Combining the results of the two cases, we obtain that for a sufficiently high level of altruism, $\varphi > \varphi_C(q, g)$ (where the subscript $C$ denotes coercion), the authority cannot communicate with probability one in equilibrium.

Second, we argue that for a sufficiently large $\varphi$ no communication is sustainable in equilibrium. In order to do this, we consider two manners in which information transmission can arise: (i) the authority communicates with positive probability after both signal realizations, and (ii) the authority communicates with non-zero probability after one signal realization and coerces with certainty after the other.

In case (i), we consider two possibilities: either each individual $i$ takes only one action $\hat a_i$, or each individual $i$ takes two different actions $\hat a_{i,0} < \hat a_{i,1}$ with positive probabilities. In the first case the authority prefers to coerce after the signal $s = 0$ if
\[
\varphi \int_0^1 \left[ p_A (\hat a_i - 1)^2 + (1 - p_A)(\hat a_i)^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \geq q,
\]
where $p_A = p_A(0)$, and we continue to assume that $\int_{\pi > 0.5 + \varepsilon} g(\pi)\, d\pi > 0$ for some $\varepsilon > 0$. As was shown above, the integral on the LHS exceeds some $\delta > 0$ that

^{33} Note that the individual with the prior $0.5 + \varepsilon$ is hypothetical, i.e., he might not be a part of the population.

is independent of the particular actions $\hat a_i$. Hence, for $\varphi$ exceeding some threshold (which is independent of the particular actions $\hat a_i$), there exists no equilibrium with non-zero communication for both signals where each individual $i$ takes only one action. Second, if every individual $i$ takes two different actions $\hat a_{i,0} < \hat a_{i,1}$, then, by an analogous analysis, the authority will prefer to coerce for sure when $\varphi$ is greater than some threshold (which is independent of the particular actions $\hat a_{i,0}$ and $\hat a_{i,1}$).

In case (ii), suppose, for example, that the authority coerces for sure after getting the signal $s = 0$. In this equilibrium, the authority's decision to communicate perfectly reveals that she observed the signal $s = 1$, so the individuals implement $a_i(1) = p_i(1)$. Clearly, for a high enough level of altruism,
\[
\varphi \int_{\pi_i \neq 0.5} \left[ p_A (p_i - 1)^2 + (1 - p_A)(p_i)^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i > q,
\]
where $p_A = p_A(1)$ and $p_i = p_i(1)$, and the authority would choose to improve upon this action; consequently, such an equilibrium breaks down.

Combining the results of (i) and (ii), for sufficiently high $\varphi > \varphi_{CC}(q, g)$ the unique equilibrium is the one where the authority always coerces the individual. Note that $\varphi_{CC}(q, g) \geq \varphi_C(q, g) \geq \varphi_{TT}(q, g) > 0$ when $q > 0$, and $\varphi_{CC}(0, g) = \varphi_C(0, g) = 0$.

Lemma 3 and Lemma 4 imply the opinion disagreement result of Proposition 2. Evidently, $\varphi_{CC}(q, g)$, $\varphi_C(q, g)$, and $\varphi_{TT}(q, g)$ are increasing in $q$. QED.

Proof of Proposition 3. The case of preference disagreement is considered in the main text; here we present a proof for the opinion disagreement case. Consider some distribution $g(\pi) \in G$ and some cost of lying $c > 0$. By Proposition 1, the advisor can be credible (to the entire population) under public communication iff $\varphi \leq \bar\varphi(c, g)$. At the same time, under targeted communication, the advisor can report truthfully to the part of the population with sufficiently close opinions. Thus, the advisor is (weakly) more credible with public messages when $\varphi \leq \bar\varphi(c, g)$, and is (weakly) more credible with private messages when $\varphi > \bar\varphi(c, g)$.

Assume that $\bar\varphi(c, g) > 0$ and that every individual believes the message he gets (either public or private). If $\varphi = \bar\varphi(c, g)$, then the advisor can be credible (to the entire population) in the public communication case, but cannot be credible to all individuals with private messages. Indeed, under public communication, the advisor is indifferent between reporting some signal $s$

$\in \{0, 1\}$ truthfully and lying (the advisor weakly prefers to report the other signal truthfully). That is, the benefit from lying about $s$ to extreme individuals of measure $\eta$ equals the loss from lying to the other $1 - \eta$ less extreme individuals plus the cost $c$. This implies that there is a set of individuals of measure $\tilde\eta$, $0 < \tilde\eta \leq \eta$, such that the benefit of lying to these individuals exceeds $c$. Hence, under targeted communication, the advisor will strictly prefer to misreport the signal $s$ to the $\tilde\eta$ extreme individuals.

Now consider some distribution $g(\pi) \notin G$. Then for any cost of lying $c \geq 0$ and altruism level $\varphi \geq 0$ public communication is credible. At the same time, the advisor can be non-credible to some individuals with private messages. Thus, for all $\varphi \geq 0$ the advisor is (weakly) more credible under public communication.

Clearly, all individuals prefer a FRE to any other (pure or mixed strategies) PBE. Now we show that in private communication with individual $i$, a FRE is also preferred by the advisor to any other pure strategies PBE for any $\pi_i \in (0, 1)$. Hence, a FRE is preferred under public communication as well; more credible communication is always beneficial to an advisor with prior $\pi_A = 0.5$ (relative to less credible communication).

Assume that $\pi_i > 0.5$ (the case of $\pi_i < 0.5$ is similar and the case of $\pi_i = 0.5$ is straightforward). In a FRE, individual $i$ takes actions $p_0 = p_i(0)$ and $p_1 = p_i(1)$ after messages 0 and 1, respectively. In the uninformative equilibrium the advisor sends the same message independently of the signal, and $i$ takes the action $\pi = \pi_i$. The FRE generates a greater expected utility to the advisor whenever
\[
\frac{1}{2} \left[ -(1-\gamma)(p_0 - 1)^2 - \gamma p_0^2 + (1-\gamma)(\pi - 1)^2 + \gamma \pi^2 \right]
> \frac{1}{2} \left[ \gamma (p_1 - 1)^2 + (1-\gamma) p_1^2 - \gamma (\pi - 1)^2 - (1-\gamma) \pi^2 \right].
\]
This inequality can be rewritten as
\[
(\pi - p_0)(\pi + p_0 - 2(1-\gamma)) > (p_1 - \pi)(\pi + p_1 - 2\gamma).
\]
First, note that $\pi - p_0 > p_1 - \pi > 0$, i.e., the individual with prior $\pi > 0.5$ is more responsive to a low signal $s = 0$ than to a high signal $s = 1$. Second, it is easily verified that $p_1 - p_0 < 2\gamma - 1$ (the individual with prior $\pi > 0.5$ is on average less responsive than the individual with prior 0.5). Hence, $\pi + p_0 - 2(1-\gamma) > \pi + p_1 - 2\gamma$, which together with $\pi + p_0 - 2(1-\gamma) > 0$ implies that the FRE in private communication with $i$ is preferred to the uninformative equilibrium. QED.
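The two responsiveness claims used in the final step can be confirmed numerically; the sketch below is our own illustration with arbitrary parameter values.

```python
# For a prior pi > 0.5: (i) pi - p0 > p1 - pi > 0 (the individual responds
# more to the low signal), and (ii) p1 - p0 < 2*gamma - 1, the responsiveness
# of the pi = 0.5 individual.
def post(pi, gamma, s):
    l1 = gamma if s == 1 else 1 - gamma     # Pr(s | theta = 1)
    l0 = 1 - gamma if s == 1 else gamma     # Pr(s | theta = 0)
    return pi * l1 / (pi * l1 + (1 - pi) * l0)

gamma = 0.75
for pi in (0.55, 0.7, 0.9):
    p0, p1 = post(pi, gamma, 0), post(pi, gamma, 1)
    assert pi - p0 > p1 - pi > 0            # claim (i)
    assert p1 - p0 < 2 * gamma - 1          # claim (ii)
```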

C Appendix: Dominance solvable setting

C.1 The altruistic advisor

Here we consider the following modification of the baseline model presented in Section 3. In the first stage of the game, a signal about the state, $s$, with precision $\Pr(s = \theta \mid \theta) \equiv \gamma \in (0.5, 1)$, is realized. This signal is privately observed by the advisor with probability $\alpha \in (0, 1)$; the advisor remains uninformed with probability $1 - \alpha$. Whether the advisor observes the signal is not observable by the population. If the advisor obtains the signal, she can either pass the signal $s$ on to all individuals, or hide it at a cost $c \geq 0$, in which case the individuals learn nothing about the state. We denote this case by $\emptyset$. That is, the advisor can incur a costly effort to hide information from the population. Importantly, when the individual observes $\emptyset$, he cannot tell whether the advisor (i) did not receive a signal or (ii) obtained the signal but chose to hide it. Upon learning the signal $s$ or observing no information, each individual chooses an action $a \in \mathbb{R}$. The assumptions about the individuals' preferences, opinions, and payoffs are the same as in the baseline model, and the advisor's utility is given by
\[
U_A(a, \theta) = -\int_0^1 (a_i - \theta)^2\, di - c\, I_{\{A \text{ hides info}\}} - \varphi \int_0^1 (a_i - \theta - b_i)^2\, di.
\]
Consider the action that individual $i$ optimally chooses given the available information. In the case when individual $i$ gets to know the signal, his action is $a_i(1) = p_i(1) + b_i$ if $s = 1$ and $a_i(0) = p_i(0) + b_i$ if $s = 0$. When no information is revealed to $i$, he rationally chooses $a_i(\emptyset) = p_i(\emptyset) + b_i$, where $p_i(\emptyset) \in [\underline p_{i,0}, \bar p_{i,1}] \subset (p_i(0), p_i(1))$. The upper bound $\bar p_{i,1}$ corresponds to $i$'s posterior given that the advisor passes on the signal $s = 0$ and incurs effort to hide $s = 1$. Similarly, the lower bound $\underline p_{i,0}$ is $i$'s posterior when the advisor reveals $s = 1$ and incurs effort to hide $s = 0$. The expressions for $\bar p_{i,1}$ and $\underline p_{i,0}$ are given by
\[
\bar p_{i,1} = \frac{\pi_i (1 - \alpha + \alpha\gamma)}{\pi_i (1 - \alpha + \alpha\gamma) + (1 - \pi_i)(1 - \alpha + \alpha(1-\gamma))},
\qquad
\underline p_{i,0} = \frac{\pi_i (1 - \alpha + \alpha(1-\gamma))}{\pi_i (1 - \alpha + \alpha(1-\gamma)) + (1 - \pi_i)(1 - \alpha + \alpha\gamma)}.
\]
Preference disagreement. Under preference disagreement, if the level of altruism is sufficiently large, the advisor passes the signal on to the population whenever she has one.
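These two bounds are standard Bayesian posteriors that treat "no signal" as a mixture of an uninformed advisor and an informed advisor hiding one realization; they should bracket the prior strictly inside $(p_i(0), p_i(1))$. A small numerical sketch (our own, with arbitrary parameter values):

```python
# Posterior bounds when observing "no signal" (empty set): the event may mean
# an uninformed advisor (prob 1 - alpha) or an informed advisor who hid one
# signal realization.
def bayes(pi, like1, like0):
    return pi * like1 / (pi * like1 + (1 - pi) * like0)

pi, alpha, gamma = 0.6, 0.5, 0.8

p0 = bayes(pi, 1 - gamma, gamma)                     # posterior after s = 0
p1 = bayes(pi, gamma, 1 - gamma)                     # posterior after s = 1
p_high = bayes(pi, 1 - alpha + alpha * gamma,        # advisor hides s = 1
                   1 - alpha + alpha * (1 - gamma))
p_low = bayes(pi, 1 - alpha + alpha * (1 - gamma),   # advisor hides s = 0
                  1 - alpha + alpha * gamma)

assert p0 < p_low < pi < p_high < p1
```

As $\alpha \to 1$ the bounds converge to the full-information posteriors $p_i(0)$ and $p_i(1)$, which is the intuition for the strict inclusion.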

Lemma 5. For any cost $c \geq 0$ and preference distribution $f(b)$ with mean $\bar b$ there exists a $\bar\varphi(c, \bar b)$, s.t. for every $\varphi > \bar\varphi(c, \bar b)$ there is a unique rationalizable outcome in which the advisor reveals the signal if she has one.

Proof. Assume that the population's belief after observing no signal is given by some $p(\emptyset) \in [\underline p_0, \bar p_1]$. If the informed advisor does not hide the signal $s = 1$, she gets an expected payoff of
\[
E[U_A(a(1), \theta) \mid s = 1] = -p(1) \left[ (p(1) + \bar b - 1)^2 + \varphi (p(1) - 1)^2 \right]
- (1 - p(1)) \left[ (p(1) + \bar b)^2 + \varphi p(1)^2 \right] + \bar b^2 - \overline{b^2}.
\]
If the advisor exerts an effort to hide $s = 1$, her expected payoff is given by
\[
E[U_A(a(\emptyset), \theta) \mid s = 1] = -p(1) \left[ (p(\emptyset) + \bar b - 1)^2 + \varphi (p(\emptyset) - 1)^2 \right]
- (1 - p(1)) \left[ (p(\emptyset) + \bar b)^2 + \varphi p(\emptyset)^2 \right] + \bar b^2 - \overline{b^2} - c.
\]
The advisor prefers to pass the signal $s = 1$ on to the population whenever
\[
-p(1) \left[ (p(1) + \bar b - 1)^2 + \varphi (p(1) - 1)^2 \right] - (1 - p(1)) \left[ (p(1) + \bar b)^2 + \varphi p(1)^2 \right]
> -p(1) \left[ (p(\emptyset) + \bar b - 1)^2 + \varphi (p(\emptyset) - 1)^2 \right] - (1 - p(1)) \left[ (p(\emptyset) + \bar b)^2 + \varphi p(\emptyset)^2 \right] - c.
\]
This condition is equivalent to the one arising when the advisor communicates with a single individual with preference bias $\bar b$. Because $p(\emptyset) < p(1)$, it can be rewritten as
\[
\varphi > -1 + \frac{2\bar b}{p(1) - p(\emptyset)} - \frac{c}{(p(1) - p(\emptyset))^2}.
\]
Similarly, using $p(\emptyset) > p(0)$, the advisor's incentive compatibility condition to pass on the signal $s = 0$ to the population can be written as
\[
\varphi > -1 - \frac{2\bar b}{p(\emptyset) - p(0)} - \frac{c}{(p(0) - p(\emptyset))^2}.
\]
Because $p(\emptyset) \in [\underline p_0, \bar p_1] \subset (p(0), p(1))$, for sufficiently large $\varphi > \bar\varphi(c, \bar b)$ it is a dominant strategy for the informed advisor to reveal the signal independently of $p(\emptyset)$.

Opinion disagreement. Under opinion disagreement, there exists a non-degenerate set of distributions for which the advisor hides some signal when the level of altruism is sufficiently large.
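The rewritten incentive constraint for $s = 1$ can be verified against a direct comparison of the two expected payoffs. The sketch below is our own (arbitrary parameter values); it drops the terms common to both options, since only posterior-dependent terms and the hiding cost differ.

```python
# Revealing s = 1 beats hiding it iff (1+phi)*D^2 - 2*b_bar*D + c > 0, where
# D = p(1) - p(empty) > 0, i.e. iff  phi > -1 + 2*b_bar/D - c/D**2.
def net_gain_from_revealing(phi, b_bar, c, p1, p_empty):
    def V(p_hat):  # payoff terms that depend on the induced posterior p_hat
        return (-p1 * ((p_hat + b_bar - 1) ** 2 + phi * (p_hat - 1) ** 2)
                - (1 - p1) * ((p_hat + b_bar) ** 2 + phi * p_hat ** 2))
    return V(p1) - (V(p_empty) - c)

b_bar, c, p1, p_empty = 0.3, 0.004, 0.8, 0.6
D = p1 - p_empty
phi_star = -1 + 2 * b_bar / D - c / D ** 2      # threshold (= 1.9 here)

# the sign of the net gain flips exactly at the threshold
assert net_gain_from_revealing(phi_star + 0.01, b_bar, c, p1, p_empty) > 0
assert net_gain_from_revealing(phi_star - 0.01, b_bar, c, p1, p_empty) < 0
```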

Lemma 6. There exists a non-degenerate set of opinion distributions $G$ such that hiding some signal is a strictly dominant strategy for the advisor when $c = 0$. For any $g(\pi) \in G$ and $c > 0$, there exists a threshold level of altruism $\bar\varphi(c, g)$ such that the unique rationalizable outcome involves the advisor hiding some signal for sure when $\varphi \geq \bar\varphi(c, g)$.

Proof. First, we show that $G$ is non-empty and uncountable. Consider a simple opinion distribution with two types of individuals, $t_1$ and $t_2$, with priors $\pi_1 < \pi_A = 0.5 < \pi_2$. The types are equally prevalent, i.e., $g(\pi_1) = g(\pi_2) = 0.5$. Assume that type $t_1$ is extreme and is not responsive to the signal $s$: $\pi_1 = 0$. Assume that $\pi_2$ is sufficiently high to ensure that any $p(\emptyset) \in [\underline p_{2,0}, \bar p_{2,1}]$ is closer to $p_A(1) = \gamma$ than $p_2(1)$ is. This implies that in the case of $c = 0$ it is a strictly dominant strategy for the advisor to always hide the signal $s = 1$. Clearly, $G$ contains all such (and, by continuity, many other) distributions; hence, it is uncountable.

Second, we note that for any $g(\pi) \in G$ and $c > 0$, the advisor's strictly dominant strategy is to hide some signal when $\varphi$ is sufficiently large. Indeed, for any given cost of lying, increasing the level of altruism makes the net benefit from hiding the signal larger relative to the cost $c$.

C.2 The altruistic authority

After observing the signal, $s$, or after observing no signal, $\emptyset$, the authority (A, she) chooses between sending a public message $m$, after which the individuals choose actions, and mandating an action for all individuals, $a_A \in \mathbb{R}$. As before, the cost of coercion is $q$, while the lying cost is $c = 0$.

Preference disagreement. Under preference disagreement, for sufficiently high levels of altruism, the authority never coerces or hides information.

Lemma 7. For any preference distribution $f(b)$ with $\overline{b^2} \neq \bar b^2$ and any $q \geq 0$, there exists a threshold $\bar\varphi(q, \bar b^2, \overline{b^2})$, such that for any $\varphi > \bar\varphi(q, \bar b^2, \overline{b^2})$ there is a unique rationalizable outcome where the authority never coerces the individuals and reveals the signal if she has one.

Proof. Assume that the authority obtained the signal. By Lemma 5, for sufficiently large $\varphi > \bar\varphi(c = 0, \bar b)$ the authority always prefers revealing the signal to hiding it, independently of $p(\emptyset) \in [\underline p_0, \bar p_1]$. Next, comparing signal revelation

to coercing and imposing $a_A(s) = p(s) + \frac{\varphi}{1+\varphi} \bar b$, signal revelation is preferred when
\[
-\varphi^2 \left( \overline{b^2} - \bar b^2 \right) - \varphi q + \overline{b^2} - q \leq 0. \tag{C.1}
\]
Thus, for sufficiently large $\varphi > \bar\varphi(q, \bar b^2, \overline{b^2}) \geq \bar\varphi(c = 0, \bar b)$ it is a dominant strategy of the informed authority to pass on the signal and never coerce.

Now consider an uninformed authority and assume that $\varphi > \bar\varphi(q, \bar b^2, \overline{b^2})$. If the authority does nothing, each individual $i$ rationally chooses the action $\pi + b_i$ (the individuals know that the authority never hides information, because $\varphi > \bar\varphi(q, \bar b^2, \overline{b^2})$). If the authority coerces, she chooses $a_A(\emptyset) = \pi + \frac{\varphi}{1+\varphi} \bar b$. Thus, the authority and the individuals hold the same beliefs about the state, in which case the authority prefers not to coerce whenever (C.1) is satisfied. As a result, given rational behavior on the part of the population, whenever $\varphi > \bar\varphi(q, \bar b^2, \overline{b^2})$ it is a strictly dominant strategy for the authority not to coerce the individuals, and to instead pass the signal on whenever she is informed.

Opinion disagreement. Under opinion disagreement, for sufficiently high levels of altruism, the authority coerces the population.

Lemma 8. For any opinion distribution $g(\pi)$ and any $q \geq 0$, there exists a threshold $\bar\varphi(q, g)$, such that for any $\varphi > \bar\varphi(q, g)$ there is a unique rationalizable outcome where the authority coerces the individuals with probability one.

Proof. W.l.o.g. we assume that $\int_{\pi > 0.5 + \varepsilon} g(\pi)\, d\pi > 0$. Consider the authority who obtained the signal $s = 0$. If the authority reveals the signal to the population, every individual $i$ optimally chooses an action $p_i(0) \neq p_A(0)$ if $\pi_i \neq 0.5$. Thus, for sufficiently large $\varphi > \bar\varphi_0(q, g)$ the benefit from coercing exceeds the cost and the authority finds it strictly optimal to impose her preferred action $p_A(s)$. If the authority hides the signal $s = 0$ and does not coerce, then each individual $i$ picks an action $p_i(\emptyset) \in [p_i(0), p_i(1)]$. The benefit from coercing after signal $s = 0$ is
\begin{align*}
\varphi \int_0^1 & \left[ p_A (p_i(\emptyset) - 1)^2 + (1 - p_A)(p_i(\emptyset))^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \\
\geq \varphi \int_{\pi_i > 0.5 + \varepsilon} & \left[ p_A (p_i(\emptyset) - 1)^2 + (1 - p_A)(p_i(\emptyset))^2 - p_A (p_A - 1)^2 - (1 - p_A)(p_A)^2 \right] g(\pi_i)\, d\pi_i \geq \delta_0 > 0,
\end{align*}
where $p_A = p_A(0)$. Here the first inequality holds because the expression inside the integral is always positive, and the second inequality holds because

for every $i$ the action $p_i(\emptyset) \geq p_{0.5+\varepsilon}(\emptyset) > p_A(0)$ when $\pi_i > 0.5 + \varepsilon$. Thus, for $\varphi > \bar\varphi_1(q, g) \geq \bar\varphi_0(q, g)$ the authority strictly prefers to coerce the population after observing the signal $s = 0$, independently of $p_i(\emptyset)$. This, in turn, implies that $p_i(\emptyset) \in [\pi_i, p_i(1)]$.

Assume that $\varphi > \bar\varphi_1(q, g)$. If an uninformed authority coerces the population, she derives an expected benefit
\begin{align*}
\varphi \int_0^1 & \left[ \pi_A (p_i(\emptyset) - 1)^2 + (1 - \pi_A)(p_i(\emptyset))^2 - \pi_A (\pi_A - 1)^2 - (1 - \pi_A)(\pi_A)^2 \right] g(\pi_i)\, d\pi_i \\
\geq \varphi \int_{\pi_i > 0.5} & \left[ \pi_A (p_i(\emptyset) - 1)^2 + (1 - \pi_A)(p_i(\emptyset))^2 - \pi_A (\pi_A - 1)^2 - (1 - \pi_A)(\pi_A)^2 \right] g(\pi_i)\, d\pi_i \geq \delta_1 > 0.
\end{align*}
Here the first inequality holds because the expression inside the integral is always positive, and the second inequality holds because for every $i$ the action $p_i(\emptyset) \geq \pi_i > \pi_A$ when $\pi_i > 0.5$. Thus, the authority strictly prefers to coerce after observing $\emptyset$ if $\varphi > \bar\varphi_2(q, g) \geq \bar\varphi_1(q, g)$.

Assume that $\varphi > \bar\varphi_2(q, g)$ and that the authority observed the signal $s = 1$. By the same logic as above, she strictly prefers coercion to revealing the signal when $\varphi > \max\{\bar\varphi_2(q, g), \bar\varphi_3(q, g)\}$. If she hides the signal and does not coerce, then each individual $i$ optimally chooses $p_i(\emptyset) = p_i(1)$ (because if the individuals get to choose the action, it necessarily means that $s = 1$, provided that $\varphi > \bar\varphi_2(q, g)$). This is strictly dominated by coercion for $\varphi > \max\{\bar\varphi_2(q, g), \bar\varphi_3(q, g)\}$. As a result, when $\varphi > \bar\varphi(q, g)$, the unique rationalizable outcome is the one in which the authority always coerces the individuals.

D Appendix: Setting with one individual

D.1 Model

We first develop a framework where an individual obtains information from an altruistic advisor before making a decision. We then consider an alternative framework where the advisor in fact is an authority, who can communicate with the individual before he makes his decision, but who can also choose to simply coerce the individual into taking a certain action. In both of these frameworks, we analyze the impact of altruism under two different assumptions on the

nature of disagreement between the two parties: conflicting preferences and conflicting opinions.

D.1.1 The altruistic advisor

There are two players, an individual (I, he) and an advisor (A, she). The individual must take an action, $a \in \mathbb{R}$. His payoff from the action depends on an unknown state of the world, $\theta \in \{0, 1\}$. While he is unable to obtain information about the state by himself, he can rely on the privately informed advisor for such information. The players' prior beliefs about the state of the world are characterized by $\Pr_i(\theta = 1) = \pi_i \in (0, 1)$, for $i \in \{I, A\}$.^{34} Following Che and Kartik (2009), we say that the players have different opinions when $\pi_I \neq \pi_A$.

In the first stage of the game, the advisor privately observes a signal about the state, $s$, with precision $\Pr(s = \theta \mid \theta) \equiv \gamma \in (0.5, 1)$. Second, she sends a message $m \in \{0, 1\}$ to the individual. If the advisor lies, i.e., sends the message $m \neq s$, she incurs a cost $c$.^{35} Third, upon receiving the advisor's message, the individual chooses an action $a \in \mathbb{R}$. The players' material payoffs from this action are given by $u_I(a, \theta) = -(a - \theta - b(\theta))^2$ and $u_A(a, \theta) = -(a - \theta)^2 - c I_{\{m \neq s\}}$, where the bias $b(\theta) \geq 0$ captures their preference disagreement, and $I_{\{m \neq s\}}$ is an indicator variable taking the value one if and only if the advisor lies, and zero otherwise. In this standard model of communication, we allow the advisor to be altruistic, i.e., the players' utilities are given by
\begin{align*}
U_I(a, \theta) &= u_I(a, \theta) = -(a - \theta - b(\theta))^2, \\
U_A(a, \theta) &= u_A(a, \theta) + \varphi u_I(a, \theta) = -(a - \theta)^2 - c I_{\{m \neq s\}} - \varphi (a - \theta - b(\theta))^2,
\end{align*}
where $\varphi \geq 0$ captures the degree to which the advisor cares about the (material) well-being of the individual.^{36} The prior beliefs $\pi_I$ and $\pi_A$, the signal precision $\gamma$, the lying cost $c$, the preference alignment $b(\theta)$, and the degree of altruism $\varphi$ are common knowledge.

^{34} Given that the state is binary, all beliefs can be characterized by $\Pr(\theta = 1)$.
^{35} This formulation encompasses cheap talk ($c = 0$) and hard information ($c = \infty$). We maintain that $c \in (0, \infty)$ unless we explicitly consider one of these two cases.
^{36} We show in Section D.5 that all the main results remain valid if friendship is mutual. We choose unidirectional friendship as our baseline formulation not only for simplicity, but also because it better reflects a social planner who cares about a citizen.

16 Strategies and equilibrium. A pure strategy of the advisor specifies, for each signal s, the message m(s) thatshesends,m : {0, 1} {0, 1}. The individual s posterior beliefs conditional on message m are described by Pr I (θ = 1 m), where superscript I signifies that the individual forms his beliefs using his prior π I. A pure strategy of the individual specifies, for each message m, action a I (m) thathetakes,a I : {0, 1} R. We use a solution concept of Perfect Bayesian Equilibria (PBE), where the advisor maximizes her expected utility for each signal s given the individual s strategy a I (m); the individual maximizes his expected utility given his beliefs Pr I (θ =1 m) aftereachmessage m; and the beliefs Pr I (θ =1 m) satisfybayes rulewheneverpossible. D.1.2 The altruistic authority We now replace the advisor with an authority. The authority can choose to behave like an advisor that is, to provide the individual with information before he chooses a R himself but the authority also has the option to instead impose an action on the individual. Formally, after observing the signal, s, the authority chooses between sending a message m(s) totheindividual(inwhich case the game proceeds as in the altruistic advisor framework), and engaging in coercion, whereby the authority simply picks her desired action a R for the individual. 37 We assume that the authority must incur a cost q>0ifshecoercesthe individual. Depending on the application, this cost may reflect the instrumental cost of active intervention, or the authority s intrinsic aversion against removing the individual s liberty to choose. In this altruistic authority framework, we let c =0. Thisreflectsthefact that in many of the applications we consider, it is reasonable to assume that the lying cost is negligible relative to the cost of coercion. 38 This is merely a simplification; all insights remain valid for c>0. 
[37] Note that such coercion differs from delegation, whereby the individual decides that the authority should exercise the action choice. In contrast, under coercion, the authority herself decides when she should exercise the action choice on behalf of the individual. Moreover, because the authority observes the signal s before deciding whether to coerce, whereas the individual does not observe the signal s, the coercion choice is taken ex interim, whereas a delegation choice must be made ex ante. We discuss how these concepts are related in Section D.

[38] This is discussed in depth in Section D.3.

The parties' utilities are thus

given by

U_I(a, θ) = u_I = −(a − θ − b(θ))²,
U_A(a, θ) = u_A + ϕ·u_I = −(a − θ)² − q·1{coercion} − ϕ(a − θ − b(θ))²,

where 1{coercion} takes value one if and only if coercion occurs.

Strategies and equilibrium. A pure strategy of the authority specifies, for each signal s, whether she chooses to coerce or not coerce the individual, C(s) ∈ {Coerce, Not coerce}, what action a_A(s) ∈ ℝ she takes if she chooses to coerce, and what message m(s) ∈ {0, 1} she sends if she chooses not to coerce. The individual's posterior beliefs conditional on receiving message m are given by Pr_I(θ = 1 | m), and a pure strategy of the individual specifies an action a_I(m) ∈ ℝ for each message m. As before, we use the solution concept of PBE, in which the authority maximizes her expected utility for each signal given the individual's strategy; the individual maximizes his expected utility given his beliefs after each message; and the beliefs satisfy Bayes' rule whenever possible.

D.1.3 Nature of disagreement

We analyze this model under two different assumptions on the nature of disagreement between the individual and the advisor:

Conflicting preferences. Under conflicting preferences, the individual's preferences are biased relative to those of the advisor, i.e., b(θ) ≠ 0 for at least one value of θ; however, the parties have the same opinions, i.e., π_I = π_A ≡ π. In the main body of the paper we assume that preference biases are independent of the state: b(0) = b(1) = b.[39]

Conflicting opinions. Under conflicting opinions, the parties have fully aligned preferences, i.e., b(θ) ≡ 0; however, they have different opinions, i.e., π_I ≠ π_A.

[39] This merely simplifies the exposition and makes the underlying intuition more transparent; as we show in the Appendix, all our results hold in the more general case of conflicting preferences.
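The utility specification above translates directly into code. The following minimal sketch (helper names are ours) also encodes the two disagreement parametrizations; the bias value 0.3 is an arbitrary illustration, not a calibration from the paper:

```python
def u_individual(a, theta, b):
    """Realized utility U_I(a, theta) = -(a - theta - b(theta))^2,
    where b maps the state to the individual's bias."""
    return -(a - theta - b(theta)) ** 2

def u_authority(a, theta, b, phi, q, coerced):
    """Realized utility U_A = u_A + phi * u_I
    = -(a - theta)^2 - q * 1{coercion} - phi * (a - theta - b(theta))^2."""
    return -(a - theta) ** 2 - (q if coerced else 0.0) \
           + phi * u_individual(a, theta, b)

# Conflicting preferences: common prior, state-independent bias b(0) = b(1) = b
bias_pref = lambda theta: 0.3
# Conflicting opinions: b(theta) = 0 in every state; the priors differ instead
bias_op = lambda theta: 0.0

# With aligned preferences, matching the state is costless for both parties,
# so a coercing authority's realized loss is just the coercion cost q:
print(u_authority(a=1.0, theta=1, b=bias_op, phi=0.5, q=2.0, coerced=True))  # -2.0
```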

D.2 Analysis: the altruistic advisor

In this section, we study communication between the individual and his altruistic advisor. The stages of this game are summarized in Figure 6 below. In particular, we analyze how the possibility to sustain truthful communication is influenced by the advisor's regard for the individual. Because this crucially depends on the nature of disagreement between the individual and his advisor, we consider each case in turn in order to isolate the mechanisms.

Figure 6: Stages of the setting with altruistic advisor.

D.2.1 Conflicting preferences

Because the parties share the same prior beliefs, π_I = π_A ≡ π, they hold the same posterior beliefs p(s) = Pr(θ = 1 | s) about the state of the world, conditional on the signal s:

p(1) = πγ / (πγ + (1 − π)(1 − γ)),    p(0) = π(1 − γ) / (π(1 − γ) + (1 − π)γ).

Under perfect information, when the state of the world θ is known, the individual and the advisor prefer different actions because b ≠ 0. This disagreement carries over to the case of imperfect information, when the individual and the advisor have the same beliefs about the state of the world.

We analyze the game backwards. In stage 3, the individual observes the message m, forms his posterior belief Pr(θ = 1 | m), and chooses the action that maximizes his expected payoff:

a_I(m) = argmax_{a ∈ ℝ} E(U_I(a, θ) | m).

Because the loss function is quadratic, the individual's maximization problem has a straightforward solution: a_I(m) = Pr(θ = 1 | m) + b; that is, the individual

optimally chooses the action that matches his expected state of the world plus the bias b. If the individual believes that the advisor reports truthfully at stage 2, i.e., that m(s) = s, then the individual optimally chooses a_I(m) = p(m) + b. In this case, a FRE exists whenever the advisor's incentive compatibility conditions for truthful reporting are satisfied.

Now consider stage 2. For the advisor to report a signal s truthfully, inducing the action a_I(m = s) must generate a greater expected payoff than inducing the action a_I(m ≠ s). First, suppose that the advisor obtained the signal s = 1 in stage 1. In this case, her posterior belief that θ = 1 is p(1), and her posterior belief that θ = 0 is (1 − p(1)). The advisor's expected utility from sending m = 1 and inducing the action a_I(1) = p(1) + b is given by

E(U_A(a_I(1), θ) | s = 1) = −p(1)[(a_I(1) − 1)² + ϕ(a_I(1) − 1 − b)²] − (1 − p(1))[a_I(1)² + ϕ(a_I(1) − b)²].

If the advisor instead lies, i.e., sends the message m = 0, she incurs the cost c and induces the action a_I(0) = p(0) + b < a_I(1). In this case, her expected utility is given by

E(U_A(a_I(0), θ) | s = 1) = −p(1)[(a_I(0) − 1)² + ϕ(a_I(0) − 1 − b)²] − (1 − p(1))[a_I(0)² + ϕ(a_I(0) − b)²] − c.

Hence, after getting the signal s = 1, the advisor prefers to send a truthful message over lying if and only if the following inequality (denoted (TT1_pr), where the subscript pr indicates the conflicting preferences setting) is satisfied:

ϕ ≥ −1 + 2b / (p(1) − p(0)) − c / (p(1) − p(0))².    (TT1_pr)

Second, suppose that the advisor gets the signal s = 0. An analogous analysis leads to the following incentive compatibility condition for truth-telling:

ϕ ≥ −1 − 2b / (p(1) − p(0)) − c / (p(1) − p(0))².    (TT0_pr)

Combining (TT1_pr) and (TT0_pr) yields that altruism improves communication when the parties have different underlying preferences. In particular, truthful communication can be sustained when the level of altruism ϕ is sufficiently high. This result is formally stated in the following proposition:

Proposition 5. For any bias b and lying cost c there exists a threshold ϕ(c, b) s.t. a fully revealing equilibrium exists if and only if ϕ ≥ ϕ(c, b).[40]

Clearly, if b = 0, the individual and the advisor prefer the same action, and truthful communication can be sustained. When the parties have different preferences, however, communication may break down. If so, strengthening the level of altruism eventually restores truthful communication, because altruism mitigates the preference conflict. To gain intuition for the result, consider the action preferred by the advisor conditional on getting the signal s, a_A(s):

a_A(s) = (1/(1 + ϕ)) p(s) + (ϕ/(1 + ϕ)) (p(s) + b).

This is a weighted sum of the preferred action of a non-altruistic advisor, p(s), and the preferred action of the individual, (p(s) + b). For her own sake, the advisor would like the action p(s) to be implemented. Nevertheless, when she cares about the individual, her optimal action also reflects that he is better off with the action (p(s) + b). The stronger the advisor's care for the individual, the more she internalizes the individual's preferences, and the closer is a_A(s) to a_I(s). Clearly, for a high enough level of altruism, truthful communication is attainable.

To better understand the exact form of the truth-telling conditions, consider the cheap-talk case with a zero cost of lying, c = 0. The symmetric quadratic loss function yields that the advisor wants to induce an action as close to her own preferred action, a_A(s), as possible. If the individual's bias is positive, b > 0, then the advisor always reports the signal s = 0 truthfully, because a_A(0) = a_I(0) − b/(1 + ϕ) is closer to a_I(0) than to a_I(1) > a_I(0). If she obtains the signal s = 1, the advisor reports it truthfully if and only if the action a_A(1) is closer to a_I(1) than to a_I(0); that is, if a_I(1) − a_A(1) ≤ a_A(1) − a_I(0). This condition can be rewritten as 2b/(1 + ϕ) ≤ a_I(1) − a_I(0), which is equivalent to (TT1_pr) for c = 0.
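Condition (TT1_pr) can be checked against a brute-force expected-utility comparison. The sketch below is ours: the function names and the parameter values (π = 0.5, γ = 0.8, b = 0.4, c = 0.05) are illustrative. It computes the altruism threshold implied by (TT1_pr) and verifies that the advisor's preference for truth-telling after s = 1 flips exactly there:

```python
def posterior(prior, gamma, s):
    """Pr(theta = 1 | s) for a binary signal of precision gamma."""
    like1 = gamma if s == 1 else 1 - gamma          # Pr(s | theta = 1)
    like0 = (1 - gamma) if s == 1 else gamma        # Pr(s | theta = 0)
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def tt1_threshold(prior, gamma, b, c):
    """Right-hand side of (TT1_pr): -1 + 2b/(p(1)-p(0)) - c/(p(1)-p(0))^2."""
    delta = posterior(prior, gamma, 1) - posterior(prior, gamma, 0)
    return -1 + 2 * b / delta - c / delta ** 2

def prefers_truth_after_s1(prior, gamma, b, c, phi):
    """Direct check: given s = 1, does the advisor weakly prefer inducing
    a_I(1) = p(1)+b over inducing a_I(0) = p(0)+b and paying the lying cost c?"""
    p1 = posterior(prior, gamma, 1)
    p0 = posterior(prior, gamma, 0)
    def exp_u(a, cost):  # advisor's expected utility from induced action a
        loss = p1 * ((a - 1) ** 2 + phi * (a - 1 - b) ** 2) \
             + (1 - p1) * (a ** 2 + phi * (a - b) ** 2)
        return -loss - cost
    return exp_u(p1 + b, 0.0) >= exp_u(p0 + b, c)

phi_bar = tt1_threshold(0.5, 0.8, 0.4, 0.05)
print(round(phi_bar, 4))                                             # 0.1944
print(prefers_truth_after_s1(0.5, 0.8, 0.4, 0.05, phi_bar + 0.01))   # True
print(prefers_truth_after_s1(0.5, 0.8, 0.4, 0.05, phi_bar - 0.01))   # False
```

The brute-force comparison changes sign exactly at the closed-form threshold, which is the content of (TT1_pr).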
[40] This result holds in the general case of state-dependent preferences, i.e., when b(0) ≠ b(1).

Analogously, if the bias is negative, b < 0, the advisor always reports s = 1 truthfully, and reveals s = 0 if and only if the incentive constraint (TT0_pr) is satisfied.

Corollary 1. The threshold ϕ(c, b) decreases with c and increases with b.

The truth-telling threshold has intuitive properties. First, a higher cost of lying c makes truthful reporting more attractive for the advisor. When a FRE

can be sustained for a larger range of altruism levels, the level of altruism necessary to induce truth-telling, ϕ(c, b), decreases. Second, a more severe bias b intensifies the preference conflict, which makes misreporting her signal relatively more attractive to the advisor. Consequently, the level of altruism required to mitigate the preference divergence, ϕ(c, b), increases.

D.2.2 Conflicting opinions

When the individual and the advisor have different opinions, π_I ≠ π_A, they have different posterior beliefs p_i(s) = Pr_i(θ = 1 | s) given the same signal s:

p_i(1) = π_i γ / (π_i γ + (1 − π_i)(1 − γ)),    p_i(0) = π_i (1 − γ) / (π_i (1 − γ) + (1 − π_i)γ),    i ∈ {I, A}.

In terms of preferences over outcomes, the individual is fully aligned with the advisor, i.e., b = 0, which implies that they have the same material payoffs. Their resulting utilities are given by:

U_I(a, θ) = −(a − θ)²,
U_A(a, θ) = −(a − θ)² − c·1{m ≠ s} − ϕ(a − θ)².

Under perfect information about the state of the world, there is no disagreement between the parties; they prefer the same action, a = θ. In contrast, under imperfect information, the individual and the advisor prefer different actions even if they have the same signal s about the state of the world, because they interpret this signal in light of their respective (different) priors. In our framework, information is always imperfect, since the signal s is not fully informative (γ < 1).

We analyze the game backwards. In stage 3, given the individual's belief about the state of the world, Pr_I(θ = 1 | m), he optimally chooses the action that matches this expected state of the world, a_I(m) = Pr_I(θ = 1 | m). If the individual believes that the advisor reports truthfully at stage 2, i.e., that m(s) = s, then the individual chooses the action a_I(m) = p_I(m). The necessary and sufficient conditions for a truth-telling equilibrium to exist thus coincide with the advisor's incentive compatibility conditions for truthful reporting. Now consider stage 2.
First, suppose that the advisor obtained the signal s = 1 in stage 1. In this case, her posterior belief that θ = 1 is p_A(1), and her posterior belief that θ = 0 is (1 − p_A(1)). Given the individual's strategy, sending m = 1 induces the action a_I(1) = p_I(1), while sending m = 0 induces

the action a_I(0) = p_I(0) < a_I(1). The advisor's expected payoff from sending m = 1 is given by

E_A(U_A(a_I(1), θ) | s = 1) = −p_A(1)[(a_I(1) − 1)² + ϕ(a_I(1) − 1)²] − (1 − p_A(1))[a_I(1)² + ϕ a_I(1)²],

where the subscript A on her expectation reflects that the advisor evaluates the expected utility using her posterior about the state of the world, whereas the individual chooses the action that is optimal given his posterior. The advisor's expected payoff from sending m = 0 is given by

E_A(U_A(a_I(0), θ) | s = 1) = −p_A(1)[(a_I(0) − 1)² + ϕ(a_I(0) − 1)²] − (1 − p_A(1))[a_I(0)² + ϕ a_I(0)²] − c.

The advisor's incentive compatibility condition for truthful reporting when s = 1 (denoted (TT1_op), where the subscript op indicates the conflicting opinions setting) is given by the inequality E_A(U_A(a_I(1), θ) | s = 1) ≥ E_A(U_A(a_I(0), θ) | s = 1), which can be rearranged as

(a_I(1) + a_I(0))/2 ≤ p_A(1) + c / (τ(1 + ϕ)),    (TT1_op)

where τ ≡ 2(a_I(1) − a_I(0)). An analogous calculation yields the truth-telling condition when the advisor gets the signal s = 0, and combining these two conditions yields that a fully revealing equilibrium can be sustained if and only if

p_A(0) − c / (τ(1 + ϕ)) ≤ (a_I(1) + a_I(0))/2 ≤ p_A(1) + c / (τ(1 + ϕ)).    (TT_op)

As we shall see, this condition yields that a greater level of altruism worsens the prospects of achieving truthful communication. In particular, if truthful communication can be sustained, raising the level of altruism may eventually destroy it; and if truthful communication cannot be sustained, raising the level of altruism cannot help. Before stating this result formally, we develop its intuition, starting from the special case when c = 0. Then, (TT_op) reduces to

p_A(0) ≤ (a_I(1) + a_I(0))/2 ≤ p_A(1).

Since a_I(1) = p_I(1) and a_I(0) = p_I(0), this condition identifies the set of (π_I, π_A) for which a FRE can be sustained, given γ. Figure 7 below displays this region for a particular value of γ.

The top solid line represents the advisor's truth-telling condition when getting the signal s = 1: she communicates s = 1 truthfully for all (π_A, π_I) below this line. Similarly, she communicates s = 0 truthfully for all (π_A, π_I) above the lower solid line. Hence, a FRE can be sustained in their intersection, which we denote by TT.

The location of TT illustrates that truthful communication requires the priors of the individual and his advisor to be sufficiently similar. Along the 45°-line, their priors are identical; hence, the action that the individual chooses given signal s, a_I(s) = p_I(s), coincides with the action that the advisor desires him to take, a_A(s) = p_A(s). Everywhere else, their priors differ, so a_I(s) ≠ a_A(s). Nevertheless, the advisor prefers to tell the truth so long as a_A(s) is closer to the action that a truthful report induces than it is to the action induced by a false report. To see this formally, we can re-write the right inequality in (TT_op) as a_I(1) − a_A(1) ≤ a_A(1) − a_I(0), using the fact that p_A(1) = a_A(1). The advisor reports the signal s = 1 truthfully so long as the action that she wants the individual to take, a_A(1), is closer to the action that the individual chooses under truthful reporting, a_I(1), than to the action that he chooses if the advisor lies, a_I(0). This obtains because the advisor's expected loss function is monotonic in the distance between the action that she prefers, a_A(1), and the action that the individual takes, a_I(m).

When the priors are so different that the advisor prefers the action that she induces by lying to the action that she induces by telling the truth in some state of the world, the FRE breaks down. For all (π_A, π_I) below the lower solid line, the advisor is considerably more convinced than the individual that the state of the world is θ = 1 (from an ex-ante perspective).
In this case, the advisor would communicate truthfully when getting the signal s = 1; however, when getting the signal s = 0, she would prefer to lie and send the message m = 1. Intuitively, the advisor prefers lying over truth-telling, even though there is no preference conflict, because she believes that the individual is so misinformed about the true prior that the individual will take a worse action if the advisor sends a true message than if she lies. A similar logic applies to the region above the top solid line, where the FRE breaks down because the advisor would report the signal s = 0 truthfully, but lie when s = 1.

The shape of TT does not depend on the advisor's care for the individual, ϕ. Intuitively, this is because altruism does not influence whether the advisor prefers the action that is induced by telling the truth, a_I(m = s), or the action that is induced by lying, a_I(m ≠ s). Rather, the level of altruism influences how much the advisor suffers from the implementation of an action that deviates from her own preferred action, a_A(s). Indeed, when the advisor

cares about the individual, on top of her own material expected loss the advisor also internalizes a share of the disutility that she expects the individual to suffer from his erroneous choice of action.

Figure 7: Truth-telling incentives for different priors (π_A, π_I).

We now consider the case when lying entails a cost, c > 0. In this case, truth-telling can be sustained for a larger set of (π_A, π_I). Formally, this is immediate from the truth-telling condition (TT_op): for all (π_A, π_I) such that the advisor is indifferent between lying and telling the truth when c = 0, i.e., along the boundaries of TT, she strictly prefers to tell the truth when c > 0. Intuitively, when lying entails no cost, the advisor would like to lie whenever she believes that lying induces the individual to take a better action than does telling the truth. However, when the advisor is averse to lying, she weighs the cost of incurring a lie against the cost of inducing the individual to take (what she believes to be) a suboptimal action. Clearly, in the presence of a lying cost, she will be more prone to reveal the true signal.

The above discussion yields that there exists a nonempty set of (π_A, π_I) such that a FRE can be sustained in the presence of lying cost c, but not when c = 0. We refer to this region as T(ϕ, c). The dotted lines in Figure 7 plot the region T(ϕ, c) for ϕ = 0 and c = 0.2. The shape of T(ϕ, c) illustrates that a FRE always exists when the individual's prior is close to zero or one. This is because the benefit of lying is increasing in the distance between a_I(m = s)


More information

Information and Legislative Organization

Information and Legislative Organization Information and Legislative Organization Felix Munoz-Garcia Advanced Microeconomics II - Washington State University A paper by Gilligan and Krehbiel (1987) models the role of parliamentary committees

More information

Sequential Games with Incomplete Information

Sequential Games with Incomplete Information Sequential Games with Incomplete Information Debraj Ray, November 6 For the remaining lectures we return to extensive-form games, but this time we focus on imperfect information, reputation, and signalling

More information

Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules. Supplement

Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules. Supplement Do Shareholders Vote Strategically? Voting Behavior, Proposal Screening, and Majority Rules Supplement Ernst Maug Kristian Rydqvist September 2008 1 Additional Results on the Theory of Strategic Voting

More information

Microeconomic Theory (501b) Problem Set 10. Auctions and Moral Hazard Suggested Solution: Tibor Heumann

Microeconomic Theory (501b) Problem Set 10. Auctions and Moral Hazard Suggested Solution: Tibor Heumann Dirk Bergemann Department of Economics Yale University Microeconomic Theory (50b) Problem Set 0. Auctions and Moral Hazard Suggested Solution: Tibor Heumann 4/5/4 This problem set is due on Tuesday, 4//4..

More information

Competition in Persuasion

Competition in Persuasion Competition in Persuasion Matthew Gentzkow and Emir Kamenica University of Chicago September 2011 Abstract Does competition among persuaders increase the extent of information revealed? We study ex ante

More information

Supplementary Appendix for Informative Cheap Talk in Elections

Supplementary Appendix for Informative Cheap Talk in Elections Supplementary Appendix for Informative Cheap Talk in Elections Navin Kartik Richard Van Weelden October 19, 2017 Abstract This Supplementary Appendix formalizes two extensions of the baseline model discussed

More information

Opting Out in a War of Attrition. Abstract

Opting Out in a War of Attrition. Abstract Opting Out in a War of Attrition Mercedes Adamuz Department of Business, Instituto Tecnológico Autónomo de México and Department of Economics, Universitat Autònoma de Barcelona Abstract This paper analyzes

More information

9 A Class of Dynamic Games of Incomplete Information:

9 A Class of Dynamic Games of Incomplete Information: A Class of Dynamic Games of Incomplete Information: Signalling Games In general, a dynamic game of incomplete information is any extensive form game in which at least one player is uninformed about some

More information

Competition relative to Incentive Functions in Common Agency

Competition relative to Incentive Functions in Common Agency Competition relative to Incentive Functions in Common Agency Seungjin Han May 20, 2011 Abstract In common agency problems, competing principals often incentivize a privately-informed agent s action choice

More information

Abstract We analyze how communication and voting interact when there is uncertainty about players' preferences. We consider two players who vote on fo

Abstract We analyze how communication and voting interact when there is uncertainty about players' preferences. We consider two players who vote on fo Communication and Voting with Double-Sided Information Λ Ulrich Doraszelski Northwestern University y Dino Gerardi Northwestern University z Francesco Squintani University of Rochester x April 200 Λ An

More information

Endogenous Persuasion with Rational Verification

Endogenous Persuasion with Rational Verification Endogenous Persuasion with Rational Verification Mike FELGENHAUER July 16, 2017 Abstract This paper studies a situation in which a sender tries to persuade a receiver with evidence that is generated via

More information

Advertising and Prices as Signals of Quality: Competing Against a Renown Brand

Advertising and Prices as Signals of Quality: Competing Against a Renown Brand Advertising and Prices as Signals of Quality: Competing Against a Renown Brand Francesca Barigozzi and Paolo G. Garella University of Bologna Martin Peitz International University in Germany March 2005

More information

Correlated Equilibrium in Games with Incomplete Information

Correlated Equilibrium in Games with Incomplete Information Correlated Equilibrium in Games with Incomplete Information Dirk Bergemann and Stephen Morris Econometric Society Summer Meeting June 2012 Robust Predictions Agenda game theoretic predictions are very

More information

Mechanism Design: Basic Concepts

Mechanism Design: Basic Concepts Advanced Microeconomic Theory: Economics 521b Spring 2011 Juuso Välimäki Mechanism Design: Basic Concepts The setup is similar to that of a Bayesian game. The ingredients are: 1. Set of players, i {1,

More information

Government 2005: Formal Political Theory I

Government 2005: Formal Political Theory I Government 2005: Formal Political Theory I Lecture 11 Instructor: Tommaso Nannicini Teaching Fellow: Jeremy Bowles Harvard University November 9, 2017 Overview * Today s lecture Dynamic games of incomplete

More information

Entry under an Information-Gathering Monopoly Alex Barrachina* June Abstract

Entry under an Information-Gathering Monopoly Alex Barrachina* June Abstract Entry under an Information-Gathering onopoly Alex Barrachina* June 2016 Abstract The effects of information-gathering activities on a basic entry model with asymmetric information are analyzed. In the

More information

On the Unique D1 Equilibrium in the Stackelberg Model with Asymmetric Information Janssen, M.C.W.; Maasland, E.

On the Unique D1 Equilibrium in the Stackelberg Model with Asymmetric Information Janssen, M.C.W.; Maasland, E. Tilburg University On the Unique D1 Equilibrium in the Stackelberg Model with Asymmetric Information Janssen, M.C.W.; Maasland, E. Publication date: 1997 Link to publication General rights Copyright and

More information

Lecture December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about

Lecture December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about 0368.4170: Cryptography and Game Theory Ran Canetti and Alon Rosen Lecture 7 02 December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about Two-Player zero-sum games (min-max theorem) Mixed

More information

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations?

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Selçuk Özyurt Sabancı University Very early draft. Please do not circulate or cite. Abstract Tactics that bargainers

More information

Decision-Making With Potentially Biased Advisors

Decision-Making With Potentially Biased Advisors Decision-Making With Potentially Biased Advisors Kevin Kupiec April 18, 2011 Professor Giuseppe (Pino) Lopomo, Faculty Advisor Professor Marjorie McElroy, Faculty Advisor Honors Thesis submitted in partial

More information

Sender s Small Concern for Credibility and Receiver s Dilemma

Sender s Small Concern for Credibility and Receiver s Dilemma April 2012 Sender s Small Concern for Credibility and Receiver s Dilemma Hanjoon Michael Jung The Institute of Economics, Academia Sinica Abstract We model a dilemma that receivers face when a sender has

More information

WHEN ARE SIGNALS COMPLEMENTS OR SUBSTITUTES?

WHEN ARE SIGNALS COMPLEMENTS OR SUBSTITUTES? Working Paper 07-25 Departamento de Economía Economic Series 15 Universidad Carlos III de Madrid March 2007 Calle Madrid, 126 28903 Getafe (Spain) Fax (34-91) 6249875 WHEN ARE SIGNALS COMPLEMENTS OR SUBSTITUTES?

More information

Patience and Ultimatum in Bargaining

Patience and Ultimatum in Bargaining Patience and Ultimatum in Bargaining Björn Segendorff Department of Economics Stockholm School of Economics PO Box 6501 SE-113 83STOCKHOLM SWEDEN SSE/EFI Working Paper Series in Economics and Finance No

More information

A Rothschild-Stiglitz approach to Bayesian persuasion

A Rothschild-Stiglitz approach to Bayesian persuasion A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago January 2016 Consider a situation where one person, call him Sender,

More information

A Model of Gossip. Wei Li Massachusetts Institute of Technology. November 14, Abstract

A Model of Gossip. Wei Li Massachusetts Institute of Technology. November 14, Abstract A Model of Gossip Wei Li Massachusetts Institute of Technology November 14, 2002 Abstract This paper analyzes how the gossip process can be manipulated by malicious people and the impact of such manipulation

More information

Costly Expertise. Dino Gerardi and Leeat Yariv yz. Current Version: December, 2007

Costly Expertise. Dino Gerardi and Leeat Yariv yz. Current Version: December, 2007 Costly Expertise Dino Gerardi and Leeat Yariv yz Current Version: December, 007 In many environments expertise is costly. Costs can manifest themselves in numerous ways, ranging from the time that is required

More information

UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer Social learning and bargaining (axiomatic approach)

UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer Social learning and bargaining (axiomatic approach) UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer 2015 Social learning and bargaining (axiomatic approach) Block 4 Jul 31 and Aug 1, 2015 Auction results Herd behavior and

More information

Introduction Persuasion Attribute Puffery Results Conclusion. Persuasive Puffery. Archishman Chakraborty and Rick Harbaugh

Introduction Persuasion Attribute Puffery Results Conclusion. Persuasive Puffery. Archishman Chakraborty and Rick Harbaugh Persuasive Puffery Archishman Chakraborty and Rick Harbaugh 2012 Marketing Science Meetings Puffery Sellers tend to exaggerate World s best hotdogs! That suit looks perfect on you! Our service can t be

More information

Recap Social Choice Functions Fun Game Mechanism Design. Mechanism Design. Lecture 13. Mechanism Design Lecture 13, Slide 1

Recap Social Choice Functions Fun Game Mechanism Design. Mechanism Design. Lecture 13. Mechanism Design Lecture 13, Slide 1 Mechanism Design Lecture 13 Mechanism Design Lecture 13, Slide 1 Lecture Overview 1 Recap 2 Social Choice Functions 3 Fun Game 4 Mechanism Design Mechanism Design Lecture 13, Slide 2 Notation N is the

More information

Using Cheap Talk to Polarize or Unify a Group of Decision Makers

Using Cheap Talk to Polarize or Unify a Group of Decision Makers Using Cheap Talk to Polarize or Unify a Group of Decision Makers Daeyoung Jeong October 30, 015 Abstract This paper develops a model of strategic information transmission from an expert with informational

More information

Disagreement and Evidence Production in Strategic Information Transmission

Disagreement and Evidence Production in Strategic Information Transmission Disagreement and Evidence Production in Strategic Information Transmission Péter Eső and Ádám Galambos April 4, 2012 Abstract We expand Crawford and Sobel s (1982) model of information transmission to

More information

Misinformation. March Abstract

Misinformation. March Abstract Misinformation Li, Hao University of British Columbia & University of Toronto Wei Li University of British Columbia & University of California, Riverside March 2010 Abstract We model political campaigns

More information

Data Abundance and Asset Price Informativeness. On-Line Appendix

Data Abundance and Asset Price Informativeness. On-Line Appendix Data Abundance and Asset Price Informativeness On-Line Appendix Jérôme Dugast Thierry Foucault August 30, 07 This note is the on-line appendix for Data Abundance and Asset Price Informativeness. It contains

More information

Managerial Optimism and Debt Covenants

Managerial Optimism and Debt Covenants Managerial Optimism and Debt Covenants Jakob Infuehr University of Texas at Austin Volker Laux University of Texas at Austin October 26, 2018 Preliminary and Incomplete Abstract: This paper studies the

More information

Lecture Slides - Part 4

Lecture Slides - Part 4 Lecture Slides - Part 4 Bengt Holmstrom MIT February 2, 2016. Bengt Holmstrom (MIT) Lecture Slides - Part 4 February 2, 2016. 1 / 65 Mechanism Design n agents i = 1,..., n agent i has type θ i Θ i which

More information

Strongly Consistent Self-Confirming Equilibrium

Strongly Consistent Self-Confirming Equilibrium Strongly Consistent Self-Confirming Equilibrium YUICHIRO KAMADA 1 Department of Economics, Harvard University, Cambridge, MA 02138 Abstract Fudenberg and Levine (1993a) introduce the notion of self-confirming

More information

Controlling versus enabling Online appendix

Controlling versus enabling Online appendix Controlling versus enabling Online appendix Andrei Hagiu and Julian Wright September, 017 Section 1 shows the sense in which Proposition 1 and in Section 4 of the main paper hold in a much more general

More information

Bargaining with Periodic Participation Costs

Bargaining with Periodic Participation Costs Bargaining with Periodic Participation Costs Emin Karagözoğlu Shiran Rachmilevitch July 4, 017 Abstract We study a bargaining game in which a player needs to pay a fixed cost in the beginning of every

More information

Communication and Voting with Double-Sided Information

Communication and Voting with Double-Sided Information Communication and Voting with Double-Sided Information Ulrich Doraszelski Hoover Institution Dino Gerardi Yale University Francesco Squintani University of Rochester March 2002 An earlier version of this

More information

Monopoly with Resale. Supplementary Material

Monopoly with Resale. Supplementary Material Monopoly with Resale Supplementary Material Giacomo Calzolari Alessandro Pavan October 2006 1 1 Restriction to price offers in the resale ultimatum bargaining game In the model set up, we assume that in

More information

Theory Field Examination Game Theory (209A) Jan Question 1 (duopoly games with imperfect information)

Theory Field Examination Game Theory (209A) Jan Question 1 (duopoly games with imperfect information) Theory Field Examination Game Theory (209A) Jan 200 Good luck!!! Question (duopoly games with imperfect information) Consider a duopoly game in which the inverse demand function is linear where it is positive

More information

A Rothschild-Stiglitz approach to Bayesian persuasion

A Rothschild-Stiglitz approach to Bayesian persuasion A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago December 2015 Abstract Rothschild and Stiglitz (1970) represent random

More information

Suggested solutions to the 6 th seminar, ECON4260

Suggested solutions to the 6 th seminar, ECON4260 1 Suggested solutions to the 6 th seminar, ECON4260 Problem 1 a) What is a public good game? See, for example, Camerer (2003), Fehr and Schmidt (1999) p.836, and/or lecture notes, lecture 1 of Topic 3.

More information

Online Appendix for "Auctions in Markets: Common Outside Options and the Continuation Value Effect" Not intended for publication

Online Appendix for Auctions in Markets: Common Outside Options and the Continuation Value Effect Not intended for publication Online Appendix for "Auctions in Markets: Common Outside Options and the Continuation Value Effect" Not intended for publication Stephan Lauermann Gabor Virag March 19, 2012 1 First-price and second-price

More information

Continuity in Mechanism Design without Transfers 1

Continuity in Mechanism Design without Transfers 1 Continuity in Mechanism Design without Transfers 1 David Martimort and Aggey Semenov 3 This Version: March 16, 006 Abstract: We adopt a mechanism design approach to model communication between a principal

More information

Supplementary Materials for. Forecast Dispersion in Finite-Player Forecasting Games. March 10, 2017

Supplementary Materials for. Forecast Dispersion in Finite-Player Forecasting Games. March 10, 2017 Supplementary Materials for Forecast Dispersion in Finite-Player Forecasting Games Jin Yeub Kim Myungkyu Shim March 10, 017 Abstract In Online Appendix A, we examine the conditions under which dispersion

More information

ENDOGENOUS REPUTATION IN REPEATED GAMES

ENDOGENOUS REPUTATION IN REPEATED GAMES ENDOGENOUS REPUTATION IN REPEATED GAMES PRISCILLA T. Y. MAN Abstract. Reputation is often modelled by a small but positive prior probability that a player is a behavioral type in repeated games. This paper

More information

Political Economy of Transparency

Political Economy of Transparency Political Economy of Transparency Raphael Galvão University of Pennsylvania rgalvao@sas.upenn.edu November 20, 2017 Abstract This paper develops a model where short-term reputation concerns guide the public

More information