Competing Mechanisms: One-Shot versus Repeated Games


Sambuddha Ghosh, Shanghai University of Finance & Economics
Seungjin Han, McMaster University

October 8, 2016

Abstract

This paper studies a game where multiple principals simultaneously offer mechanisms to multiple agents, who may or may not have private information and can send private messages to principals. The game may be infinitely repeated, with the private information being independent over time and across agents. We show that the minmax value under no restrictions on the complexity of the mechanisms can be computed very easily as the solution to a simpler problem involving only actions and direct mechanisms. Furthermore, as players become patient, all payoff vectors that give each principal strictly more than his minmax value can be delivered in a perfect Bayesian equilibrium. Such equilibria involve each agent reporting only her type on the equilibrium path and after a deviation by an agent; following a deviation by a principal, each agent reports her type and an action from the set of actions available to the deviating principal. In contrast to the one-shot game, it is very easy to determine whether an allocation can arise in equilibrium and, if so, to construct the mechanisms that support it (including punitive mechanisms). Our construction makes more complicated mechanisms irrelevant by using agents to neutralise any advantage a principal might have gained from offering complex mechanisms after he deviates.

Acknowledgments to be added.

Contents

1 Introduction 3
2 A general model of competing mechanisms 7
2.1 One-shot game 8
2.2 Repeated game 8
2.3 Social choice functions and direct mechanisms 9
3 One-shot game 10
3.1 Complete information one-shot game 14
3.1.1 The folk theorem for one-shot games of complete information 18
4 Repeated game of complete information 19
5 Repeated game of incomplete information 21
5.1 Incentive compatibility 21
5.2 Lower bound on a player's equilibrium payoff 23
5.2.1 Folk theorem 25
5.3 Comparison: one-shot versus repeated games 28
6 Discussion 29
6.1 Relaxing incentive compatibility 30
6.2 Two agents 33
6.3 Observability 33
6.4 Markov types 34
A Proof of Lemma 2 35
B Proof of Lemma 3 37
C Proof of Theorem 6 38
D Proof of Theorem 7 40
E Proof of Theorem 8 43

1 Introduction

Repeated competing mechanism games are ubiquitous; for example, buyers repeatedly rent a car from one of two competing rental enterprises, and various suppliers repeatedly deliver raw materials to firms who manufacture similar products. Short-term contracts govern the buyer-seller interaction, while a longer-term interaction is ongoing, both among sellers and across the two sides of the market. Unfortunately, for many applications the existing models are too complex without exogenous restrictions on the mechanisms allowed. The root of the problem lies in private information that is endogenous, in the sense of being generated by the equilibrium actions and deviations from them. Because buyers are looking for better deals in the market, they are informed about contracts or terms of trade offered by sellers in the market. A seller thus has an incentive to come up with a sophisticated trading scheme that makes buyers reveal this market information, on which the seller's terms of trade can then depend. Our goal is to identify situations under which relatively simple mechanisms, such as direct mechanisms, can indeed be used without ad hoc stipulations to that effect.

A Bertrand duopoly, with a common constant marginal cost, suffices to illustrate why this introduces significant complexity. In a one-shot pricing game between the duopolists, the monopoly price cannot be supported because sellers have an incentive to cut prices. However, if sellers can offer contracts that match the best price reported by a majority of buyers via private messages,1 they can implicitly collude even in the one-shot setting and charge a price higher than the Bertrand duopoly level; this does not even require buyers to present hard evidence of a competitor's price, such as a price tag.
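The price-matching scheme just described can be sketched numerically. This is a minimal illustration, not the paper's formal mechanism: the prices, the three-buyer majority rule, and the function names are all invented assumptions.

```python
# Sketch (not from the paper): how majority-price-matching sustains high
# prices in a one-shot Bertrand game. All numbers are illustrative.
from collections import Counter

def matched_price(posted, reports):
    """A seller commits to the lower of his posted price and the rival
    price reported by a majority of buyers via private messages."""
    majority_report = Counter(reports).most_common(1)[0][0]
    return min(posted, majority_report)

# On path: both sellers post the monopoly price 10; buyers report truthfully.
p1 = matched_price(10.0, [10.0, 10.0, 10.0])
p2 = matched_price(10.0, [10.0, 10.0, 10.0])
assert p1 == p2 == 10.0  # implicit collusion above the Bertrand level

# Deviation: seller 1 cuts to 6; buyers report 6 to seller 2, who matches.
p2_after = matched_price(10.0, [6.0, 6.0, 6.0])
assert p2_after == 6.0   # the cut is matched, so undercutting gains nothing
```

Because any price cut is matched immediately through the buyers' reports, no seller gains demand by undercutting, which is what sustains the high price.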
This implies that, unlike in the single-principal setting, restricting attention to direct mechanisms and allowing terms of trade to depend only on buyers' payoff types entails a loss of generality when sellers compete in offering selling mechanisms to determine their terms of trade or allocation decision (e.g., McAfee (1993)). While competition by multiple principals over the space of mechanisms is a very natural model, we must first answer a conceptual question: to what class of mechanisms can we restrict attention in such settings? For the one-shot competing mechanism game, Epstein and Peters (1999) propose a class of universal mechanisms that allows agents (e.g., buyers) to describe mechanisms offered by competing principals. However, the message spaces that are used in describing competing principals' mechanisms are quite complex in that they resemble the universal type-space for hierarchies of beliefs, due to the infinite regress problem: my mechanism depends on the competitor's mechanism, his mechanism depends on mine, and so on ad infinitum. Our paper provides a comprehensive study of competing mechanisms, both in the one-shot

1 A seller can use a mechanism that makes his price match the lowest price reported by a majority of buyers when many buyers participate in the mechanism.

and the repeated versions. In the repeated game each agent has a private payoff type, drawn independently across periods and agents. Each principal offers a one-period mechanism that is seen by the agents only. The set of mechanism profiles Γ can include arbitrarily complex mechanisms in terms of the nature of the messages. Each agent sends private messages to the principals, who execute the mechanisms offered at the beginning of the period. For simplicity, mechanisms and actions are assumed to be observable at the end of the period.2

We provide a loose summary of our results. In the game of complete information, we prove all of the following values to be equal: the maxmin and the minmax values defined with respect to the complex mechanisms in Γ, and the maxmin and the minmax values with respect to simple actions (as if there were no mechanisms). When the game is repeated, a folk theorem holds: allocations that give each principal more than the maxmin/minmax above can be sustained in equilibrium with only simple actions when players are patient. When agents have private information about payoff types, we show that the minmax value of principal j equals the maxmin value with respect to complex mechanisms. However, this can be very tractably calculated as the maxmin value of j in a game where j can offer only simple actions and the others can offer only direct mechanisms. Sustaining payoffs above the minmax value with patient players requires mechanisms that ask for only type reports on path, and for the type and an action from principal j's action set when j is being punished.3

Our focus is on the tractability of the equilibrium mechanisms. Direct mechanisms are almost enough when players are patient, even where there is private information and possibly interdependence of utilities. Our approach is constructive.
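For complete information, the equivalence above means the relevant threshold can be computed from action payoffs alone, with the mechanism space playing no role. A minimal sketch; the 2x2 payoff table is an invented example, not from the paper:

```python
# Sketch: the (pure-action) minmax value of a principal, computed from
# action payoffs alone, as if there were no mechanisms.
# u1[(a_1, a_2)] = payoff of principal 1 at the pure action profile.
u1 = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
A1, A2 = (0, 1), (0, 1)

def minmax_value(u_j, own_actions, other_actions):
    """Min over the others' actions of principal j's best-response payoff."""
    return min(max(u_j[(a_j, a_mj)] for a_j in own_actions)
               for a_mj in other_actions)

w1 = minmax_value(u1, A1, A2)
assert w1 == 1  # the rival plays a_2 = 1; principal 1's best response yields 1
```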
It does not amount to simply saying that any equilibrium of the repeated game with complex mechanisms can be supported with simpler mechanisms in the repeated game. Our result is far more useful because we do not need an equilibrium of the complex mechanism game to construct the simple mechanisms. Given any allocation, i.e., a mapping from types to actions, our proof shows how to compute the minmax and check whether the allocation can be supported in equilibrium; if it can, the proof constructs the supporting mechanisms using essentially direct mechanisms.

A more detailed look at our mechanisms. We now elaborate on the results stated earlier. Yamashita (2010) shows that the subclass of recommendation mechanisms is also sufficient to support all (pure-strategy) equilibrium allocations when there are at least three agents: each principal asks agents to suggest which direct mechanism he should offer, and

2 Actually only agents need to observe the actions and mechanisms; see the discussion in Section 6.
3 The action reported by i is her prediction of what would be played by the deviator if all the others were to carry out their roles.

commits to offer the one recommended by a majority of agents. Recommendation mechanisms can be viewed as a natural extension of menus for common agency (Martimort and Stole (2002), Page and Monteiro (2003), and Peters (2001)).4 Just as a menu lets the common agent choose an alternative, a recommendation mechanism lets agents collectively pick a direct mechanism for each principal. It also encodes punishments in the following sense: if any principal deviates from offering the recommendation mechanism, agents recommend the use of a direct mechanism that punishes the deviating principal in the same period.5

Yamashita also shows that the lower bound of each principal's payoff supportable in an equilibrium is the minmax value of his payoff over the set of arbitrarily complex mechanisms Γ allowed in the competing mechanism game. Tying the lower bound to the very set of complex mechanisms makes it impossible to express it in terms of model primitives such as actions and direct mechanisms. Furthermore, this lower bound sits uneasily with the idea that a recommendation mechanism allows a principal to achieve the same effect as making his mechanism contingent on the deviating principal's mechanism, because the agents can observe the deviator's mechanism and then recommend what the non-deviator should do. The right notion of the lower bound then seems to be the maxmin value rather than the minmax. We resolve this paradox by formally proving that the minmax value of a principal's equilibrium payoff equals the maxmin value if the set of mechanisms Γ allowed in the game is rich enough to include recommendation mechanisms. This equivalence is independent of whether or not random actions are allowed; rather, it relies on the fact that a principal's recommendation mechanism induces the same effect as changing his mechanism to track the deviator's mechanism. This equivalence result plays a crucial role in comparing the lower bounds in the one-shot game with those in the repeated game.
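The majority-recommendation logic can be sketched as follows. The DM library, its names, and the action labels are invented placeholders; only the voting structure reflects the mechanism described above.

```python
# Sketch of a recommendation mechanism in the style of Yamashita (2010):
# each agent recommends a direct mechanism for the principal, who commits
# to the one recommended by a majority. DMs here are toy callables from
# type reports to an action label; all names are illustrative.
from collections import Counter

def recommendation_mechanism(recommendations, type_reports, dm_library):
    """Run the majority-recommended DM on the agents' type reports."""
    dm_name = Counter(recommendations).most_common(1)[0][0]
    return dm_library[dm_name](type_reports)

dm_library = {
    "cooperate": lambda types: "high_trade",
    "punish_2":  lambda types: "low_trade",  # punishes a deviating rival
}

# With three or more agents, one agent's deviant recommendation is outvoted.
out = recommendation_mechanism(["cooperate", "cooperate", "punish_2"],
                               ("t1", "t2", "t3"), dm_library)
assert out == "high_trade"
```

This is why three or more agents matter: a single agent's deviation from the common recommendation cannot change the majority outcome.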
The repeated game with patient players provides a stark contrast to the one-shot game. First, simple mechanisms very close to direct mechanisms (DMs) can mete out all punishments. Second, and perhaps more importantly, this characterization expresses the lower bound on a principal's equilibrium payoffs in terms of model primitives such as actions and incentive compatible DMs. The difficulty is simplifying off-path punishments in such settings. Suppose that principal 1 must punish principal 2 for having deviated. We instruct the agents to always induce a fixed action from the deviating principal, one that depends only on the mechanism that 2 offers (and on nothing else). If agents do not do so and an unexpected action is taken by the deviating principal, it can be inferred that at least one agent has deviated. We show that even when the identity of the deviating agent is not known, principals can punish every agent with equal probability, based on player-specific punishments (Fudenberg and Maskin 1986,

4 The rich applications of common agency (Martimort 2006) can be extended to multiple agency (Prat and Rustichini 2003). Han (2006) extends the menu theorem to bilateral contracting where multiple principals negotiate contracts with multiple agents independently.
5 The significance of having three or more agents in Yamashita (2010) is that one agent's deviation from the common recommendation has no effect.

Abreu, Dutta and Smith 1994). This deters agents' deviations. It means that in the repeated game the deviating principal cannot do any better by offering an arbitrary mechanism off the path following his deviation than he could by offering a single action. When the deviating principal cannot use very complicated best responses, simple mechanisms suffice to mete out punishments.

In addition, we show that a weaker notion of incentive compatibility can be applied in the repeated game. Given the profile of DMs, the action profile determined by the DMs carries information, to a certain extent, about the type messages sent by agents. As long as her untruthful type messages reveal her as the only deviating agent with positive probability, an agent will not send such untruthful type messages. Therefore, we do not need to impose incentive compatibility over such profiles of type messages. In contrast to the one-shot game, we can express the lower bound of principal j's equilibrium payoff in the repeated game in terms of model primitives: it is equal to the maxmin value of j's payoff in terms of j's actions and the other principals' incentive compatible DMs conditional on j's action. This reduces the lower bound as we go from the one-shot to the repeated game. The weaker notion of incentive compatibility can also be applied to the equilibrium allocations. Combining it with the reduced lower bound implies that the repeated game supports more allocations in equilibrium. When players are patient, principals can support an allocation that yields principals payoffs above their lower bounds by offering only DMs on the equilibrium path, and action-reporting DMs (ADMs) off the path following a principal's deviation: ADMs ask agents to report the action they are inducing from the deviating principal, along with their types.
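The report structure of an ADM can be sketched as follows. The punishment DM, the type and action labels, and the majority aggregation are invented placeholders; no claim is made about the paper's exact off-path construction.

```python
# Sketch of an action-reporting DM (ADM), used only off path after a
# principal deviates: each agent reports her type together with the action
# she is inducing from the deviator. All primitives here are invented.
from collections import Counter

def adm(reports, punish_dm, deviator_actions):
    """reports: list of (type, predicted_deviator_action) pairs.
    The DM part conditions on types; the action reports coordinate the
    punishment on the deviator's anticipated play."""
    types = tuple(t for t, _ in reports)
    predicted = Counter(a for _, a in reports).most_common(1)[0][0]
    assert predicted in deviator_actions
    return punish_dm[types](predicted)

# Toy punishment DM: play "low" regardless of the deviator's predicted action.
punish_dm = {("t1", "t2", "t3"): lambda pred: ("low", pred)}
action, pred = adm([("t1", "a"), ("t2", "a"), ("t3", "a")],
                   punish_dm, deviator_actions={"a", "b"})
assert action == "low" and pred == "a"
```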
Our equilibrium characterization in the repeated game with incomplete information can include more equilibrium allocations if we can further relax incentive compatibility. As shown in Section 6, incentive compatibility can indeed be further relaxed on the equilibrium path, so that it need not be imposed when principals know that at least one agent has deviated but not the identity of the deviating agent; in this case, principals punish every agent with positive probability.6 However, punishing every agent with positive probability (e.g., equal probability) does not work off the path. Suppose that an agent deviates by lying in the final phase after she was punished. In this final phase, all other players except her get rewards. When the identity of the deviating agent is not known, every agent is punished with positive probability. This means that there is positive probability that another agent is punished while the real deviator goes free, participates in punishing an innocent agent, and is rewarded forever afterwards. A positive probability of such a reward may suffice to trigger a deviation in the final phase if the agent is sufficiently patient. This may in turn make an agent deviate on the path, since the deviation in the final phase can increase her expected lifetime average

6 Section 6 also discusses how ex-ante incentive compatibility can be imposed on the path, based on statistical checks on messages over periods (Jackson and Sonnenschein (2007), Renault, Solan, and Vieille (2013)).

payoff even when her deviation on the path is detected. Therefore, incentive compatibility is tighter off the path than it is on the path.

Closest to the ADMs proposed in our paper are perhaps the revelation mechanisms that Pavan and Calzolari (2010) proposed for the one-shot common agency game. An incentive compatible revelation mechanism asks the common agent to report the decisions made by all the other principals along with her payoff type. They show that revelation mechanisms do not provide a characterization of equilibrium allocations in the one-shot common agency game but are useful in deriving equilibria in applications. The ADMs in our paper are simpler because agents do not need to report all the actions taken by other principals, only the deviator's action. ADMs do not impose incentive compatibility over agents' reports on the deviator's action. Importantly, ADMs are never used in an equilibrium of the repeated game because they are required only off the path; direct mechanisms are sufficient on the path.

The one-shot competing mechanism game inherently cannot pin down the lower bound of a principal's equilibrium payoff in terms of model primitives, even with the equivalence between the minmax and maxmin values that we establish in this paper. Recently, Peters and Troncoso-Valverde (2013) show that any allocation that is implementable by a single mechanism designer can be implemented in a decentralised equilibrium when all players are able to offer mechanisms to one another and a second round of communication is available to cross-check messages sent in the first round. However, they adopt Bayesian equilibrium as the solution concept instead of perfect Bayesian equilibrium, and the lower bound of a player's equilibrium payoff is still not characterized for the case of incomplete information.

2 A general model of competing mechanisms

We first describe the underlying one-shot game.
The sets of principals and agents7 are, respectively, $\mathcal{J} := \{1, \ldots, J\}$ and $\mathcal{I} := \{J+1, \ldots, J+I\}$, with $J \geq 2$ and $I \geq 3$. Each agent $i$ is privately informed about her type $\theta_i$, which is drawn from a finite set $\Theta_i$ according to a distribution $\mu_i$; the profile of types $\theta = (\theta_{J+1}, \ldots, \theta_{J+I})$ is drawn from $\Theta := \prod_i \Theta_i$ according to the joint distribution $\mu = \prod_i \mu_i$. Each principal $j$ makes a decision $a_j$ (henceforth referred to as an action) from a finite8 set $A_j$; a random action of principal $j$ is denoted by $\alpha_j \in \mathcal{A}_j := \Delta(A_j)$. A profile of pure actions is $a = (a_1, \ldots, a_J) \in A := \prod_{j \in \mathcal{J}} A_j$, while a profile of random actions is $\alpha \in \mathcal{A} := \prod_{j \in \mathcal{J}} \mathcal{A}_j$, and $\mathcal{A}_{-j} := \prod_{k \neq j} \mathcal{A}_k$. The vN-M (von Neumann-Morgenstern) expected utility function for player $l$ (principal or agent) is $u_l : A \times \Theta \to \mathbb{R}$; payoffs are uniformly bounded by $\bar{u} < \infty$, i.e. $|u_l(\alpha, \theta)| < \bar{u}$ for all $\alpha \in \mathcal{A}$, all $l \in \mathcal{I} \cup \mathcal{J}$, and all $\theta \in \Theta$. All this

7 To avoid confusion, we use masculine pronouns for principals and feminine pronouns for agents.
8 Finiteness of the type and action spaces is not critical for our results, but such assumptions are usually made in the literature. With a modicum of technicalities we can deal with a compact set of actions and a countable type-space.

information is encapsulated in the underlying game:

$$G := (\mathcal{J}; \mathcal{I}; (A_j)_{j \in \mathcal{J}}; (\Theta_i)_{i \in \mathcal{I}}; (\mu_i)_{i \in \mathcal{I}}; (u_l)_{l \in \mathcal{J} \cup \mathcal{I}}). \qquad (1)$$

The underlying game can be thought of as the simplest game where each principal's strategy space is the set of random actions.

2.1 One-shot game

Fix an underlying game $G$ as in (1), and for each $j \in \mathcal{J}$ fix a collection of compact sets $\{M_{ij} \mid i \in \mathcal{I}\}$ and a set $\Gamma_j$ which comprises continuous mappings $\gamma_j$ from $M_j := \prod_{i \in \mathcal{I}} M_{ij}$ to $\mathcal{A}_j$. We sometimes refer to these as complex mechanisms to highlight the fact that they may allow very general communication; for example, they may allow agents to report not only their own types, but also what mechanisms were offered by the other principals.

The one-shot competing mechanism game $(G, \Gamma)$ is the game with the following timing of moves:

1. Each principal $j$ simultaneously offers a mechanism $\gamma_j$ from $\Gamma_j$.

2. After observing the profile of mechanisms $\gamma = (\gamma_1, \ldots, \gamma_J)$ offered from $\Gamma := \prod_j \Gamma_j$, each agent sends private messages, one to each principal, without observing others' messages; agent $i$'s message to $j$ is $m_{ij} \in M_{ij}$.

3. A principal's action, which may be random, is determined by his mechanism, given the messages he receives, so that principal $j$ takes action $\gamma_j(m_j) \in \mathcal{A}_j$ when he receives the profile of messages $m_j := (m_{ij})_{i \in \mathcal{I}} \in M_j$.

4. Finally, each player $l \in \mathcal{I} \cup \mathcal{J}$ earns the payoff $u_l(\gamma_1(m_1), \ldots, \gamma_J(m_J), \theta)$.

The following assumption is maintained throughout: messages from agent $i$ to principal $j$ are private, i.e. they are not observable by other players, principals or agents.

2.2 Repeated game

We now describe the infinitely repeated game $(G, \Gamma)(\delta)$. It involves playing the competing mechanism stage-game $(G, \Gamma)$ at each time $t \geq 1$, with a common discount factor $\delta \in (0, 1)$ across periods, the only restriction being that mechanism profiles must be drawn from $\Gamma$.
At the start of each period, each agent $i$'s type is independently drawn from the full-support distribution $\mu_i$, so that the type profile is drawn from $\mu := \prod_i \mu_i$ (see Section 6.4 for a discussion of Markov types). All players observe the draw of a public correlation device (PCD) before principals

offer their mechanisms. The PCD makes it possible to correlate actions.9 We adopt the following notational convention: if $\kappa$ is a variable in the stage game, we denote its period $t$ value by $\kappa^t$, with the understanding that $t$ is a superscript and not an exponent. Each principal can offer a mechanism $\gamma_j^t \in \Gamma_j$ that maps into actions, where the mechanism may depend on the value of the PCD; as described earlier, we allow each principal $j$ to commit to random actions in $\mathcal{A}_j$, as this seems to demand no more commitment power than commitment to pure actions, and is actually used in settings where a prize is randomly awarded to one of the winners of a contest. Let $\alpha^t \in \mathcal{A}$ be the action profile at time $t$. Starting with the null history $h^0$, a $t$-period history $h^t$ is constructed from the $(t-1)$-period history according to the formula $h^t = h^{t-1} \cdot (\gamma^t, \alpha^t)$, where $\cdot$ denotes concatenation. At the end of period $t$, both agents and principals observe the history $h^t$. Thus, mechanisms offered and actions chosen are assumed to be public at the end of each period; while this may be regarded as too strong an assumption, our results would go through substantively unchanged even if principals never observed the others' mechanisms or actions (see Section 6.3). The only substantive assumption is that agents learn the actions taken.10

The (average) discounted payoff of player $l \in \mathcal{J} \cup \mathcal{I}$ from period $\tau$ onwards is $(1-\delta) \sum_{t \geq \tau} \delta^{t-\tau} u_l(\alpha^t, \theta^t)$, where $\alpha^t$ is the action profile taken at time $t$. We employ the standard notion of PBE (perfect Bayesian equilibrium) as the equilibrium solution concept. In particular, it imposes sequential rationality and Bayes' rule wherever possible;11 furthermore, all players $l \neq n$ share a common belief about player $n$, and a player cannot signal what he does not know (see Fudenberg and Tirole, 1991).
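The average discounted payoff above can be checked numerically. The payoff stream is an invented example, and a long finite horizon stands in for the infinite sum.

```python
# Sketch: the average discounted payoff
# (1 - delta) * sum_{t >= tau} delta^(t - tau) * u_l(alpha^t, theta^t),
# evaluated on an illustrative finite stream of stage payoffs.
def avg_discounted(stage_payoffs, delta, tau=0):
    return (1 - delta) * sum(delta ** (t - tau) * u
                             for t, u in enumerate(stage_payoffs[tau:], start=tau))

# A constant stream of 5.0 has average discounted value (close to) 5.0,
# which is why these payoffs are read on the same scale as stage payoffs.
stream = [5.0] * 2000
assert abs(avg_discounted(stream, delta=0.9) - 5.0) < 1e-6
```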
9 While public correlation is normally assumed in repeated games, it is not trivial to show when it may be dispensed with (see Fudenberg and Maskin 1991); however, in our setting we can use messages from the agents to generate correlation devices through the mechanisms. We adopt the simpler formulation with an exogenous correlation device.
10 Agents learn the mixed actions taken. While this is non-standard in all but the earliest papers on repeated games, it seems less objectionable in the mechanism design literature, as principals can commit to randomize in an observable way. Such observability of actions plays a critical role in, say, Ortner and Chassang (2015).
11 In other words, even if the equilibrium strategies make an information set $h$ off path, the beliefs at $h$ are determined by applying Bayes' rule conditional on $h$ being reached, whenever this conditioning is well defined.

2.3 Social choice functions and direct mechanisms

An allocation or social choice function is a mapping $f : \Theta \to \Delta(A)$ from type profiles to (possibly correlated) probability distributions over actions. The set of (correlated) stage SCFs is denoted by $F$. A subclass of SCFs is obtained by independently mixing actions by principals:

$$F_0 := \{f \in F \mid f(\theta) \in \mathcal{A}_1 \times \cdots \times \mathcal{A}_J =: \mathcal{A}\}. \qquad (2)$$

Note that $F_0 \subseteq F$.
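Membership in $F_0$ versus $F$ is a question of whether the joint distribution over action profiles factorizes into independent per-principal randomizations. A minimal two-principal sketch with invented action sets (the two-principal product is hard-coded for brevity):

```python
# Sketch: testing whether a stage SCF value f(theta), given as a joint
# distribution over action profiles, lies in F_0, i.e. is a product of
# independent randomizations, one per principal. Two principals only.
import itertools

def factorizes(joint, action_sets, tol=1e-9):
    """joint: dict mapping action profiles (tuples) to probabilities."""
    marginals = [{a: sum(p for prof, p in joint.items() if prof[j] == a)
                  for a in A_j}
                 for j, A_j in enumerate(action_sets)]
    return all(abs(joint.get(prof, 0.0) -
                   marginals[0][prof[0]] * marginals[1][prof[1]]) < tol
               for prof in itertools.product(*action_sets))

A1, A2 = ("a", "b"), ("x", "y")
independent = {("a", "x"): 0.25, ("a", "y"): 0.25,
               ("b", "x"): 0.25, ("b", "y"): 0.25}
correlated = {("a", "x"): 0.5, ("b", "y"): 0.5}   # requires a PCD
assert factorizes(independent, (A1, A2))          # in F_0
assert not factorizes(correlated, (A1, A2))       # in F but not F_0
```

The correlated example is exactly the kind of distribution that a PCD makes available beyond $F_0$.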

An SCF $f$ in $F$ can be implemented when principals implement a profile of incentive compatible direct mechanisms (DMs) conditional on the realization of the PCD. Given $f \in F$, incentive compatible DMs can be formulated in the following way. First, all players observe the draw $\omega$ of a random variable distributed uniformly on the interval $[0,1)$, whereupon principal $j$ is expected to select a DM $\pi_j(\cdot, \omega) : \Theta \to \mathcal{A}_j$. Note that we do not force all the uncertainty to be resolved by the PCD; it is only required that the residual randomness is the product of independent random actions conditional on truthful reports. Henceforth, we denote principal $j$'s DM simply by $\pi_j : \Theta \to \mathcal{A}_j$. Let $\Pi_j$ be the set of all DMs available to principal $j$, the nature of which depends on the application under consideration; let $\Pi := \prod_{j \in \mathcal{J}} \Pi_j$. An uncorrelated SCF $f$ in $F_0$ thus consists of DMs, one for each principal.

3 One-shot game

Yamashita (2010) considers the one-shot competing mechanism game $(G, \Gamma)$ with deterministic actions and no PCD. Let us formulate incentive compatibility in the one-shot game. For any given profile of DMs $\pi = (\pi_1, \ldots, \pi_J)$, the expected payoff of agent $i$ of type $\theta_i$ who reports $\theta_{ij}$ to principal $j$, when the other agents report truthfully, is

$$E_{\mu_{-i}}[u_i(\pi_1(\theta_{i1}, \theta_{-i}), \ldots, \pi_J(\theta_{iJ}, \theta_{-i}), \theta_i, \theta_{-i})],$$

where $E_{\mu_{-i}}$ is the expectation operator with respect to the probability distribution $\mu_{-i}$ over $\Theta_{-i}$. A profile of DMs $\pi = (\pi_1, \ldots, \pi_J)$ is unconstrained incentive compatible (UIC) if for all $i \in \mathcal{I}$ and all $\theta = (\theta_i, \theta_{-i}) \in \Theta$ we have

$$E_{\mu_{-i}}[u_i(\pi(\theta), \theta)] \geq E_{\mu_{-i}}[u_i(\pi_1(\theta_{i1}, \theta_{-i}), \ldots, \pi_J(\theta_{iJ}, \theta_{-i}), \theta)] \quad \forall (\theta_{i1}, \ldots, \theta_{iJ}) \in (\Theta_i)^J. \qquad (3)$$

It requires that each agent's expected payoff from truth telling be no less than that from any possible type reporting.12 A profile of mechanisms $\gamma = (\gamma_1, \ldots, \gamma_J) \in \Gamma$ leads to a continuation game where agents send private messages to principals.
The profile of private messages sent by agent $i$ following communication strategy $s_i : \Gamma \times \Theta_i \to M_{i1} \times \cdots \times M_{iJ}$ when her type is drawn to be $\theta_i$ is $s_i(\gamma, \theta_i) = (s_{i1}(\gamma, \theta_i), \ldots, s_{iJ}(\gamma, \theta_i)) \in M_{i1} \times \cdots \times M_{iJ}$. Let $S_i$ be the set of all communication strategies of agent $i$. These messages induce actions from the mechanisms.

Footnote 12: This definition applies almost immediately to an SCF $f = (f_1, \ldots, f_J)$ in $F_0$ because $f \in F_0$ is a vector of DMs.
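The UIC condition in (3) can be checked mechanically for finite examples. The sketch below is entirely hypothetical (type space, mechanisms, and payoffs are invented for illustration; one agent is used so the expectation over other agents' types is trivial): the key feature it captures is that the agent may report a different type to each principal, and truth telling must beat every such vector of reports.

```python
from itertools import product

# Hypothetical finite example: check unconstrained incentive compatibility
# (UIC) of a profile of direct mechanisms. The agent can send a separate
# report to each principal; UIC compares truth telling against all report
# vectors, not just common misreports.

THETA = ["L", "H"]  # the agent's type space (single agent for brevity)

# Two principals' direct mechanisms: report -> action (hypothetical)
pi = [
    {"L": "x", "H": "y"},   # principal 1
    {"L": "p", "H": "q"},   # principal 2
]

# Agent's utility from the action pair given her true type (hypothetical)
u = {
    ("x", "p", "L"): 3, ("x", "q", "L"): 1, ("y", "p", "L"): 2, ("y", "q", "L"): 0,
    ("x", "p", "H"): 0, ("x", "q", "H"): 1, ("y", "p", "H"): 2, ("y", "q", "H"): 4,
}

def is_uic(pi, u, types):
    for theta in types:                           # true type
        truthful = u[(pi[0][theta], pi[1][theta], theta)]
        for r1, r2 in product(types, repeat=2):   # one report per principal
            if u[(pi[0][r1], pi[1][r2], theta)] > truthful:
                return False
    return True

print(is_uic(pi, u, THETA))  # → True
```

Note that the loop over `product(types, repeat=2)` ranges over $(\Theta_i)^J$, matching the quantifier in (3); checking only identical reports to all principals would be strictly weaker.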

If $\gamma$ is an equilibrium mechanism profile of the one-shot game, it must be the case that for all agents $i$, all types $\theta_i \in \Theta_i$, and all mappings $s'_i \in S_i$ the following inequality holds:

$E_{\mu_{-i}} u_i\big( [\gamma_j(\{s_{lj}(\gamma, \theta_l)\}_{l \in I})]_j \big) \geq E_{\mu_{-i}} u_i\big( [\gamma_j(\{s_{lj}(\gamma, \theta_l)\}_{l \in I, l \neq i}, s'_{ij}(\gamma, \theta_i))]_j \big)$.

Now if we let $(s_{lj}(\gamma, \theta_l))_{l \in I} = m_j(\theta)$ and define a new function $\pi_j := \gamma_j \circ m_j$, the above condition reduces to (3): an equilibrium $\gamma$ must induce a profile of DMs that satisfies UIC. Let $\Pi^U(\gamma)$ denote the set of all profiles of UIC DMs that can be induced by all (pure-strategy) continuation equilibria of the one-shot game at $\gamma \in \Gamma$. Let $\Pi^U$ denote the set of all profiles of UIC DMs. The payoff of principal $j$ in the worst continuation equilibrium at $\gamma$ is then

$u_j(\gamma) := \min_{\pi \in \Pi^U(\gamma)} E_\mu[u_j(\pi(\theta), \theta)]$. (4)

Yamashita (2010) shows that there exists a threshold payoff, denoted by $w_j^1$, for each principal $j$ such that an SCF $f$ is implemented in an equilibrium if and only if it is incentive compatible (i.e., UIC) and gives each principal $j$ no less than $w_j^1$. Theorem 1 (p. 796, Yamashita 2010) also shows that if the recommendation mechanism $\gamma_l^R$ is in $\Gamma_l$ for all $l \in J$, then this threshold is the minmax value over the space of complex mechanisms:

$w_j^1 = \min_{\gamma_{-j} \in \Gamma_{-j}} \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j})$, (5)

where the superscript refers to the one-shot game. For the case with no PCD, an SCF $f$ is WIR (weakly individually rational) for principals if

$E_\mu[u_j(f(\theta), \theta)] \geq w_j^1$ for all $j \in J$. (6)

Recall that $F_0$ is the subclass of SCFs that involve independently mixed actions by principals. The smaller class of SCFs that also satisfy UIC is denoted by $F_0^U$. Theorem 1 in Yamashita (2010) shows that the set of SCFs supported in (pure-strategy) equilibria of the one-shot game with no PCD is

$F_0^1(\mu) := \{f \in F_0^U : f \text{ is WIR for all } j \in J\}$. (7)

If a PCD is allowed, the set of (correlated) SCFs supported in equilibria of the one-shot game $(G, \Gamma)$ is the convex hull:

$F^1(\mu) := \mathrm{co}\big(F_0^1(\mu)\big)$. (8)

We now show that the minmax value and the maxmin value, both defined in terms of complex mechanisms, are equal. To better understand this, let us start with an example.

Example 1. Suppose there are two principals and three or more agents, and eight profiles of

UIC DMs:

$\Pi^U = \{\pi^a, \pi^b, \pi^c, \pi^d, \pi^e, \pi^f, \pi^g, \pi^h\}$, where $\pi^k = (\pi_1^k, \pi_2^k)$.

Consider a competing mechanism game where each principal $j$ can offer mechanisms from $\Gamma_j = \{\gamma_j, \gamma'_j, \gamma_j^R\}$, which includes the recommendation mechanism. For each profile of mechanisms $(\gamma_1, \gamma_2) \in \Gamma_1 \times \Gamma_2$, Table 1 specifies all UIC profiles of DMs that can be induced in a continuation equilibrium. We will derive principal 2's minmax and maxmin payoff values, using the worst continuation equilibrium. Suppose that principal 2's preferences over the profiles of DMs follow the alphabetical order of the profiles, from best to worst. Principal 2's payoff in the worst continuation equilibrium in a cell is then given by the alphabetically last profile in that cell.

                  $\gamma_2$                          $\gamma'_2$                         $\gamma_2^R$
$\gamma_1$        $\pi^a$, $\pi^c$                    $\pi^f$                             $\pi^a$, $\pi^c$, $\pi^f$
$\gamma'_1$       $\pi^g$                             $\pi^e$                             $\pi^b$, $\pi^e$, $\pi^g$
$\gamma_1^R$      $\pi^a$, $\pi^c$, $\pi^g$           $\pi^d$, $\pi^e$, $\pi^f$           $\pi^a$, $\pi^b$, $\pi^c$, $\pi^d$, $\pi^e$, $\pi^f$, $\pi^g$, $\pi^h$

Table 1: Profiles of DMs induced in continuation equilibria

Any profile of DMs $\pi^k = (\pi_1^k, \pi_2^k)$ satisfying UIC can be supported in equilibrium if both principals offer the recommendation mechanisms $(\gamma_1^R, \gamma_2^R)$, by agents recommending $\pi_j^k$ to principal $j$ along with their type reports. With three or more agents, a unilateral deviation by any agent from recommending $\pi_j^k$ cannot make any principal $j$ implement something other than $\pi_j^k$. A key point is that any UIC profile of DMs that is induced in a continuation equilibrium at any $(\gamma_k, \gamma_j)$ can also be induced in a continuation equilibrium at $(\gamma_k^R, \gamma_j)$, where one principal switches to the recommendation mechanism. To see this, note that the profile $\pi^f$ is induced in a continuation equilibrium following $(\gamma_1, \gamma'_2)$. Then $\pi^f$ can be induced in a continuation equilibrium following $(\gamma_1^R, \gamma'_2)$, where principal 1 offers the recommendation mechanism $\gamma_1^R$ whereas principal 2 continues to offer $\gamma'_2$, if all agents simply recommend $\pi_1^f$ to principal 1 and keep the same communication strategy for principal 2 as in the continuation game following $(\gamma_1, \gamma'_2)$.
Thus each cell in the last row (column) contains all DM profiles that appear in the earlier rows (columns). Also note that inducing some profiles, such as $\pi^d$, might require the use of recommendation mechanisms when the set $\Gamma$ is restricted.

Consider principal 2's maxmin payoff value. From the table, it is easy to check that

$\gamma_1^R \in \arg\min_{\gamma_1 \in \Gamma_1} u_2(\gamma_1, \gamma_2)$ for all $\gamma_2 \in \Gamma_2$. (9)

Footnote 13: For simplicity we leave out other DMs and complex mechanisms, such as those with mixing.
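The computation behind Example 1 can be replicated numerically. The sketch below is a hypothetical encoding of Table 1: principal 2's preferences follow alphabetical order, so we assign illustrative scores $a = 8$ down to $h = 1$ (these numbers are invented; only their ordering matters) and take the worst-continuation payoff in each cell as the score of the alphabetically last profile the cell contains.

```python
# Hypothetical scores consistent with alphabetical preferences: a best, h worst.
score = {p: 8 - i for i, p in enumerate("abcdefgh")}

# Table 1: rows are principal 1's mechanisms, columns principal 2's.
table = {
    ("g1",  "g2"):  {"a", "c"},
    ("g1",  "g2'"): {"f"},
    ("g1",  "g2R"): {"a", "c", "f"},
    ("g1'", "g2"):  {"g"},
    ("g1'", "g2'"): {"e"},
    ("g1'", "g2R"): {"b", "e", "g"},
    ("g1R", "g2"):  {"a", "c", "g"},
    ("g1R", "g2'"): {"d", "e", "f"},
    ("g1R", "g2R"): set("abcdefgh"),
}
G1 = ["g1", "g1'", "g1R"]
G2 = ["g2", "g2'", "g2R"]

def worst(cell):
    """Principal 2's worst-continuation payoff: alphabetically last profile."""
    return min(score[p] for p in cell)

maxmin = max(min(worst(table[(g1, g2)]) for g1 in G1) for g2 in G2)
minmax = min(max(worst(table[(g1, g2)]) for g2 in G2) for g1 in G1)
assert maxmin == minmax == score["f"]   # both equal the payoff of profile f
# Check (9): g1R minimizes principal 2's payoff column by column.
for g2 in G2:
    assert worst(table[("g1R", g2)]) == min(worst(table[(g1, g2)]) for g1 in G1)
```

Under these illustrative scores the maxmin and minmax both equal the payoff of $\pi^f$, and the recommendation mechanism $\gamma_1^R$ attains the column-wise minimum everywhere, as (9) asserts.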

This implies that principal 1's recommendation mechanism achieves principal 2's maxmin value:

$\max_{\gamma_2 \in \Gamma_2} \min_{\gamma_1 \in \Gamma_1} u_2(\gamma_1, \gamma_2) = \max_{\gamma_2 \in \Gamma_2} u_2(\gamma_1^R, \gamma_2) = E_\mu\big[u_2(\pi^f(\theta), \theta)\big]$.

Similarly, it is easy to see that principal 2's minmax value can be attained when principal 1 offers the recommendation mechanism:

$\min_{\gamma_1 \in \Gamma_1} \max_{\gamma_2 \in \Gamma_2} u_2(\gamma_1, \gamma_2) = \max_{\gamma_2 \in \Gamma_2} u_2(\gamma_1^R, \gamma_2) = E_\mu\big[u_2(\pi^f(\theta), \theta)\big]$.

Therefore, the maxmin value equals the minmax value. The above illustrates a general equivalence result proved below.

Theorem 1 If each principal's set of permissible complex mechanisms includes the recommendation mechanism, then for all $j \in J$ we have

$\max_{\gamma_j \in \Gamma_j} \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\gamma_j, \gamma_{-j}) = \min_{\gamma_{-j} \in \Gamma_{-j}} \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}) =: w_j^1$. (10)

Proof. Fix any $\gamma_j \in \Gamma_j$. Any profile of DMs induced in a continuation equilibrium at $(\gamma_j, \gamma_{-j})$ can be induced in a continuation equilibrium at $(\gamma_j, \gamma_{-j}^R)$ for all $\gamma_{-j} \in \Gamma_{-j}$. This implies that

$u_j(\gamma_j, \gamma_{-j}^R) \leq u_j(\gamma_j, \gamma_{-j})$ for all $\gamma_j \in \Gamma_j$ and all $\gamma_{-j} \in \Gamma_{-j}$, (11)

which can be restated as

$u_j(\gamma_j, \gamma_{-j}^R) = \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\gamma_j, \gamma_{-j})$ for all $\gamma_j \in \Gamma_j$.

Taking the maximum over $\Gamma_j$:

$\max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^R) = \max_{\gamma_j \in \Gamma_j} \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\gamma_j, \gamma_{-j})$. (12)

This means that, regardless of principal $j$'s mechanism, the other principals can induce principal $j$'s minimum payoff given $j$'s mechanism by offering the recommendation mechanisms. On the other hand, let $\gamma_{-j}^*$ be the profile of the other principals' mechanisms that attains $j$'s minmax value:

$\max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^*) = \min_{\gamma_{-j} \in \Gamma_{-j}} \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j})$. (13)

In particular,

$\max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^*) \leq \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^R)$. (14)

Inequality (11) implies that $u_j(\gamma_j, \gamma_{-j}^R) \leq u_j(\gamma_j, \gamma_{-j}^*)$ for all $\gamma_j \in \Gamma_j$; hence, taking the maximum over all $\gamma_j$, we get

$\max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^R) \leq \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^*)$.

This means that (14) holds as an equality:

$\max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^R) = \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j}^*)$. (15)

Equations (12), (13) and (15) yield (10).

The equivalence between the maxmin value and the minmax value in terms of complex mechanisms has nothing to do with whether random actions are allowed. Theorem 1 applies as long as the set of mechanisms available to each principal is rich enough to include recommendation mechanisms (or mechanisms isomorphic to recommendation mechanisms). However, this lower bound is not expressed in terms of model primitives such as actions or DMs, because it is tied to the very set of mechanisms $\Gamma$ allowed in the game. Nonetheless, this equivalence plays a crucial role in comparing equilibrium payoffs in the one-shot game with those in the repeated game, where it is used to show that the lower bound of a principal's equilibrium payoff in the repeated game is generally lower than that in the one-shot game.

3.1 Complete information one-shot game

In this section, we focus on complete information, i.e., agents do not have private information about types. The point of studying competing mechanism design without exogenously given private information about types is to highlight the role of mechanisms in a world with multiple principals, because each principal wants to extract information from agents about what other principals are doing. Even with complete information and without repetition, agents play a vital role as soon as we allow the principals to use mechanisms rather than confining them to actions. We showed in the previous section that Yamashita's minmax value

$w_j^1 = \min_{\gamma_{-j} \in \Gamma_{-j}} \max_{\gamma_j \in \Gamma_j} u_j(\gamma_j, \gamma_{-j})$

equals the maxmin value over the complex mechanism space.
Now we show that if the set of mechanisms $\Gamma$ allowed in the competing mechanism game is sufficiently rich to include recommendation mechanisms (equivalently, mechanisms isomorphic to recommendation mechanisms),

this lower bound can be expressed in terms of actions in the underlying game. Under complete information a DM is simply an action, and agents have nothing to report about their types. Therefore, in the case of complete information, agents recommend actions to any principal who offers a recommendation mechanism and thereby commits to take the action recommended by a majority of agents. If one principal unilaterally deviates from offering such a mechanism, all agents recommend that the other principals choose actions that punish the deviating principal. Thus, recommendation mechanisms specify not only how to implement the equilibrium actions but also punishments to deter principals from deviating from this mechanism. The next theorem shows that in the complete information case, this complex minmax value $w_j^1$ equals the simple minmax value, equivalently the simple maxmin value, where the qualifier "simple" means that all principals are restricted to use only actions (i.e., constant mechanisms).

Theorem 2 For all $j \in J$, the lower bound $w_j^1$ of principal $j$'s equilibrium payoff in the complete-information competing mechanism game $(G, \Gamma)$ satisfies

$\max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j}) = w_j^1 = \min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$. (16)

Proof. By Lemmas 1 and 3 below.

The proof of Theorem 2 consists of two parts. First, Lemma 1 shows that both equalities in (16) hold as the inequalities in (17). Second, (22) in Lemma 3 shows that the simple maxmin value in terms of actions is equal to the simple minmax value in terms of actions. The two lemmas together imply (16).

Lemma 1 The complex minmax value $w_j^1$ of any principal $j$ in the complete-information competing mechanism game $(G, \Gamma)$ satisfies

$\max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j}) \leq w_j^1 \leq \min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$. (17)

Proof. First, a simple action $\alpha_j$ is a constant mechanism that always assigns $\alpha_j$ regardless of agents' messages.
Note that the lower bound of principal $j$'s equilibrium payoff can be reached when the other principals offer recommendation mechanisms. If principal $j$ deviates to a simple action $\alpha_j$, agents can recommend to all other principals an array of actions $\phi_{-j}^j(\alpha_j) = \{\phi_l^j(\alpha_j)\}_{l \neq j}$, i.e., action $\phi_l^j(\alpha_j)$ to each principal $l \neq j$, that minimizes principal $j$'s payoff conditional on $\alpha_j$:

$\phi_{-j}^j(\alpha_j) := \arg\min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j})$. (18)
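The punishment map in (18) is easy to compute in finite examples. The sketch below uses two principals with invented pure actions and payoffs (all numbers hypothetical): for each action of the deviator, the other principal plays the action minimizing the deviator's payoff, so the deviator's best guarantee is the maxmin. Over pure actions this can fall strictly below the minmax, which is why mixing matters in Lemma 3 below.

```python
# Hypothetical two-principal example: phi picks the worst punishment
# against each action a1 of the deviating principal 1, cf. (18).

A1 = ["u", "d"]
A2 = ["l", "r"]
u1 = {("u", "l"): 4, ("u", "r"): 0, ("d", "l"): 1, ("d", "r"): 3}

def phi(a1):
    """Punisher's action minimizing principal 1's payoff given a1."""
    return min(A2, key=lambda a2: u1[(a1, a2)])

maxmin = max(u1[(a1, phi(a1))] for a1 in A1)                  # deviator's guarantee
minmax = min(max(u1[(a1, a2)] for a1 in A1) for a2 in A2)     # punisher-first order
print(maxmin, minmax)  # → 1 3: over pure actions the two values can differ
```

Here $\phi(u) = r$ and $\phi(d) = l$, giving a pure maxmin of 1 against a pure minmax of 3; the gap closes once mixed actions are allowed.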

This is the worst punishment that non-deviating principals can induce in a continuation equilibrium upon principal $j$ playing $\alpha_j$. The maximum payoff that principal $j$ can receive by playing a simple action is then

$\max_{\alpha_j \in A_j} u_j(\phi_{-j}^j(\alpha_j), \alpha_j) = \max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j})$.

We can now write down a chain of inequalities, where the first comes from the fact that the class of mechanisms $\Gamma_j$ includes the set of actions $A_j$, and the second from the fact that players other than $j$ have access to recommendation mechanisms:

$\max_{\gamma_j \in \Gamma_j} \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\gamma_j, \gamma_{-j}) \geq \max_{\alpha_j \in A_j} \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\alpha_j, \gamma_{-j}) \geq \max_{\alpha_j \in A_j} u_j(\phi_{-j}^j(\alpha_j), \alpha_j) = \max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j})$.

Combining this with Theorem 1,

$w_j^1 = \max_{\gamma_j \in \Gamma_j} \min_{\gamma_{-j} \in \Gamma_{-j}} u_j(\gamma_j, \gamma_{-j}) \geq \max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j})$. (19)

To prove the other inequality, suppose that non-deviating principals commit to taking the actions that are recommended by a majority of agents; let agents recommend the action $\alpha_l^j$ to principal $l \neq j$, regardless of principal $j$'s mechanism, with the notation $\alpha_{-j}^j = \{\alpha_l^j\}_{l \neq j}$, where

$\alpha_{-j}^j \in \arg\min_{\alpha_{-j} \in A_{-j}} \Big\{ \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j}) \Big\}$; (20)

the maximum payoff that principal $j$ can receive is then

$\min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$.

Since this is one way of punishing principal $j$, we have

$w_j^1 \leq \min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$. (21)

(19) and (21) imply (17).

Now we show that the simple maxmin value in terms of actions is equal to the simple minmax value in terms of actions:

$\max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j}) = \min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$.
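The equality over mixed actions can be checked numerically for a small example. The sketch below (hypothetical $2 \times 2$ payoffs, the same matrix as in the pure-action punishment example) grid-searches over mixing probabilities: the mixed maxmin and minmax coincide, approximating the game's value, even though the pure values differ.

```python
# Grid-search sketch (hypothetical payoffs): with mixed actions the maxmin
# and minmax coincide. Rows are principal 1's pure actions, columns the
# opponent's; p and q are the weights on the first row and first column.

U = [[4, 0], [1, 3]]

def expected(p, q):
    """Bilinear expected payoff of principal 1 under independent mixing."""
    return (p * q * U[0][0] + p * (1 - q) * U[0][1]
            + (1 - p) * q * U[1][0] + (1 - p) * (1 - q) * U[1][1])

grid = [i / 500 for i in range(501)]
maxmin = max(min(expected(p, q) for q in grid) for p in grid)
minmax = min(max(expected(p, q) for p in grid) for q in grid)
assert abs(maxmin - minmax) < 0.02   # both approximate the mixed value 2
```

For this matrix the exact mixed value is 2 (attained at $p = 1/3$, $q = 1/2$), so the grid search recovers the equality that Lemma 3 establishes in general; the product structure of the mixing is exactly the feature that blocks a direct appeal to Sion's theorem below.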

If $A_{-j}$ were compact and convex, we could apply Sion's minimax theorem directly to show the equality above.

Theorem 3 (Sion's minimax theorem (1958)) Let $X$ and $Y$ be convex, compact spaces. If the real-valued function $f$ on $X \times Y$ is quasi-concave and upper semi-continuous in $x$ and quasi-convex and lower semi-continuous in $y$, then

$\sup_{x \in X} \inf_{y \in Y} f(x, y) = \inf_{y \in Y} \sup_{x \in X} f(x, y)$.

However, $A_{-j} = \prod_{k \neq j} A_k$ is not convex because it is the set of all probability distributions over independent action profiles, without correlation among the principals' actions. Therefore, we cannot directly swap $\max_{\alpha_j \in A_j}$ and $\min_{\alpha_{-j} \in A_{-j}}$ even when the principal's utility function satisfies the v-NM (von Neumann–Morgenstern) expected utility property. To establish the equality above, we use the following lemma, letting $J$ be the deviator for notational simplicity.

Lemma 2 If $u_J(\alpha_1, \ldots, \alpha_J)$ satisfies the property of v-NM expected utility, then for all $1 \leq k \leq J - 2$,

$\min_{(\alpha_1, \ldots, \alpha_k)} \max_{\alpha_J} \min_{(\alpha_{k+1}, \ldots, \alpha_{J-1})} u_J(\alpha_1, \ldots, \alpha_J) = \min_{(\alpha_1, \ldots, \alpha_{k-1})} \max_{\alpha_J} \min_{(\alpha_k, \ldots, \alpha_{J-1})} u_J(\alpha_1, \ldots, \alpha_J)$.

Proof. See the appendix.

Given $k$, fix $\alpha^{k-1} = (\alpha_1, \ldots, \alpha_{k-1})$. Then we can define

$h_J(\alpha^{k-1}, \alpha_k, \alpha_J) := \min_{(\alpha_{k+1}, \ldots, \alpha_{J-1})} u_J(\alpha^{k-1}, \alpha_k, \alpha_{k+1}, \ldots, \alpha_{J-1}, \alpha_J)$.

The proof of Lemma 2 shows that, for any given $\alpha^{k-1}$, $h_J(\alpha^{k-1}, \alpha_k, \alpha_J)$ maintains the v-NM expected utility property over $\alpha_k$ and $\alpha_J$ when $u_J(\cdot)$ is a v-NM expected utility function. Therefore, we have

$\min_{(\alpha_1, \ldots, \alpha_k)} \max_{\alpha_J} \min_{(\alpha_{k+1}, \ldots, \alpha_{J-1})} u_J(\alpha^{k-1}, \alpha_k, \ldots, \alpha_{J-1}, \alpha_J)$
$= \min_{\alpha^{k-1}} \big[ \min_{\alpha_k} \max_{\alpha_J} h_J(\alpha^{k-1}, \alpha_k, \alpha_J) \big]$
$= \min_{\alpha^{k-1}} \big[ \max_{\alpha_J} \min_{\alpha_k} h_J(\alpha^{k-1}, \alpha_k, \alpha_J) \big]$
$= \min_{\alpha^{k-1}} \big[ \max_{\alpha_J} \min_{(\alpha_k, \ldots, \alpha_{J-1})} u_J(\alpha^{k-1}, \alpha_k, \ldots, \alpha_{J-1}, \alpha_J) \big]$,

where the second equality follows from Sion's minimax theorem with $X = A_k$ and

$Y = A_J$:

$\min_{\alpha_k \in A_k} \max_{\alpha_J \in A_J} h_J(\alpha^{k-1}, \alpha_k, \alpha_J) = \max_{\alpha_J \in A_J} \min_{\alpha_k \in A_k} h_J(\alpha^{k-1}, \alpha_k, \alpha_J)$.

Footnote 14: This is due to the fact that $h_J(\alpha^{k-1}, \alpha_k, \alpha_J)$ maintains the v-NM expected utility property over $\alpha_k$ and $\alpha_J$, given that $A_k$ and $A_J$ are both compact and convex.

Now we present Lemma 3.

Lemma 3 For all $j \in J$,

$\max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j}) = \min_{\alpha_{-j} \in A_{-j}} \max_{\alpha_j \in A_j} u_j(\alpha_j, \alpha_{-j})$. (22)

Proof. See the appendix.

Theorem 2 follows from (17) in Lemma 1 and (22) in Lemma 3. The significance of Theorem 2 is that it pins down the principal's lower bound in terms of actions in the one-shot competing mechanism game $(G, \Gamma)$ with complete information, regardless of the complexity of mechanisms allowed in the game, as long as simple actions are allowed.

3.1.1 The folk theorem for one-shot games of complete information

Now we characterize the set of equilibrium allocations supportable in the one-shot competing mechanism game $(G, \Gamma)$ with complete information. An action profile $\alpha = (\alpha_1, \ldots, \alpha_J) \in A = A_1 \times \cdots \times A_J$ is weakly individually rational (WIR) for principals if $u_j(\alpha) \geq w_j^1$ for all $j \in J$, where $w_j^1$ is the lower bound of principal $j$'s equilibrium payoff given by equation (16). The set of action profiles that induce WIR payoffs is

$F_0^1 := \{\alpha \in A : \alpha \text{ is WIR for all } j \in J\}$. (23)

If a PCD is allowed, the set of (correlated) action profiles supported in the equilibria of the one-shot game $(G, \Gamma)$ with complete information is $F^1 := \mathrm{co}(F_0^1)$. The lower bounds identified in Lemma 1 characterize the set of equilibrium allocations as follows. The proof in Yamashita (2010) simplifies to this under complete information.

Theorem 4 In the one-shot competing mechanism game $(G, \Gamma)$ with complete information,

1. (correlated) action profiles in $F^1$ are supportable in a PBE;

2. such a PBE can be constructed using only recommendation mechanisms, under which each principal $j$ asks every agent to send a message in his space of actions $A_j$ and commits to take the action recommended by a majority.

Proof. Given the realization of the PCD, an action profile that gives any principal $j$ a payoff strictly below his own minmax value cannot be sustained in equilibrium, so any action profile that is supportable in a PBE must be WIR and therefore cannot be outside $F^1$.

In equilibrium, each principal $l$ offers a recommendation mechanism $\gamma_l^R : (A_l)^I \to A_l$ regardless of the realization of the PCD. Given the realization of the PCD, suppose that $\alpha = (\alpha_1, \ldots, \alpha_J) \in A$ is the action profile that needs to be supported in equilibrium. Agents follow three rules:

1. if principals offer recommendation mechanisms, all agents send the message $\alpha_j$ to principal $j$;

2. if principal $j$ unilaterally deviates to any mechanism $\gamma_j \in \Gamma_j$, agents recommend the action $\alpha_l^j$, defined in (20), to each principal $l \neq j$, and send messages to principal $j$ that form a continuation equilibrium of the subgame given by $(\gamma_j, \gamma_{-j}^R)$;

3. if anything else happens, agents send to principals messages that represent a continuation equilibrium of the subgame given the profile of mechanisms offered by principals.

Because $I \geq 3$, agents cannot deviate unilaterally and change a non-deviating principal's action from the common recommendation. Therefore, if principal $j$ deviates, all agents recommend to each principal $l$ other than $j$ the action $\alpha_l^j$, which by Theorem 2 gives $j$ a payoff of at most $w_j^1$, which is no better than the equilibrium payoff $u_j(\alpha)$ by hypothesis. So it is a best response for principal $j$ to offer $\gamma_j^R$.

Allowing random actions equates the maxmin and minmax, significantly reducing the amount of information that a non-deviator needs in order to punish a deviator.
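The majority rule inside the recommendation mechanism can be sketched in a few lines. The example is hypothetical (action labels invented): each of $I \geq 3$ agents recommends an action, the mechanism implements the action named by a strict majority, and a unilateral misreport by one agent is outvoted, which is exactly the robustness the proof relies on.

```python
from collections import Counter

# Sketch of majority aggregation in a recommendation mechanism: principal l
# commits to the action recommended by a strict majority of agents, and
# takes a default action if no strict majority emerges.

def majority_action(recommendations, default):
    action, count = Counter(recommendations).most_common(1)[0]
    return action if count > len(recommendations) // 2 else default

# On the path, all three agents recommend the equilibrium action:
assert majority_action(["a_eq", "a_eq", "a_eq"], "a_eq") == "a_eq"
# Off the path, agents switch to the punishment action from (20);
# one agent's unilateral misreport cannot change the outcome:
assert majority_action(["punish_j", "punish_j", "other"], "a_eq") == "punish_j"
```

The same aggregation rule also implements the deviator-reporting variant discussed next: agents report a principal's identity instead of an action, and the majority report selects the punishment.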
Because a non-deviating principal $l$ can simply take the action $\alpha_l^j$ that minmaxes the deviating principal $j$, all he needs is the identity of the deviator. Therefore, instead of the recommendation mechanism, a non-deviating principal $l$ can offer a deviator-reporting mechanism, asking agents to report the identity of a deviating principal, if any; principal $l$ takes action $\alpha_l^j$ if a majority of agents report $j$ as the deviating principal. Such a deviator-reporting mechanism can be viewed as isomorphic to the recommendation mechanism.

4 Repeated game of complete information

The infinitely repeated game $(G, \Gamma)(\delta)$ involves playing the competing mechanism stage game $(G, \Gamma)$ at each time $t \geq 1$, with a common discount factor $\delta \in (0, 1)$ across periods, where

principals are allowed to use mechanism profiles in $\Gamma$. Since there is no private information, each player $l$'s stage payoff function is fixed to $u_l(\alpha)$ over time.

Lemma 1 shows that principal $j$'s equilibrium payoff in the one-shot game $(G, \Gamma)$ cannot be lower than the maxmin value of his payoff in terms of actions: if principal $j$, as a deviator, restricts himself to offering only an action, then non-deviating principals can respond by choosing actions that minimize $j$'s payoff. Since an action can be thought of as a constant mechanism in $\Gamma_j$, principal $j$ has more options in the competing mechanism game where he is allowed to offer any mechanism from $\Gamma_j$. This implies that principal $j$'s equilibrium payoff cannot be lower than $\max_{\alpha_j \in A_j} \min_{\alpha_{-j} \in A_{-j}} u_j(\alpha_j, \alpha_{-j})$. This argument is also valid in the repeated game $(G, \Gamma)(\delta)$ because principal $j$ cannot do better if he restricts himself to offering only an action. Lemma 3 shows that this maxmin value is equal to the minmax value over actions. Therefore, the lower bound of principal $j$'s equilibrium payoff in the repeated game $(G, \Gamma)(\delta)$ is also the value $w_j^1$ specified in (16) in Theorem 2. The complete-information folk theorem is established as follows.

Theorem 5 (Complete-information folk theorem) Let $(G, \Gamma)$ be any one-shot competing mechanism game with complete information. Then there exists $\bar{\delta} < 1$ such that for any $\delta \geq \bar{\delta}$:

1. any (correlated) action profile in $F^* = \{\alpha^* \in A : u_j(\alpha^*) > w_j^1 \text{ for all } j \in J\}$ is the outcome of a PBE of the infinitely repeated game $(G, \Gamma)(\delta)$;

2. such a PBE can be supported using actions, i.e., constant mechanisms on and off the path.

This is the standard folk theorem for infinitely repeated games, with the lower bound given by (16) in Theorem 2.
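The patience threshold behind such a folk theorem can be illustrated with a back-of-the-envelope computation (all numbers hypothetical, and the grim-trigger comparison below is a standard textbook sketch rather than this paper's construction): a principal earning $u_{eq}$ per period who can grab $u_{dev}$ once and is then held to his minmax $w$ will not deviate iff $u_{dev} + \frac{\delta}{1-\delta} w \leq \frac{1}{1-\delta} u_{eq}$, i.e. $\delta \geq (u_{dev} - u_{eq})/(u_{dev} - w)$.

```python
# Hypothetical folk-theorem arithmetic: the critical discount factor above
# which a one-period deviation gain is outweighed by minmax punishment.

def critical_delta(u_eq, u_dev, w):
    assert u_dev > u_eq > w   # tempting deviation, strictly IR equilibrium
    return (u_dev - u_eq) / (u_dev - w)

# Example: equilibrium payoff 3 per period, deviation payoff 5, minmax 1.
print(critical_delta(3, 5, 1))  # → 0.5: any delta >= 1/2 deters deviation
```

Because $u_{eq} > w$ strictly, the threshold is strictly below 1, which is why the theorem only requires sufficiently patient principals.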
In contrast to the set of equilibrium allocations $F^1$ supported in the one-shot game $(G, \Gamma)$, a (correlated) action profile $\alpha^*$ in $F^*$ only needs to be ex-ante (strictly) individually rational for principals, $u_j(\alpha^*) > w_j^1$ for all $j \in J$; the realized action profile from $\alpha^*$ does not need to be individually rational. The reason is that principal $j$ may receive a payoff below $w_j^1$ from time to time in the repeated game, as long as his payoff is above $w_j^1$ on average. Of course, any equilibrium allocation in the one-shot game is also an equilibrium allocation in the repeated game. However, the folk theorem does not cover the allocations in $F^1 \setminus F^*$. Those allocations may be supported in an equilibrium of the repeated game if principals offer recommendation mechanisms.

Footnote 15: We slightly abuse the notation $u_j$ here in that $u_j(\alpha^*)$ denotes the ex-ante expected payoff from the correlated action profile $\alpha^* \in A$ before the realization of the PCD.

The key message from the folk theorem is that the mechanisms that principals need to use on and off the path to support equilibrium allocations in $F^*$ are simpler in the repeated game than

in the one-shot game. In the latter it is essential for principals to commit themselves to take a certain action according to the rule described in the recommendation mechanism. However, in the repeated game, they do not need commitment power, because principals only need to take actions on and off the path: each principal $l$ only needs to take his equilibrium action on the path, and $\alpha_l^j$ off the path following principal $j$'s deviation. This is because, even when principals are allowed to use complex mechanisms in $\Gamma$, the lower bound of the principal's equilibrium payoff is the same as the minmax value in terms of actions.

Finally, in the repeated game, principals make equilibrium actions correlated by choosing their actions contingent on the realization of the PCD. In the one-shot game, by contrast, agents make equilibrium actions correlated by choosing the recommendation of an action to each principal contingent on the realization of the PCD, given each principal's recommendation mechanism; each principal offers his recommendation mechanism independent of the realization of the PCD in the one-shot game.

5 Repeated game of incomplete information

The previous section shows that the one-shot and repeated games, $(G, \Gamma)$ and $(G, \Gamma)(\delta)$, share the same lower bound on each principal $j$'s equilibrium payoff absent private information on agents' types. Without private information, the only meaningful difference is that repeated interaction dispenses with commitment to complex mechanisms on or off the equilibrium path, because simple actions can achieve the same minmax value as complex mechanisms. However, the presence of private information about agents' types fundamentally changes the nature of competition in both one-shot and repeated games, and the equilibrium allocations. Repetition allows us to relax incentive compatibility because an agent's punishment can be deferred in some cases. We first explain an appropriate notion of IC in this repeated setting.
Before we do so, note that one could combine incentives with statistical tests of the veracity of the messages. Our current approach postpones the introduction of statistical tests, with their merits and demerits, until after the basic results have been developed.

5.1 Incentive compatibility

Given a profile of DMs, agents' type messages induce a profile of actions. A profile of induced actions carries some information about agents' type messages and can sometimes reveal which agent deviated from truth telling. However, when an agent's lie is detected, it is too late to punish her in the one-shot game; this is why UIC is used in the one-shot game to prevent all possible lies. In the repeated game, incentive compatibility does not need to be imposed over those type messages by agent $i$ that can lead to a profile of actions revealing her