Obviously Strategy-Proof Mechanisms


Shengwu Li

Job Market Paper
First uploaded: 3 Feb. This version: 7 Jan.

Abstract

What makes some strategy-proof mechanisms easier to understand than others? To address this question, I propose a new solution concept: A mechanism is obviously strategy-proof (OSP) if it has an equilibrium in obviously dominant strategies. This has a behavioral interpretation: A strategy is obviously dominant if and only if a cognitively limited agent can recognize it as weakly dominant. It also has a classical interpretation: A choice rule is OSP-implementable if and only if it can be carried out by a social planner under a particular regime of partial commitment. I fully characterize the set of OSP mechanisms in a canonical setting, with one-dimensional types and transfers. A laboratory experiment tests and corroborates the theory.

1 Introduction

Dominant-strategy mechanisms are often said to be desirable. They reduce participation costs and cognitive costs, by making it easy for agents to decide

I thank especially my advisors, Paul Milgrom and Muriel Niederle. I thank Nick Arnosti, Douglas Bernheim, Gabriel Carroll, Paul J. Healy, Matthew Jackson, Fuhito Kojima, Roger Myerson, Michael Ostrovsky, Alvin Roth, and Ilya Segal for their invaluable advice. I thank Paul J. Healy for his generosity in allowing my use of the Ohio State University Experimental Economics Laboratory. I thank Muriel Niederle and the Stanford Economics Department for financial support for the experiment. This work was supported by the Kohlhagen Fellowship Fund, through a grant to the Stanford Institute for Economic Policy Research. All errors remain my own. shengwu@stanford.edu

what to do.[1] They protect agents from strategic errors.[2] Dominant-strategy mechanisms prevent waste from rent-seeking espionage, since spying on other players yields no strategic advantage. Moreover, the resulting outcome does not depend sensitively on each agent's higher-order beliefs.[3] These benefits largely depend on agents understanding that the mechanism has an equilibrium in dominant strategies; i.e., that it is strategy-proof (SP). Only then can they conclude that they need not attempt to discover their opponents' strategies or to game the system.[4]

However, some strategy-proof mechanisms are simpler for real people to understand than others. For instance, choosing when to quit in an ascending clock auction is the same as choosing a bid in a second-price sealed-bid auction (Vickrey, 1961). The two formats are strategically equivalent; they have the same reduced normal form.[5] Nonetheless, laboratory subjects are substantially more likely to play the dominant strategy under a clock auction than under sealed bids (Kagel et al., 1987). Theorists have also expressed this intuition:

"Some other possible advantages of dynamic auctions over static auctions are difficult to model explicitly within standard economics or game-theory frameworks. For example, ... it is generally held that the English auction is simpler for real-world bidders to understand than the sealed-bid second-price auction, leading the English auction to perform more closely to theory." (Ausubel, 2004)

[1] Vickrey (1961) writes that, in second-price auctions: "Each bidder can confine his efforts and attention to an appraisal of the value the article would have in his own hands, at a considerable saving in mental strain and possibly in out-of-pocket expense."
[2] For instance, school choice mechanisms that lack dominant strategies may harm parents who do not strategize well (Pathak and Sönmez, 2008).
[3] Wilson (1987) writes, "Game Theory has a great advantage in explicitly analyzing the consequences of trading rules that presumably are really common knowledge; it is deficient to the extent it assumes other features to be common knowledge, such as one player's probability assessment about another's preferences or information."
[4] Policymakers could announce that a mechanism is strategy-proof, but that may not be enough. If agents do not understand the mechanism well, then they may be justifiably skeptical of such declarations. For instance, Google's advertising materials for the Generalized Second-Price auction appeared to imply that it was strategy-proof, when in fact it was not (Edelman et al., 2007). Moreover, Rees-Jones (2015) and Hassidim et al. (2015) find evidence of strategic mistakes in approximately strategy-proof matching markets, even though participants face a high-stakes decision with expert advice.
[5] This equivalence assumes that we restrict attention to cut-off strategies in ascending auctions.

In this paper, I model explicitly what it means for a mechanism to be obviously strategy-proof. This approach invokes no new primitives. Thus, it identifies a set of mechanisms as simple to understand, while remaining as parsimonious as standard game theory.

A strategy S_i is obviously dominant if, for any deviating strategy S'_i, starting from any earliest information set where S_i and S'_i diverge, the best possible outcome from S'_i is no better than the worst possible outcome from S_i. A mechanism is obviously strategy-proof (OSP) if it has an equilibrium in obviously dominant strategies. By construction, OSP depends on the extensive game form, so two games with the same normal form may differ on this criterion. Obvious dominance implies weak dominance, so OSP implies SP.

This definition distinguishes ascending auctions and second-price sealed-bid auctions. Ascending auctions are obviously strategy-proof. Suppose you value the object at $10. If the current price is below $10, then the best possible outcome from quitting now is no better than the worst possible outcome from staying in the auction (and quitting at $10). If the price is above $10, then the best possible outcome from staying in the auction is no better than the worst possible outcome from quitting now. Second-price sealed-bid auctions are strategy-proof, but not obviously strategy-proof. Consider the strategies "bid $10" and "bid $11". The earliest information set where these diverge is the point where you submit your bid. If you bid $11, you might win the object at some price strictly below $10. If you bid $10, you might not win the object. The best possible outcome from deviating is better than the worst possible outcome from truth-telling. This captures an intuition expressed by experimental economists:

"The idea that bidding modestly in excess of x only increases the chance of winning the auction when you don't want to win is far from obvious from the sealed bid procedure." (Kagel et al., 1987)

I produce two characterization theorems, which suggest two interpretations of OSP.
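The $10 bidder's comparison can be checked numerically. A minimal sketch, in Python; the value of 10, the grid of opposing bids, and the tie-breaking assumption are illustrative, not from the paper:

```python
# Numerical check of obvious dominance for a bidder who values the
# object at 10 (illustrative numbers; ties are assumed to be losses).
# Payoff: value - price if you win, 0 otherwise.

VALUE = 10

def sealed_bid_payoffs(my_bid):
    """Possible payoffs at the single information set where the
    strategies 'bid 10' and 'bid 11' diverge, one payoff per possible
    highest opposing bid b."""
    return [VALUE - b if my_bid > b else 0 for b in range(15)]

best_deviation = max(sealed_bid_payoffs(11))  # e.g. win at price 0
worst_truthful = min(sealed_bid_payoffs(10))  # e.g. lose: payoff 0
# Obvious dominance fails: sup(deviation) > inf(truth-telling).
assert best_deviation > worst_truthful

def clock_payoffs(strategy, p):
    """Possible payoffs in an ascending clock auction, from the earliest
    point (current price p < 10) where 'quit now' and 'stay until 10'
    diverge."""
    if strategy == "quit now":
        return [0]
    # Staying: win at some price in [p, 10), or quit at 10 and get 0.
    return [VALUE - price for price in range(p, VALUE)] + [0]

# Obvious dominance holds at every such point:
# sup('quit now') <= inf('stay until 10').
for p in range(VALUE):
    assert max(clock_payoffs("quit now", p)) <= min(clock_payoffs("stay until 10", p))
```

The failed inequality in the sealed-bid case mirrors the Kagel et al. quote: the deviation's best case (winning cheaply) beats truth-telling's worst case.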
The first interpretation is behavioral: Obviously dominant strategies are those that can be recognized as dominant by a cognitively limited agent. The second interpretation is classical: OSP mechanisms are those that can be carried out by a social planner with only partial commitment power.

First, I model an agent who has a simplified mental representation of the world: Instead of understanding every detail of every game, his understanding is limited by a coarse partition on the space of all games. I show

[Figure 1: Similar mechanisms from 1's perspective.]

that a strategy S_i is obviously dominant if and only if such an agent can recognize S_i as weakly dominant.

Consider the mechanisms in Figure 1. Suppose Agent 1 has preferences: A ≻ B ≻ C ≻ D. In mechanism (i), it is a weakly dominant strategy for 1 to play L. Both mechanisms are intuitively similar, but it is not a weakly dominant strategy for Agent 1 to play L in mechanism (ii). In order for Agent 1 to recognize that it is weakly dominant to play L in mechanism (i), he must use contingent reasoning. That is, he must think through hypothetical scenarios case-by-case: "If Agent 2 plays l, then I should play L, since I prefer A to B. If Agent 2 plays r, then I should play L, since I prefer C to D. Therefore, I should play L, no matter what Agent 2 plays." Notice that the quoted inferences are valid in (i), but not valid in (ii).

Suppose Agent 1 is unable to engage in contingent reasoning. That is, he knows that playing L might lead to A or C, and playing R might lead to B or D. However, he does not understand how, case-by-case, the outcomes after playing L are related to the outcomes after playing R. Then it is as though he cannot distinguish (i) and (ii).

This idea can be made formal and general. I define an equivalence relation on the space of mechanisms: The experience of agent i at history h records the information sets where i was called to play, and the actions that i took, in chronological order.[6] Two mechanisms G and G' are i-indistinguishable if there is a bijection from i's information sets and actions in G, onto i's information sets and actions in G', such that:

[6] An experience is a standard concept in the theory of extensive games; experiences are used to define perfect recall.

1. An experience can be produced by G if and only if its bijected partner experience can be produced by G'.
2. An experience might result in some outcome in G if and only if its bijected partner might result in that same outcome in G'.

With this relation, we can partition the set of all mechanisms into equivalence classes. For instance, the mechanisms in Figure 1 are 1-indistinguishable.

The partition defined by the relation "G and G' are i-indistinguishable" rules out contingent reasoning. Suppose an agent knows only the experiences that a mechanism might generate, and the resulting outcomes. He retains substantial knowledge about the structure of the mechanism. He knows all the points at which he may be called to play, and all the actions available at each point. He knows, for any sequence of points he was called to play and actions that he took, whether the game might end and what outcomes might result. However, he is unable to reason case-by-case about hypothetical scenarios.

The first characterization theorem states: A strategy S_i is obviously dominant in G if and only if it is weakly dominant in every G' that is i-indistinguishable from G. This shows that obviously dominant strategies are those that can be recognized as weakly dominant without contingent reasoning. An obviously dominant strategy is weakly dominant in any i-indistinguishable mechanism. In that sense, such a strategy is robustly dominant.

The second characterization theorem for OSP relates to the problem of mechanism design under partial commitment. In mechanism design, we usually assume that the Planner can commit to every detail of a mechanism, including the events that an individual agent does not directly observe. For instance, in a sealed-bid auction, we assume that the Planner can commit to the function from all bid profiles to allocations and payments, even though each agent only directly observes his own bid. Sometimes this assumption is unrealistic. If agents cannot individually verify the details of a mechanism, the Planner may be unable to commit to it. Mechanism design under partial commitment is a pressing problem.
Auctions run by central brokers over the Internet account for billions of dollars of economic activity (Edelman et al., 2007). In such settings, bidders may be unable to verify that the other bidders exist, let alone what actions they have taken. As another example, some wireless spectrum auctions use computationally demanding techniques to solve complex assignment problems. In these settings, individual bidders may find it difficult and costly to verify the output of the auctioneer's algorithm (Milgrom and Segal, 2015).

For the second characterization theorem, I consider a metagame where the Planner privately communicates with agents, and eventually decides on an outcome. The Planner chooses one agent, and sends a private message, along with a set of acceptable replies. That agent chooses a reply, which the Planner observes. The Planner can then either repeat this process (possibly with a different agent) or announce an outcome and end the game.

The Planner has partial commitment power: For each agent, she can commit to use only a subset of her available strategies. However, the subset she promises to Agent i must be measurable with respect to i's observations in the game. That is, if the Planner plays a strategy not in that subset, then there exists some agent strategy profile such that Agent i detects that the Planner has deviated. We call this a bilateral commitment. Suppose we require that each agent's strategy be optimal, for any strategies of the other agents, and for any Planner strategy compatible with i's bilateral commitment. What choice rules can be implemented in this metagame?

The second characterization theorem states: A choice rule can be supported by bilateral commitments if and only if that choice rule is OSP-implementable. Consequently, in addition to formalizing a notion of cognitive simplicity, OSP also captures the set of choice rules that can be carried out with only bilateral commitments.

After defining and characterizing OSP, I apply this concept to several mechanism design environments. For the first application, I consider binary allocation problems. In this environment, there is a set of agents N with continuous single-dimensional types θ_i ∈ [θ̱_i, θ̄_i]. An allocation y is a subset of N. An allocation rule f_y is a function from type profiles to allocations. We augment this with a transfer rule f_t, which specifies money transfers for each agent. Each agent has utility equal to his type if he is in the allocation, plus his net transfer:

u_i(θ_i, y, t_i) = 1_{i∈y} θ_i + t_i   (1)

Binary allocation problems encompass several canonical settings. They include private-value auctions with unit demand.
They include procurement auctions with unit supply; not being in the allocation is winning the contract, and the bidder's type is his cost of provision. They also include binary public good problems; the feasible allocations are N and the empty set.

Mechanism design theory has extensively investigated SP-implementation in this environment. f_y is SP-implementable if and only if f_y is monotone

in each agent's type (Spence, 1974; Mirrlees, 1971; Myerson, 1981). If f_y is SP-implementable, then the required transfer rule f_t is essentially unique (Green and Laffont, 1977; Holmström, 1979). What are analogues of these canonical results, if we require OSP-implementation rather than SP-implementation? Are ascending clock auctions special, or are there other OSP mechanisms in this environment?

I prove the following theorem: Every mechanism that OSP-implements an allocation rule is essentially a monotone price mechanism, which is a new generalization of ascending clock auctions. Moreover, this is a full characterization of OSP mechanisms: For any monotone price mechanism, there exists some allocation rule that it OSP-implements. These results imply that when we desire OSP-implementation in a binary allocation problem, we need not search the space of all extensive game forms. Without loss of generality, we can focus our attention on the class of monotone price mechanisms.[7]

Additionally, I characterize the set of OSP-implementable allocation rules. For this part, I assume that the lowest type of each agent is never in the allocation, and is required to have a zero transfer. Given an allocation rule, I show how to identify subsets of R^N that contain viable price paths for a monotone price mechanism. I provide a necessary and sufficient condition for an allocation rule to be OSP-implementable.

As a second application, I consider a generalization of the Edelman et al. (2007) online advertising environment. In this setting, agents bid for advertising positions, each worth a certain number of clicks. Each agent's type is a vector of per-click values, one for each position. I show that if preferences satisfy a single-crossing condition, then we can OSP-implement the efficient allocation and the Vickrey payments.

As a third application, I produce an impossibility result for a classic matching algorithm: With 3 or more agents, there does not exist a mechanism that OSP-implements Top Trading Cycles (Shapley and Scarf, 1974).

I conduct a laboratory experiment to test the theory.
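The monotone price mechanisms described above generalize ascending clock auctions. A minimal single-item sketch of the clock special case; the unit increment and the tie-breaking rule are assumptions for this illustration, not from the paper:

```python
def ascending_clock_auction(values):
    """Single-item ascending clock auction -- the simplest special case
    of a monotone price mechanism. Each bidder follows the obviously
    dominant strategy: stay in while the clock price is at most her
    value. Returns (winner index, price paid)."""
    active = set(range(len(values)))
    price = 0
    while len(active) > 1:
        price += 1                                  # the clock ticks up
        quitters = {i for i in active if values[i] < price}
        if quitters == active:                      # all remaining bidders
            break                                   # would quit: they tie
        active -= quitters
    winner = max(active, key=lambda i: values[i])   # assumed tie-break
    return winner, price

# With values (3, 7, 5): bidder 0 quits at price 4, bidder 2 at price 6,
# so bidder 1 wins at a price just above the second-highest value.
assert ascending_clock_auction([3, 7, 5]) == (1, 6)
```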
In the experiment, I compare three pairs of mechanisms. In each pair, both mechanisms implement the same allocation rule. One mechanism is obviously strategy-proof. The other mechanism is strategy-proof, but not obviously strategy-proof. Standard theory predicts that both mechanisms result in dominant strategy play, and have identical outcomes. Instead, subjects play the dominant

[7] Of course, if we do not impose the additional structure of a binary allocation problem, then there exist OSP mechanisms that are not monotone price mechanisms. This paper contains several examples.

strategy at significantly higher rates under the OSP mechanism, compared to the mechanism that is just SP. This effect occurs for all three pairs of mechanisms, and persists even after playing each mechanism five times with feedback.

The rest of the paper proceeds in the usual order. Section 2 reviews the literature. Section 3 provides formal definitions and characterizations. Section 4 covers applications. Section 5 reports the laboratory experiment. Section 6 concludes. Proofs omitted from the main text are in Appendix A.

2 Related Literature

It is widely acknowledged that ascending auctions are simpler for real bidders than second-price sealed-bid auctions (Ausubel, 2004). Laboratory experiments have investigated and corroborated this claim (Kagel et al., 1987; Kagel and Levin, 1993). More generally, Charness and Levin (2009) and Esponda and Vespa (2014) document that laboratory subjects find it difficult to reason case-by-case about hypothetical scenarios. This mental process is often called "contingent reasoning", but has received little formal treatment in economic theory.

There is also a strand of literature, including Vickrey's seminal paper, that observes that sealed-bid auctions raise problems of commitment (Vickrey, 1961; Rothkopf et al., 1990; Cramton, 1998). For instance, it may be difficult to prevent shill bidding without third-party verification. Rothkopf et al. (1990) argue that "robustness in the face of cheating and of fear of cheating" is important in determining auction form.

This paper formalizes and unifies both these strands of thought. It shows that mechanisms that do not require contingent reasoning are identical to mechanisms that can be run under bilateral commitment.

Eyster and Rabin (2005) and Esponda (2008) model agents who do not fully account for other agents' private information. An extensive literature on level-k reasoning[8] models agents who hold non-equilibrium beliefs about other agents' strategies. These are conceptually distinct from mistakes in contingent reasoning.
In particular, these models predict no deviations from dominant-strategy play in strategy-proof mechanisms.[9]

[8] Stahl and Wilson (1994, 1995); Nagel (1995); Camerer et al. (2004); Crawford and Iriberri (2007a,b).
[9] Level-0 agents may deviate from dominant strategy play in a strategy-proof mechanism. However, the behavior of level-0 agents is a primitive of the theory, and a sufficiently large population of level-0 agents can explain any data.

The Prisoner's Dilemma is a special case of game (i) in Figure 1; playing "defect" is not obviously dominant. On the other hand, if Agent 1 is informed of Agent 2's action before making his decision, then playing "defect" is obviously dominant. Shafir and Tversky (1992) find that laboratory subjects in a Prisoner's Dilemma are more likely to play the weakly dominant strategy when they are informed beforehand that their opponent has cooperated (84%) or when they are informed beforehand that their opponent has defected (97%), compared to when they are not informed of their opponent's strategy (63%).

This paper relates to the planned US auction to repurchase television broadcast rights. In this setting, complex underlying constraints have the result that Vickrey prices cannot be computed without large approximation errors. Milgrom and Segal (2015) propose the use of a clock auction to repurchase broadcast rights. They recommend this over an equivalent sealed-bid procedure, arguing that clock auctions make strategy-proofness self-evident "even for bidders who misunderstand or mistrust the auctioneer's calculations". The Milgrom-Segal clock auction uses advanced computational techniques to solve a challenging allocation problem. However, it is obviously strategy-proof.

In combinatorial auction problems, finding the optimal solution is NP-hard, so the Vickrey-Clarke-Groves mechanism may be computationally infeasible. Consequently, there has been substantial interest in posted-price mechanisms that approximate the optimum in polynomial time (Bartal et al., 2003; Feldman et al., 2014). These have the (previously unmodeled) advantage of being obviously strategy-proof.

For some mechanisms, there exist polynomial-time algorithms that verify that the mechanism is strategy-proof (Brânzei and Procaccia, 2015; Barthe et al., 2015). These are useful if agents do not trust that the mechanism is strategy-proof, but are otherwise computationally sophisticated.

OSP requires equilibrium in obviously dominant strategies.
This is distinct from O-solvability, a solution concept used in the computer science literature on decentralized learning (Friedman, 2002, 2004). Strategy S_i overwhelms S'_i if the worst possible outcome from S_i is strictly better than the best possible outcome from S'_i. O-solvability calls for the iterated deletion of overwhelmed strategies. One difference between the two concepts is that O-solvability is for normal form games, whereas OSP invokes a notion of an earliest point of departure, which is only defined in the extensive form. O-solvability is too strong for our current purposes, because almost

no games studied in mechanism design are O-solvable.[10]

3 Definition and Characterization

The planner operates in an environment consisting of:

1. A set of agents, N ≡ {1, ..., n}.
2. A set of outcomes, X.
3. A set of type profiles, Θ^N ≡ ×_{i∈N} Θ_i.
4. A utility function for each agent, u_i : X × Θ_i → R.

An extensive game form with consequences in X is a tuple ⟨H, ≺, A, 𝒜, P, δ_c, (ℐ_i)_{i∈N}, g⟩, where:

1. H is a set of histories, along with a binary relation ≺ on H that represents precedence.
   (a) ≺ is a partial order, and (H, ≺) form an arborescence.
   (b) h_∅ denotes the initial history: the h ∈ H such that ∄h' : h' ≺ h.
   (c) H has bounded depth, i.e.:
       ∃k ∈ N : ∀h ∈ H : |{h' ∈ H : h' ⪯ h}| ≤ k   (2)
   (d) Z ≡ {h ∈ H : ∄h' : h ≺ h'} denotes the set of terminal histories.
   (e) σ(h) denotes the set of immediate successors of h.
2. A is a set of actions.
3. 𝒜 : H \ {h_∅} → A labels each non-initial history with the last action taken to reach it.
   (a) 𝒜 is one-to-one on σ(h).
   (b) A(h) denotes the actions available at h:
       A(h) ≡ ∪_{h'∈σ(h)} 𝒜(h')   (3)

[10] For instance, neither ascending clock auctions nor second-price sealed-bid auctions are O-solvable.

4. P is a player function, P : H \ Z → N ∪ {c}.
5. δ_c is the chance function. It specifies a probability measure over chance moves.[11] d_c denotes some realization of chance moves: For any h where P(h) = c, d_c(h) ∈ A(h).
6. ℐ_i is a partition of {h : P(h) = i} such that:
   (a) A(h) = A(h') whenever h and h' are in the same cell of the partition.
   (b) For any I_i ∈ ℐ_i, we denote: P(I_i) ≡ P(h) for any h ∈ I_i, and A(I_i) ≡ A(h) for any h ∈ I_i.
   (c) Each action is available at only one information set: If a ∈ A(I_i), a' ∈ A(I'_j), and I_i ≠ I'_j, then a ≠ a'.
7. g is an outcome function. It associates each terminal history with an outcome: g : Z → X.

Additionally, we denote I_i ≺ I'_i if there exist h, h' such that:

1. h ≺ h'
2. h ∈ I_i
3. h' ∈ I'_i

We use ⪯ to denote the corresponding weak order.

A strategy S_i for agent i in game G specifies what agent i does at every one of her information sets: S_i(I_i) ∈ A(I_i). A strategy profile S_N = (S_i)_{i∈N} is a set of strategies, one for each agent. When we want to refer to the strategies used by different types of i, we use S_i^{θ_i} to denote the strategy assigned to type θ_i.

Let z_G(h, S_N, δ_c) be the lottery over terminal histories that results in game form G when we start from h and play proceeds according to (S_N, δ_c). z_G(h, S_N, d_c) is the result of one realization of the chance moves under δ_c. We sometimes write this as z_G(h, S_i, S_{-i}, d_c).

Let u_i^G(h, S_i, S_{-i}, d_c, θ_i) ≡ u_i(g(z_G(h, S_i, S_{-i}, d_c)), θ_i). This is the utility to agent i in game G, when we start at history h, play proceeds according to

[11] We could make the additional assumption that δ_c has full support on the available moves A(h) when it is called to play. This ensures a pleasing invariance property: It rules out games with zero-probability chance moves that do not affect play, but do affect whether a strategy is obviously dominant. However, a full support assumption is not necessary for any of the results that follow, so we do not make it here.

(S_i, S_{-i}, d_c), and the resulting outcome is evaluated according to preferences θ_i.

Definition 1. ψ_i(h) is the experience of agent i along history h. ψ_i(h) is an alternating sequence of information sets and actions. It is constructed as follows: Initialize t = 1, h^1 = h_∅, ψ_i = {}.

1. If t > 1 and P(h^{t-1}) = i, add 𝒜(h^t) to the end of ψ_i.
2. If P(h^t) = i, add the I_i such that h^t ∈ I_i to the end of ψ_i.
3. Terminate if h^t = h.
4. Set h^{t+1} := the h' ∈ H such that h' ∈ σ(h^t) and h' ⪯ h.
5. Set t := t + 1.
6. Go to 1.

We use Ψ_i to denote the set {ψ_i(h) : h ∈ H} ∪ {ψ_∅}, where ψ_∅ is the empty sequence.[12]

An extensive game form has perfect recall if for any information set I_i, for any two histories h and h' in I_i, ψ_i(h) = ψ_i(h'). We use ψ_i(I_i) to denote ψ_i(h) for h ∈ I_i.

[12] Mandating the inclusion of the empty sequence has the following consequence: By looking at the set Ψ_i, it is not possible to infer whether P(h_∅) = i.

Definition 2. 𝒢 is the set of all extensive game forms with consequences in X and perfect recall.

A choice rule is a function f : Θ^N → X. If we consider stochastic choice rules, then it is a function f : Θ^N → ΔX.[13]

[13] For readability, we generally suppress the latter notation, but the claims that follow hold for both deterministic and stochastic choice rules. Additionally, the set X could itself be a set of lotteries. The interpretation of this is that the planner can carry out one-time public lotteries at the end of the mechanism, where the randomization is observable and verifiable.

A solution concept C is a set-valued function with domain 𝒢 × Θ^N. It takes values in the set of strategy profiles.

Definition 3. f is C-implementable if there exists

1. G ∈ 𝒢

2. ((S_i^{θ_i})_{θ_i∈Θ_i})_{i∈N}

such that, for all θ ∈ Θ^N:

1. (S_i^{θ_i})_{i∈N} ∈ C(G, θ).
2. f(θ) = g(z_G(h_∅, (S_i^{θ_i})_{i∈N}, δ_c)).

Notice that each agent's strategy depends just on his own type. To ease notation, we abbreviate (S_i^{θ_i})_{i∈N} ≡ S_N^θ and ((S_i^{θ_i})_{θ_i∈Θ_i})_{i∈N} ≡ (S_N^θ)_{θ∈Θ^N}.

Our concern is with weak implementation: We require that S_N^θ ∈ C(G, θ), not {S_N^θ} = C(G, θ). This is to preserve the analogy with canonical results for strategy-proofness, many of which assume weak implementation (Myerson, 1981; Saks and Yu, 2005).

We use "(G, (S_N^θ)_{θ∈Θ^N}) C-implements f" to mean that (G, (S_N^θ)_{θ∈Θ^N}) fulfils the requirements of Definition 3. We use "G C-implements f" to mean that there exists (S_N^θ)_{θ∈Θ^N} such that (G, (S_N^θ)_{θ∈Θ^N}) fulfils the requirements of Definition 3.

Definition 4 (Weakly Dominant). In G, for agent i with preferences θ_i, S_i is weakly dominant if:

∀S'_i : ∀S_{-i} : E_{δ_c}[u_i^G(h_∅, S_i, S_{-i}, d_c, θ_i)] ≥ E_{δ_c}[u_i^G(h_∅, S'_i, S_{-i}, d_c, θ_i)]   (4)

Let α(S_i, S'_i) be the set of earliest points of departure for S_i and S'_i. That is, α(S_i, S'_i) contains the information sets where S_i and S'_i have made identical decisions at all prior information sets, but are making a different decision now.

Definition 5 (Earliest Points of Departure). I_i ∈ α(S_i, S'_i) if and only if:

1. S_i(I_i) ≠ S'_i(I_i)
2. There exist h ∈ I_i, S_{-i}, d_c such that h ⪯ z_G(h_∅, S_i, S_{-i}, d_c).
3. There exist h' ∈ I_i, S'_{-i}, d'_c such that h' ⪯ z_G(h_∅, S'_i, S'_{-i}, d'_c).

This definition can be extended to deal with mixed strategies[14], but pure strategies are sufficient for our current purposes.

[14] Three modifications are necessary: First, we change requirement 1 to be that both strategies specify different probability measures at I_i. Second, we adapt requirements 2 and 3 to hold for some realization of the mixed strategies. Finally, we include the recursive requirement: There does not exist I'_i ≺ I_i such that I'_i ∈ α(S_i, S'_i).

Definition 6 (Obviously Dominant). In G, for agent i with preferences θ_i, S_i is obviously dominant if:

∀S'_i : ∀I_i ∈ α(S_i, S'_i) :
sup_{h∈I_i, S_{-i}, d_c} u_i^G(h, S'_i, S_{-i}, d_c, θ_i) ≤ inf_{h∈I_i, S_{-i}, d_c} u_i^G(h, S_i, S_{-i}, d_c, θ_i)   (5)

Compare Definition 4 and Definition 6. Weak dominance is defined using h_∅, the history that begins the game. Consequently, if two extensive games have the same normal form, then they have the same weakly dominant strategies. Obvious dominance is defined with histories that are in information sets that are earliest points of departure. Thus two extensive games with the same normal form may not have the same obviously dominant strategies. Switching to a direct revelation mechanism may not preserve obvious dominance, so the standard revelation principle does not apply.

Definition 7 (Strategy-Proof). S_N ∈ SP(G, θ) if for all i, S_i is weakly dominant.

Definition 8 (Obviously Strategy-Proof). S_N ∈ OSP(G, θ) if for all i, S_i is obviously dominant.

A mechanism is weakly group-strategy-proof if there does not exist a coalition that could deviate and all be strictly better off ex post.

Definition 9 (Weakly Group-Strategy-Proof). S_N ∈ WGSP(G, θ) if there does not exist a coalition N̂ ⊆ N, deviating strategies Ŝ_N̂, non-coalition strategies S_{N\N̂} and chance moves d_c such that: For all i ∈ N̂:

u_i^G(h_∅, Ŝ_N̂, S_{N\N̂}, d_c, θ_i) > u_i^G(h_∅, S_N̂, S_{N\N̂}, d_c, θ_i)   (6)

Obvious strategy-proofness implies weak group-strategy-proofness.

Proposition 1. If S_N ∈ OSP(G, θ), then S_N ∈ WGSP(G, θ).

Proof. We prove the contrapositive. Suppose S_N ∉ WGSP(G, θ). Then there is a coalition N̂ that could jointly deviate to strategies Ŝ_N̂ and all be strictly better off. Fix S_{N\N̂} and d_c such that all agents in the coalition are strictly better off. Along the resulting terminal history, there must be a first agent i in the coalition to deviate from S_i to Ŝ_i. That first deviation happens at some information set I_i ∈ α(S_i, Ŝ_i). Since agent i strictly gains from that deviation, S_i is not obviously dominant, so S_N ∉ OSP(G, θ).
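For an agent who moves exactly once, Definition 6 reduces to a single sup/inf comparison at the lone information set. A minimal sketch of such a check, applied to the Prisoner's Dilemma discussed in Section 2; the payoff numbers are illustrative assumptions:

```python
def obviously_dominant(payoffs, s, actions):
    """Check Definition 6 for an agent who moves exactly once, so the
    lone information set is the earliest point of departure from every
    deviation. payoffs[(a, o)] is the agent's utility when she plays a
    and the opponents' profile is o. Returns True iff, for every
    deviation a != s: sup_o u(a, o) <= inf_o u(s, o)."""
    others = {o for (_, o) in payoffs}
    worst_s = min(payoffs[(s, o)] for o in others)
    return all(max(payoffs[(a, o)] for o in others) <= worst_s
               for a in actions if a != s)

# Prisoner's Dilemma payoffs for the row player (illustrative numbers).
# 'D' (defect) is weakly dominant, but not obviously dominant: the best
# case of cooperating (3) exceeds the worst case of defecting (1).
pd = {("D", "D"): 1, ("D", "C"): 4, ("C", "D"): 0, ("C", "C"): 3}
assert not obviously_dominant(pd, "D", ["C", "D"])

# If the agent learns the opponent's move before choosing (as in the
# Shafir and Tversky treatments), each contingency is its own
# information set and the comparison is pointwise; with the opponent
# known to have defected, defecting becomes obviously dominant.
pd_after_d = {("D", "opp-D"): 1, ("C", "opp-D"): 0}
assert obviously_dominant(pd_after_d, "D", ["C", "D"])
```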

Corollary 1. If S_N ∈ OSP(G, θ), then S_N ∈ SP(G, θ).

Proposition 1 suggests a question: Is a choice rule OSP-implementable if and only if it is WGSP-implementable? Proposition 5 in Subsection 4.3 shows that this is not so.

3.1 Cognitive limitations

In what sense is obvious dominance "obvious"? Intuitively, to see that S'_i is weakly dominated by S_i, the agent must understand the entire function u_i^G, and check that for all opponent strategy profiles S_{-i}, the payoff from S'_i is no better than the payoff from S_i. By contrast, to see that S'_i is obviously dominated by S_i, the agent need only know the range of the functions u_i^G(·, S_i, ·) and u_i^G(·, S'_i, ·) at any earliest point of departure. Thus, obvious dominance can be recognized even if the agent has a simplified mental model of the world. We now make this point rigorously.

We define an equivalence relation between mechanisms. In words, G and G' are i-indistinguishable if there exists a bijection from i's information sets and actions in G onto i's information sets and actions in G', such that:

1. ψ_i is an experience in G iff ψ_i's bijected partner is an experience in G'.
2. Outcome x could follow experience ψ_i in G iff x could follow ψ_i's bijected partner in G'.

Definition 10. Take any G, G' ∈ 𝒢, with information partitions ℐ_i, ℐ'_i and experience sets Ψ_i, Ψ'_i. G and G' are i-indistinguishable if there exists a bijection λ_{G,G'} from ℐ_i ∪ A(ℐ_i) to[15] ℐ'_i ∪ A'(ℐ'_i) such that:

1. ψ_i ∈ Ψ_i iff λ_{G,G'}(ψ_i) ∈ Ψ'_i
2. ∃z ∈ Z : g(z) = x, ψ_i(z) = ψ_i iff ∃z' ∈ Z' : g'(z') = x, ψ'_i(z') = λ_{G,G'}(ψ_i)

where we use λ_{G,G'}(ψ_i) to denote {λ_{G,G'}(ψ_i^k)}_{k=1}^T, where T ∈ N.

[15] This definition entails that λ_{G,G'} maps ℐ_i onto ℐ'_i and A(ℐ_i) onto A'(ℐ'_i). If an information set in G was mapped onto an action in G', then any experience involving that information set would, when passed through the bijection, result in a sequence that was not an experience, and ipso facto not an experience of G'.
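Definition 10 can be illustrated on the Figure 1 mechanisms, reduced to outcome tables. A minimal sketch; the tables and the numeric encoding of A ≻ B ≻ C ≻ D are reconstructed for illustration from the Introduction's discussion:

```python
# Agent 1's preferences A > B > C > D, encoded numerically (illustrative).
payoff = {"A": 4, "B": 3, "C": 2, "D": 1}

# Figure 1's mechanisms as outcome tables over (1's move, 2's move).
# In (i), L beats R case-by-case (A vs B under l, C vs D under r); in
# (ii), the case-by-case pairing is scrambled (A vs D under l, C vs B
# under r), so L is no longer weakly dominant.
mech_i  = {("L", "l"): "A", ("R", "l"): "B", ("L", "r"): "C", ("R", "r"): "D"}
mech_ii = {("L", "l"): "A", ("R", "l"): "D", ("L", "r"): "C", ("R", "r"): "B"}

def weakly_dominant(mech, move, other="R"):
    return all(payoff[mech[(move, a2)]] >= payoff[mech[(other, a2)]]
               for a2 in ("l", "r"))

def outcome_set(mech, move):
    """All outcomes that might follow the experience ending with `move`:
    everything an agent who cannot reason case-by-case can know."""
    return {mech[(move, a2)] for a2 in ("l", "r")}

assert weakly_dominant(mech_i, "L") and not weakly_dominant(mech_ii, "L")

# Yet the experience-to-outcome correspondences coincide, so the two
# mechanisms are 1-indistinguishable in the sense of Definition 10.
for move in ("L", "R"):
    assert outcome_set(mech_i, move) == outcome_set(mech_ii, move)
```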

For G and G' that are i-indistinguishable, we define λ_{G,G'}(S_i) to be the strategy that, given information set I'_i in G', plays λ_{G,G'}(S_i(λ⁻¹_{G,G'}(I'_i))).

The next theorem states that obviously dominant strategies are the strategies that can be recognized as weakly dominant, by an agent who has a simplified mental model of the world.

Theorem 1. For any i, θ_i: S_i is obviously dominant in G if and only if for every G' that is i-indistinguishable from G, λ_{G,G'}(S_i) is weakly dominant in G'.

The "if" direction permits a constructive proof. Suppose S_i is not obviously dominant in G. We apply a general procedure to construct G' that is i-indistinguishable from G, such that λ_{G,G'}(S_i) is not weakly dominant. The "only if" direction proceeds as follows: Suppose there exists some G' in the equivalence class of G, where λ_{G,G'}(S_i) is not weakly dominant. There exists some earliest information set in G' where i could gain by deviating. We then use λ⁻¹_{G,G'} to locate an information set in G, and a deviation S'_i, that do not satisfy the obvious dominance inequality. Appendix A provides the details.

One interpretation of Theorem 1 is that obviously dominant strategies are those that can be recognized given only a partial description of the game form. Another interpretation of Theorem 1 is that obviously dominant strategies are those that are robust to local misunderstandings, where the agent could mistake any G for any other i-indistinguishable G'.

3.2 Supported by bilateral commitments

Consider the following extended game form G̃ with consequences in X: As before, we have a set of agents N, outcomes X, and preference profiles ×_{i∈N} Θ_i. However, there is one player in addition to N: Player 0, the Planner. The Planner has an arbitrarily rich message space M. At the start of the game, each agent i ∈ N privately observes θ_i. Play proceeds as follows:

1. The Planner chooses one agent i ∈ N and sends a query m ∈ M, along with a set of acceptable replies R ⊆ M.
2. i observes (m, R), and chooses a reply r ∈ R.
3. The Planner observes r.
4. The Planner either selects an outcome x ∈ X, or chooses to send another query.

(a) If the Planner selects an outcome, the game ends.
(b) If the Planner chooses to send another query, go to Step 1.

For i ∈ N, i's strategy specifies what reply to give, as a function of his preferences, the past sequence of queries and replies between him and the Planner, and the current (m, R). That is:

S̃_i(θ_i, (m^k, R^k, r^k)_{k=1}^{t-1}, m^t, R^t) ∈ R^t    (7)

We use S̃_i^{θ_i} to denote the strategy played by type θ_i of agent i. Again we abbreviate (S̃_i^{θ_i})_{i∈N} ≡ S̃_N^θ and ((S̃_i^{θ_i})_{θ_i∈Θ_i})_{i∈N} ≡ (S̃_N^θ)_{θ∈Θ}. S̃_0 denotes a pure strategy for the Planner. We require that these have bounded length, which ensures that payoffs are well-defined.

Definition 11. S̃_0 is a pure strategy of bounded length if there exists k ∈ N such that: For all S̃_N: (S̃_0, S̃_N) results in the Planner sending k or fewer total queries. S̃_0 denotes the set of all pure strategies of bounded length.

The standard full-commitment paradigm is equivalent to allowing the Planner to commit to a unique S̃_0 ∈ S̃_0 (or some probability measure over S̃_0). Instead, we assume that for each agent, the Planner can commit to a subset Ŝ_0^i ⊆ S̃_0 that is measurable with respect to that agent's observations in the game. This is formalized as follows: Each (S̃_0, S̃_N) results in some observation o_i ≡ (o_i^C, o_i^X), consisting of a communication sequence between the Planner and agent i, o_i^C = (m^k, R^k, r^k)_{k=1}^T for T ∈ N, as well as some outcome o_i^X ∈ X. [16] O_i is the set of all possible observations (for agent i). We define φ_i : S̃_0 × S̃_N → O_i, where φ_i(S̃_0, S̃_N) is the unique observation resulting from (S̃_0, S̃_N). Next we define, for any Ŝ_0 ⊆ S̃_0 and any Ô_i ⊆ O_i:

Φ_i(Ŝ_0) ≡ {o_i : ∃S̃_0 ∈ Ŝ_0 : ∃S̃_N : o_i = φ_i(S̃_0, S̃_N)}    (8)

Φ_i^{-1}(Ô_i) ≡ {S̃_0 : ∀S̃_N : φ_i(S̃_0, S̃_N) ∈ Ô_i}    (9)

Definition 12. Ŝ_0 is i-measurable if there exists Ô_i such that:

Ŝ_0 = Φ_i^{-1}(Ô_i)    (10)

16 The communication sequence might be empty, which we represent using T = 0.
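The Planner's query-reply loop in Steps 1-4, with bounded length as in Definition 11, can be sketched as follows. The encoding of the Planner and the agents as Python callables is an illustrative assumption, not the paper's formalism.

```python
# A sketch of the extended game form: the Planner repeatedly picks an
# agent, sends a query with an allowed reply set, and either stops with
# an outcome or continues. Bounded length is enforced by a hard cap.

def run_protocol(planner, agents, max_queries):
    """planner: callable(history) -> ("query", i, m, R) or ("outcome", x).
    agents: dict i -> callable(own_history, m, R) -> r in R."""
    history = []
    own = {i: [] for i in agents}
    for _ in range(max_queries):          # bounded length (Definition 11)
        move = planner(history)
        if move[0] == "outcome":
            return move[1]
        _, i, m, R = move
        r = agents[i](own[i], m, R)
        assert r in R                     # replies must be acceptable
        own[i].append((m, R, r))          # what agent i can observe
        history.append((i, m, R, r))
    raise RuntimeError("planner strategy exceeded its bound")

# Illustrative one-query run: the Planner asks agent 1 a yes/no question
# and selects the reply as the outcome.
planner = lambda h: ("query", 1, "go?", {"yes", "no"}) if not h else ("outcome", h[-1][3])
agents = {1: lambda own, m, R: "yes"}
outcome = run_protocol(planner, agents, max_queries=5)
```

The per-agent log `own[i]` corresponds to the communication sequence o_i^C: it is exactly the part of play that agent i can observe, which is what i-measurability restricts.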

Intuitively, the i-measurable subsets of S̃_0 are those such that, if the Planner deviates, then there exists an agent strategy profile such that agent i detects the deviation. Formally, the i-measurable subsets of S̃_0 are the σ-algebra generated by Φ_i (where we impose the discrete σ-algebra on O_i).

Definition 13. A mixed strategy of bounded length over Ŝ_0 specifies a probability measure over a subset S̃'_0 ⊆ Ŝ_0 such that: There exists k ∈ N such that: For all S̃_0 ∈ S̃'_0 and all S̃_N: (S̃_0, S̃_N) results in the Planner sending k or fewer total queries. We use Δ(Ŝ_0) to denote the mixed strategies of bounded length over Ŝ_0; S̃_0^Δ denotes an element of such a set.

Definition 14. A choice rule f is supported by bilateral commitments (Ŝ_0^i)_{i∈N} if there exist S̃_0^Δ and (S̃_N^θ)_{θ∈Θ} such that:

1. For all i ∈ N: Ŝ_0^i is i-measurable.
2. For all θ: (S̃_0^Δ, S̃_N^θ) results in f(θ).
3. S̃_0^Δ ∈ Δ(∩_{i∈N} Ŝ_0^i).
4. For all i ∈ N, θ_i, S̃_{N\i}, and S̃_0 ∈ Ŝ_0^i: S̃_i^{θ_i} is a best response to (S̃_0, S̃_{N\i}) (given preferences θ_i).

The second requirement is that the Planner's mixed strategy and the agents' pure strategies result in the (distribution over) outcomes required by the choice rule. The third requirement is that the Planner's strategy is a (possibly degenerate) mixture over pure strategies compatible with every bilateral commitment. The fourth requirement is that each agent's assigned strategy is weakly dominant, when we consider the Planner as a player restricted to playing mixtures over strategies in Ŝ_0^i.

"Supported by bilateral commitments" is just one of many partial commitment regimes. This one requires that the commitment offered to each agent is measurable with respect to events that he can observe. In reality, contracts are seldom enforceable unless each party can observe breaches. Thus, "supported by bilateral commitments" is a natural case to study.

Theorem 2. f is OSP-implementable if and only if there exist bilateral commitments (Ŝ_0^i)_{i∈N} that support f.

The intuition behind the proof is as follows: A bilateral commitment Ŝ_0^i is essentially equivalent to the Planner committing to run only games in some i-indistinguishable equivalence class of G. Consequently, we can find a set of bilateral commitments that support f if and only if we can find some (G, (S^θ)_{θ∈Θ}) such that, for every i, for every θ_i, for every G' that is i-indistinguishable from G, λ_{G,G'}(S_i^{θ_i}) is weakly dominant in G'. By Theorem 1, this holds if and only if f is OSP-implementable. Appendix A provides the details.

3.3 A non-standard revelation principle

The standard revelation principle does not hold for OSP mechanisms; converting an OSP mechanism into the corresponding direct revelation mechanism may not preserve obvious dominance. However, there is a weaker principle that substantially simplifies the analysis. Here we define the pruning of a mechanism with respect to a set of strategies (one for each type of each agent). This is the new mechanism constructed by deleting all subtrees that are not reached given any type profile.

Definition 15 (Pruning). Take any G = ⟨H, P, δ_c, (I_i)_{i∈N}, g⟩ and (S^θ)_{θ∈Θ}. P(G, (S^θ)_{θ∈Θ}) ≡ ⟨H̃, P̃, δ̃_c, (Ĩ_i)_{i∈N}, g̃⟩ is the pruning of G with respect to (S^θ)_{θ∈Θ}, constructed as follows:

1. H̃ = {h ∈ H : ∃θ : ∃d_c : h is a subhistory of z^G(S_N^θ, d_c)}.
2. For all i, if I_i ∈ I_i, then (I_i ∩ H̃) ∈ Ĩ_i. [17]
3. (P̃, δ̃_c, g̃) are (P, δ_c, g) restricted to domain H̃.

It turns out that, if some mechanism OSP-implements a choice rule, then the pruning of that mechanism with respect to the equilibrium strategies OSP-implements that same choice rule. Thus, while we cannot restrict our attention to direct revelation mechanisms, we can restrict our attention to minimal mechanisms, where no histories are off the path of play.

Proposition 2. Let G̃ ≡ P(G, (S^θ)_{θ∈Θ}), and let (S̃^θ)_{θ∈Θ} be (S^θ)_{θ∈Θ} restricted to G̃. If (G, (S^θ)_{θ∈Θ}) OSP-implements f, then (G̃, (S̃^θ)_{θ∈Θ}) OSP-implements f.

17 Note that the empty history h_∅ is distinct from the empty set. That is to say, (I_i ∩ H̃) = ∅ does not entail that {h_∅} ∈ Ĩ_i.
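Definition 15 can be sketched computationally: starting from the root, follow each type profile's strategy and keep only the histories visited. The nested-tuple tree encoding below is an illustrative assumption, not the paper's notation, and chance moves are omitted for brevity.

```python
# A sketch of pruning: delete all subtrees not reached under any type
# profile's strategy. Histories are tuples of actions; internal nodes
# map to (player, actions); terminal histories map to ("END", outcome).

def reachable(tree, strategy_profiles):
    """strategy_profiles: one dict per type profile, mapping each
    on-path history to the chosen action. Returns the set of histories
    on some path of play."""
    kept = set()
    for sigma in strategy_profiles:
        h = ()
        while True:
            kept.add(h)
            player, _actions = tree[h]
            if player == "END":
                break
            h = h + (sigma[h],)           # follow this profile's play
    return kept

def prune(tree, strategy_profiles):
    keep = reachable(tree, strategy_profiles)
    return {h: node for h, node in tree.items() if h in keep}

# Illustrative example: one mover with actions "a" and "b"; every type
# plays "a", so the "b" subtree is deleted.
tree = {(): (1, ("a", "b")), ("a",): ("END", "x"), ("b",): ("END", "y")}
pruned = prune(tree, [{(): "a"}])
```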

4 Applications

4.1 Binary Allocation Problems

We now consider a canonical environment, (N, X, Θ, (u_i)_{i∈N}). Let Y ⊆ 2^N be the set of feasible allocations, with representative element y ∈ Y. An outcome consists of an allocation y ∈ Y and a transfer for each agent, X = Y × R^n. t ≡ (t_i)_{i∈N} denotes a profile of transfers. Preferences are quasilinear. Θ = ×_{i∈N} Θ_i, where Θ_i = [θ_i, θ̄_i], for 0 ≤ θ_i < θ̄_i < ∞. For θ_i ∈ Θ_i:

u_i(θ_i, y, t) = 1_{i∈y} θ_i + t_i    (11)

For instance, in a private-value auction with unit demand, i ∈ y iff agent i receives at least one unit of the good under allocation y. In a procurement auction, i ∈ y iff i does not incur costs of provision under allocation y; θ_i is agent i's cost of provision (equivalently, benefit of non-provision). In a public goods game, Y = {∅, N}.

An allocation rule is f_y : Θ → Y. A choice rule is thus a combination of an allocation rule and a payment rule, f = (f_y, f_t), where f_t : Θ → R^n. Similarly, for each game form G, we disaggregate the outcome function, g = (g_y, g_t). In this part, we concern ourselves only with deterministic allocation rules and payment rules, and thus suppress notation involving δ_c and d_c.

Definition 16. An allocation rule f_y is C-implementable if there exists f_t such that (f_y, f_t) is C-implementable. G C-implements f_y if there exists f_t such that G C-implements (f_y, f_t).

Definition 17. f_y is monotone if, for all i, for all θ_{-i}, 1_{i∈f_y(θ)} is weakly increasing in θ_i.

In this environment, f_y is SP-implementable if and only if f_y is monotone. This result is implicit in Spence (1974) and Mirrlees (1971), and is proved explicitly in Myerson (1981). [18] Moreover, if an allocation rule f_y is SP-implementable, then the accompanying payment rule f_t is essentially unique.

18 These monotonicity results are for weak SP-implementation rather than full SP-implementation. Weak SP-implementation requires S_i^{θ_i} ∈ SP_i(G, θ_i). Full SP-implementation requires {S_i^{θ_i}} = SP_i(G, θ_i). There are monotone allocation rules for which the latter requirement cannot be satisfied. For example, suppose there are two agents with unit demand. Agent 1 receives one unit iff v_1 > .5. Agent 2 receives one unit iff v_2 > v_1.

f_{t,i}(θ_i, θ_{-i}) = -1_{i∈f_y(θ)} inf{θ'_i : i ∈ f_y(θ'_i, θ_{-i})} + r_i(θ_{-i})    (12)

where r_i is some arbitrary deterministic function of the other agents' preferences. This follows easily by arguments similar to those in Green and Laffont (1977) and Holmström (1979).

We are interested in how these results change when we require OSP-implementation. In particular:

1. What condition on f_y characterizes the set of OSP-implementable allocation rules?
2. For OSP-implementation, is there an analogous essential uniqueness result on the extensive game form G?

We now define a monotone price mechanism. Informally, a monotone price mechanism is such that, for every i:

1. Either:
   (a) There is a going transfer associated with being in the allocation, which falls monotonically.
   (b) Whenever the going transfer falls, i chooses whether to keep bidding or to quit.
   (c) If i quits, then i is not in the allocation and receives a fixed transfer.
   (d) If the game ends:
       i. If i is in the allocation, then i receives the going transfer.
       ii. If i is not in the allocation, then i receives the fixed transfer.
2. Or:
   (a) There is a going transfer associated with not being in the allocation, which falls monotonically.
   (b) Whenever the going transfer falls, i chooses whether to keep bidding or to quit.
   (c) If i quits, then i is in the allocation and receives a fixed transfer.
   (d) If the game ends:
       i. If i is not in the allocation, then i receives the going transfer.
       ii. If i is in the allocation, then i receives the fixed transfer.

The "either" clause contains ascending clock auctions as a special case. The "or" clause contains descending-price procurement auctions: agents that do not win the contract receive a fixed zero transfer, and there is a positive payment associated with winning the contract (i.e., not being in the allocation), which starts high and counts downwards. The following formal definition pins down a few additional details about monotone price mechanisms.

Definition 18 (Monotone Price Mechanism). A game G is a monotone price mechanism if, for every i ∈ N, at every earliest information set I_i such that |A(I_i)| > 1:

1. Either: There exists a real number t_i^0, a function t_i^1 : {I'_i : I_i ∈ ψ_i(I'_i)} → R, and a set of actions A^0 such that:
   (a) For all a ∈ A^0, for all z such that a ∈ ψ_i(z): i ∉ g_y(z) and g_{t,i}(z) = t_i^0.
   (b) A^0 ∩ A(I_i) ≠ ∅.
   (c) For all I'_i, I''_i ∈ {I'_i : I_i ∈ ψ_i(I'_i)}:
       i. If I'_i ∈ ψ_i(I''_i), then t_i^1(I'_i) ≥ t_i^1(I''_i).
       ii. If I'_i is the penultimate information set in ψ_i(I''_i) and t_i^1(I'_i) > t_i^1(I''_i), then A^0 ∩ A(I''_i) ≠ ∅.
       iii. If I'_i ∈ ψ_i(I''_i) and t_i^1(I'_i) > t_i^1(I''_i), then |A(I''_i) \ A^0| = 1.
       iv. If |A(I''_i) \ A^0| > 1, then there exists a ∈ A(I''_i) such that: For all z such that a ∈ ψ_i(z): i ∈ g_y(z).
   (d) For all z where I_i ∈ ψ_i(z):
       i. Either: i ∉ g_y(z) and g_{t,i}(z) = t_i^0.
       ii. Or: i ∈ g_y(z) and
           g_{t,i}(z) = inf_{I'_i ∈ ψ_i(z)} t_i^1(I'_i)    (13)

2. Or: There exists a real number t_i^1, a function t_i^0 : {I'_i : I_i ∈ ψ_i(I'_i)} → R, and a set of actions A^1 such that:
   (a) For all a ∈ A^1, for all z such that a ∈ ψ_i(z): i ∈ g_y(z) and g_{t,i}(z) = t_i^1.
   (b) A^1 ∩ A(I_i) ≠ ∅.
   (c) For all I'_i, I''_i ∈ {I'_i : I_i ∈ ψ_i(I'_i)}:

       i. If I'_i ∈ ψ_i(I''_i), then t_i^0(I'_i) ≥ t_i^0(I''_i).
       ii. If I'_i is the penultimate information set in ψ_i(I''_i) and t_i^0(I'_i) > t_i^0(I''_i), then A^1 ∩ A(I''_i) ≠ ∅.
       iii. If I'_i ∈ ψ_i(I''_i) and t_i^0(I'_i) > t_i^0(I''_i), then |A(I''_i) \ A^1| = 1.
       iv. If |A(I''_i) \ A^1| > 1, then there exists a ∈ A(I''_i) such that: For all z such that a ∈ ψ_i(z): i ∉ g_y(z).
   (d) For all z where I_i ∈ ψ_i(z):
       i. Either: i ∈ g_y(z) and g_{t,i}(z) = t_i^1.
       ii. Or: i ∉ g_y(z) and
           g_{t,i}(z) = inf_{I'_i ∈ ψ_i(z)} t_i^0(I'_i)    (14)

Notice what this definition does not require. The going transfer need not be equal across agents. Whether and how much one agent's going transfer changes could depend on other agents' actions. Some agents could face a procedure consistent with the "either" clause, and other agents could face a procedure consistent with the "or" clause. Indeed, which procedure an agent faces could depend on other agents' actions.

Theorem 3. If (G, (S^θ)_{θ∈Θ}) OSP-implements f_y, then G̃ ≡ P(G, (S^θ)_{θ∈Θ}) is a monotone price mechanism.

Theorem 4. If G is a monotone price mechanism, then there exists f_y such that G OSP-implements f_y.

The next theorem characterizes the set of OSP-implementable allocation rules. It invokes two additional assumptions. First, we assume that f_y admits a finite partition, which means that we can partition the type space into a finite set of |N|-dimensional intervals, with the allocation rule constant within each interval. This assumption is largely technical. It is required because OSP is defined for extensive game forms such that play proceeds in discrete steps. OSP is not defined for continuous-time auctions, although we can approximate some of them arbitrarily finely. [19]

Second, we assume that the lowest type of each agent is never in the allocation, and has a zero transfer. This is a substantive restriction, and rules

19 Simon and Stinchcombe (1989) show that discrete time with a very fine grid can be a good proxy for continuous time. However, in their theory, players have perfect information about past activity in the system. Adapting this to our theory, where G includes all discrete-time game forms with imperfect information, is far from straightforward.

out, for instance, the subsidized trade mechanisms examined by Myerson and Satterthwaite (1983).

Definition 19. f_y admits a finite partition if there exists K ∈ N such that, for each i, there exists {θ_i^k}_{k=1}^K such that:

1. θ_i = θ_i^1 < θ_i^2 < ... < θ_i^K = θ̄_i.
2. For all θ_i, θ'_i, for all θ_{-i}: if there does not exist k such that θ_i ≤ θ_i^k < θ'_i, then f_y(θ_i, θ_{-i}) = f_y(θ'_i, θ_{-i}).

The use of a single K for all agents is without loss of generality. All vector inequalities in the following theorem are in the product order. That is, v ≥ v' iff for every index i, v_i ≥ v'_i. Similarly, v > v' iff for every index i, v_i > v'_i.

Theorem 5. Assume that:

1. f_y admits a finite partition.
2. For all i, for all θ_{-i}: i ∉ f_y(θ_i, θ_{-i}).

There exists G and f_t such that:

1. G OSP-implements (f_y, f_t).
2. For all i, for all θ_{-i}: f_{t,i}(θ_i, θ_{-i}) = 0.

if and only if:

1. f_y is monotone.
2. For all A ⊆ N, for all θ_{N\A}, for

Θ̄_A(θ_{N\A}) ≡ ∩_{i∈A} closure({θ_A : ∀θ'_{A\i} ≥ θ_{A\i} : i ∉ f_y(θ_i, θ'_{A\i}, θ_{N\A})})    (15)

(a) Θ̄_A(θ_{N\A}) is connected.
(b) There exists i ∈ A such that, if θ_A > sup{Θ̄_A(θ_{N\A})}, then i ∈ f_y(θ_A, θ_{N\A}).

The sets defined by Equation 15 are join-semilattices. [20] Since their supremum is also the supremum of a finite set of partition coordinates, it is well defined.

20 For a proof, see Lemma 6 in Appendix A.
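As a concrete instance of the "either" clause of Definition 18, a single-unit ascending clock auction can be simulated in a few lines. The price grid and valuations below are illustrative assumptions; each bidder follows the obviously dominant cutoff strategy (stay while the clock price is strictly below value).

```python
# A minimal sketch of an ascending clock auction, the leading special
# case of a monotone price mechanism: the going transfer for winning
# falls (the clock price rises), and a bidder who quits is out of the
# allocation with a fixed transfer of zero.

def ascending_clock(values, prices):
    """values: dict bidder -> valuation. prices: increasing price grid.
    Returns (winner, price) or (None, None) if no bidder remains."""
    active = set(values)
    for p in prices:                      # price rises monotonically
        staying = {i for i in active if values[i] > p}
        if len(staying) == 1:
            (winner,) = staying
            return winner, p              # winner pays the going price
        if not staying:
            return None, None             # simultaneous quits: no sale
        active = staying
    return None, None
```

With values {1: 7, 2: 5, 3: 3} and prices 0, 1, ..., 9, bidder 1 wins at a price of 5, the second-highest value, matching the outcome the second-price sealed-bid auction would produce under dominant-strategy play.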

4.2 Online Advertising Auctions

We now study an online advertising environment, which generalizes Edelman et al. (2007). There are n bidders, and n − 1 advertising positions. [21] Each position has an associated click-through rate α^k, where α^1 ≥ α^2 ≥ ... ≥ α^{n-1} > 0. For convenience, we define position n with α^n = 0. Each bidder's type is a vector, θ_i ≡ (θ_i^k)_{k=1}^n. A bidder with type θ_i who receives position k and transfer t_i has utility:

u_i(k, t_i, θ_i) = α^k θ_i^k + t_i    (16)

The marginal utility of moving to position k from position k', for type θ_i, is

m(k, k', θ_i) ≡ α^k θ_i^k − α^{k'} θ_i^{k'}    (17)

We make the following assumptions on the type space Θ:

A1. Finite:
|Θ| < ∞    (18)

A2. Higher slots are better:
∀k ≤ n − 1 : ∀θ_i ∈ Θ_i : ∀i ∈ N : m(k, k + 1, θ_i) ≥ 0    (19)

A3. Single-crossing: [22]
∀k ≤ n − 2 : ∀θ_i, θ_j ∈ Θ : ∀i, j ∈ N : If m(k, k + 1, θ_i) > m(k, k + 1, θ_j), then m(k + 1, k + 2, θ_i) > m(k + 1, k + 2, θ_j).    (20)

A1 is a technical assumption to accommodate extensive game forms that move in discrete steps. A2 and A3 are substantive assumptions. Edelman et al. (2007) assume that for all k, k', θ_i^k = θ_i^{k'}, which entails A2. If α^1 > α^2 > ... > α^{n-1} > 0, then their assumption also entails A3.

In this environment, the Vickrey-Clarke-Groves (VCG) mechanism selects the efficient allocation. Suppose we number each buyer according to the slot he wins. Then bidder i has VCG payment:

21 It is trivial to extend what follows to fewer than n − 1 advertising positions, but doing so would add notation.
22 This assumption is not identical to the single-crossing assumption in Yenmez (2014). For instance, Yenmez's condition permits the second inequality in Equation 20 to be weak.

t_i = Σ_{k=i}^{n-1} m(k, k + 1, θ_{k+1})    (21)

Edelman et al. (2007) produce a generalized English auction that ex post implements the efficient allocation rule in online advertising auctions. The generalized English auction has a unique perfect Bayesian equilibrium in continuous strategies. It is not SP, and therefore is not OSP. Here we produce an alternative ascending auction that OSP-implements the efficient allocation rule.

Proposition 3. Assume A1, A2, A3. There exists G that OSP-implements the efficient allocation rule and the VCG payments.

Proof. We construct G. Set p^{n-1} := 0, A^{n-1} := N. For l = 1, ..., n − 1:

1. Start the price at p^{n-l}.
2. Raise the price in small increments. If the current price is p^{n-l}, the next price is:

p'^{n-l} := inf_{θ_i ∈ Θ_i, i ∈ N} {m(n − l, n − l + 1, θ_i) : m(n − l, n − l + 1, θ_i) > p^{n-l}}    (22)

3. At each price, query each agent in A^{n-l} (in an arbitrary order), giving her the option to quit.
4. At any price p^{n-l}, if agent i quits, allocate her slot n − l + 1, and charge every agent in A^{n-l} \ i the price p^{n-l}.
5. Set

p^{n-l-1} := inf_{θ_i ∈ Θ_i, i ∈ N} {m(n − l − 1, n − l, θ_i) : m(n − l, n − l + 1, θ_i) ≥ p^{n-l}}    (23)

A^{n-l-1} := A^{n-l} \ i    (24)

It is an obviously dominant strategy for agent i to quit iff the price in round l is weakly greater than m(n − l, n − l + 1, θ_i).

Consider any round l. Payments from previous rounds are sunk costs. Quitting yields slot n − l + 1 at no additional cost, and removes the agent from future rounds.

Consider deviations where the earliest point of departure involves quitting. The current price p^{n-l} is weakly less than m(n − l, n − l + 1, θ_i). If the truth-telling strategy has the result that i quits in round l, this outcome is at least as good for i as quitting now. If the truth-telling strategy has the result that i does not quit in round l, then i is charged some amount less than his marginal value for moving up a slot, and the next starting price is p^{n-l-1} ≤ m(n − l − 1, n − l, θ_i), so the argument repeats.

Consider deviations where the earliest point of departure involves staying in. The current price p^{n-l} is weakly greater than m(n − l, n − l + 1, θ_i), so this either has the same result as quitting now, or raises i's position at marginal cost weakly above i's marginal utility. This is trivially true for the current round. Consider the next round, l + 1. If the starting price p^{n-l-1} is strictly less than m(n − l − 1, n − l, θ_i), then there exists some θ_j and j such that m(n − l − 1, n − l, θ_i) > m(n − l − 1, n − l, θ_j), and m(n − l, n − l + 1, θ_i) ≤ p^{n-l} ≤ m(n − l, n − l + 1, θ_j), which contradicts A3. Repeating the argument suffices to prove the claim for all rounds l' ≥ l.

By inspection, this mechanism and the specified strategy profile result in the efficient allocation and the VCG payments.

Internet transactions conducted by a central auctioneer raise commitment problems, and bidders may be legitimately concerned about shill bidding. If we consider such auctions as repeated games, reputation can ameliorate commitment problems, but the set of equilibria can be very large and prevent tractable analysis. Proposition 3 shows that, even if we do not consider such auctions as repeated games, there sometimes exist robust mechanisms that rely only on bilateral commitments. In the case of advertising auctions, the speed of transactions may require bidders to implement their strategies using automata.
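For the special case of Edelman et al. (2007), where each bidder has a single per-click value v_i (so m(k, k + 1, θ_i) = (α^k − α^{k+1}) v_i), the VCG payments of Equation (21) reduce to a backward recursion. The sketch below assumes, as in the text, that bidders are already indexed by the slot they win, i.e., values are sorted in decreasing order; the function name and encoding are illustrative.

```python
# A sketch of Equation (21), specialized to one per-click value per
# bidder: t_i = sum over k >= i of (alpha_k - alpha_{k+1}) * v_{k+1},
# computed as t_i = (alpha_i - alpha_{i+1}) * v_{i+1} + t_{i+1}.

def vcg_payments(alphas, values):
    """alphas: click rates alpha^1 >= ... >= alpha^{n-1} > 0 plus
    alpha^n = 0 (length n). values: per-click values in decreasing
    order. Returns each bidder's VCG payment as a nonnegative amount."""
    n = len(values)
    assert len(alphas) == n and alphas[-1] == 0
    t = [0.0] * n                       # the last bidder (null slot) pays 0
    for i in range(n - 2, -1, -1):      # backward recursion over slots
        t[i] = (alphas[i] - alphas[i + 1]) * values[i + 1] + t[i + 1]
    return t
```

For example, with click rates (2, 1, 0) and values (10, 6, 2), the payments are (8, 2, 0): the top bidder pays the externality he imposes on the two bidders below him.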
4.3 Top Trading Cycles

We now produce an impossibility result for OSP-implementation in a classic matching environment (Shapley and Scarf, 1974). There are n agents in the market, each endowed with an indivisible good. An agent's type is a vector θ_i ∈ R^n. Θ is the set of all n by n matrices of

real numbers. An outcome assigns one object to each agent. If agent i is assigned object k, he has utility θ_i^k. There are no money transfers. Following Roth (1982), we assume that the algorithm in question has an arbitrary, fixed way of resolving ties.

Given preferences θ and agents R ⊆ N, a top trading cycle is a set R' ⊆ R whose members can be indexed in a cyclic order:

R' = {i_1, i_2, ..., i_r = i_0}    (25)

such that each agent i_k likes i_{k+1}'s good more than any other good in R, resolving ties according to the fixed order.

Definition 20. f is a top trading cycle rule if, for all θ, f(θ) is equal to the output of the following algorithm:

1. Set R^1 := N.
2. For l = 1, 2, ...:
   (a) Choose some top trading cycle R' ⊆ R^l.
   (b) Carry out the indicated trades.
   (c) Set R^{l+1} := R^l \ R'.
   (d) Terminate if R^{l+1} = ∅.

Proposition 4. If f is a top trading cycle rule, then there exists G that SP-implements f.

This result is proved in Roth (1982).

Proposition 5. If n ≥ 3 and f is a top trading cycle rule, then there does not exist G that OSP-implements f.

Proof. SP-implementability is a hereditary property of functions. That is, if f is SP-implementable given domain Θ, then the subfunction f' = f with domain Θ' ⊆ Θ is SP-implementable. By inspection, the same is true for OSP-implementability. Thus, to prove Proposition 5, it suffices to produce a subfunction that is not OSP-implementable.

Consider the following subset Θ' ⊆ Θ. Take agents a, b, c, with endowed goods A, B, C. a has only two possible types, θ_a and θ'_a, such that

Either B ≻_a C ≻_a A ≻_a ... or C ≻_a B ≻_a A ≻_a ...    (26)

We make the symmetric assumption for b and c. We now argue by contradiction. Take any G pruned with respect to the truthful strategy profiles, such that (by Proposition 2) G OSP-implements f' = f for domain Θ'.

Consider some history h at which P(h) = a with a non-singleton action set. This cannot come before all such histories for b and c. Suppose not, and suppose B ≻_a C. If a chooses the action corresponding to B ≻_a C, and faces opponent strategies corresponding to C ≻_b A and B ≻_c A, then a receives good A. If a chooses the action corresponding to C ≻_a B, and faces opponent strategies corresponding to A ≻_c B, then a receives good C. Thus, it is not an obviously dominant strategy to choose the action corresponding to B ≻_a C. So a cannot be the first to have a non-singleton action set. By symmetry, this argument applies to b and c as well. So all of the action sets for a, b, and c are singletons, and G does not OSP-implement f', a contradiction.

Top trading cycles is weakly group-strategy-proof (Bird, 1984). Consequently, Proposition 5 shows that the OSP-implementable choice rules are not identical to the WGSP-implementable choice rules.

5 Laboratory Experiment

Are obviously strategy-proof mechanisms easier for real people to understand? The following laboratory experiment provides a straightforward test: We compare pairs of mechanisms that implement the same allocation rule. One mechanism in each pair is SP, but not OSP. The other mechanism is OSP. Standard game theory predicts that both mechanisms will produce the same outcome. We are interested in whether subjects play the dominant strategy at higher rates under OSP mechanisms.

5.1 Experiment Design

The experiment is an across-subjects design, comparing three pairs of games. There are four players in each game.

For the first pair, we compare the second-price auction (2P) and the ascending clock auction (AC). In both these games, subjects bid for a money prize. Subjects have induced affiliated private values; if a subject wins the prize, he earns an amount equal to the value of the prize, minus his payments from the auction.
For each subject, his value for the prize is equal to a group

draw plus a private adjustment. The group draw is uniformly distributed between $10 and $110. The private adjustment is uniformly distributed between $0 and $20. All money amounts in these games are in 25-cent increments. Each subject knows his own value, but not the group draw or the private adjustment. [23]

2P is SP, but not OSP. In 2P, subjects submit their bids simultaneously. The highest bidder wins the prize, and makes a payment equal to the second-highest bid. Bids are constrained to be between $0 and $ [24]

AC is OSP. In AC, the price starts at a low value (the highest $25 increment that is below the group draw), and counts upwards, up to a maximum of $150. Each bidder can quit at any point. When only one bidder is left, that bidder wins the object at the current price.

Previous studies comparing second-price auctions to ascending clock auctions have small sample sizes, given that when the same subjects play a sequence of auctions, these are plainly not independent observations. Kagel et al. (1987) compare 2 groups playing second-price auctions to 2 groups playing ascending clock auctions. Harstad (2000) compares 5 groups playing second-price auctions to 3 groups playing ascending clock auctions. (The comparison is not the main goal of either experiment.) Other studies find similar results for second-price auctions (Kagel and Levin, 1993) and for ascending clock auctions (McCabe et al., 1990), but these do not directly compare the two formats with the same value distribution and the same subject pool. When we compare 2P and AC, we can see this as a high-powered replication of Kagel et al. (1987), since we now observe 18 groups playing 2P and 18 groups playing AC. [25]

For the second pair, we compare the second-price plus-X auction (2P+X) and the ascending clock plus-X auction (AC+X). Subjects' values are drawn as before. However, there is an additional random variable X, which is uniformly distributed between $0 and $3. Subjects are not told the value of X until after the auction.

2P+X is SP, but not OSP.
In 2P+X, subjects submit their bids simultaneously. The highest bidder wins the prize if and only if his bid exceeds the second-highest bid plus X. If the highest bidder wins the prize, then he makes a payment equal to the second-highest bid plus X. Otherwise, no agent wins the prize, and no payments are made. In this game, it is a dominant strategy to submit a bid equal to your value.

23 We use affiliated private values for two reasons. First, in strategy-proof auctions with independent private values, incentives for truthful bidding are weak for bidders with values near the extremes. Affiliation strengthens incentives for these bidders. Second, Kagel et al. (1987) use affiliated private values, and the first part of the experiment is designed to replicate their results.
24 In both 2P and AC, if there is a tie for the highest bid, then no bidder wins the object.
25 I am not aware of any previous laboratory experiment that directly compares second-price and ascending clock auctions, holding constant the value distribution and subject pool, with more than five groups playing each format.

AC+X is OSP. In AC+X, the price starts at a low value (the highest $25 increment that is below the group draw), and counts upwards. Each bidder can quit at any point. When only one bidder is left, the price continues to rise for another X dollars, and then freezes. If the highest bidder keeps bidding until the price freezes, then she wins the prize at the final price. Otherwise, no agent wins the prize and no payments are made. In this game, it is an obviously dominant strategy to keep bidding if the price is strictly below your value, and quit otherwise.

Some subjects might find 2P or AC familiar, since such mechanisms occur in some natural economic environments. Differences in subject behavior might be caused by different degrees of familiarity with the mechanism. 2P+X and AC+X are novel mechanisms that subjects are unlikely to find familiar. 2P+X and AC+X can be seen as perturbations of 2P and AC; the underlying allocation rule is made more complex while preserving the SP-OSP distinction. Thus, comparing 2P+X and AC+X indicates whether the distinction between SP and OSP mechanisms holds for novel and more complicated auction formats.

In the third pair of games, subjects may receive one of four common-value money prizes. The four prize values are drawn, uniformly at random and without replacement, from the set:

{$0.00, $0.25, $0.50, $0.75, $1.00, $1.25}    (27)

Subjects observe the values of all four prizes at the start of each game. In a strategy-proof random serial dictatorship (SP-RSD), subjects are informed of their priority score, which is drawn uniformly at random from the integers 1 to 10. They then simultaneously submit ranked lists of the four prizes. Players are processed sequentially, from the highest priority score to the lowest.
Ties in priority score are broken randomly. Each player is assigned the highest-ranked prize on his list, among the prizes that have not yet been assigned. It is a dominant strategy to rank the prizes in order of their money value. SP-RSD is SP, but not OSP.

In an obviously strategy-proof random serial dictatorship (OSP-RSD), subjects are informed of their priority score. Players take turns, from the highest priority score to the lowest. When a player takes his turn, he is

Table 1: Mechanisms in each treatment

              10 rounds   10 rounds   10 rounds
Treatment 1   AC          AC+X        OSP-RSD
Treatment 2   2P          2P+X        SP-RSD
Treatment 3   AC          AC+X        SP-RSD
Treatment 4   2P          2P+X        OSP-RSD

shown the prizes that have not yet been taken, and picks one of them. It is an obviously dominant strategy to pick the available prize with the highest money value.

SP-RSD and OSP-RSD differ from the auctions in several ways. The auctions are private-value games of incomplete information, whereas SP-RSD and OSP-RSD are common-value games of complete information. In the auctions, subjects face two sources of strategic uncertainty: They are uncertain about their opponents' valuations, and they are uncertain about their opponents' strategies (a function of valuations). By contrast, in SP-RSD and OSP-RSD, subjects face no uncertainty about their opponents' valuations. Unlike the auctions, SP-RSD and OSP-RSD are constant-sum games, such that one player's action cannot affect total player surplus. Any effect that persists in both the auctions and the serial dictatorships is difficult to explain using social preferences, since such theories typically make different predictions for constant-sum and non-constant-sum games. Thus, in comparing SP-RSD and OSP-RSD, we test whether the SP-OSP distinction has empirical support in mechanisms that are very different from auctions.

At the start of the experiment, subjects are randomly assigned into groups of four. These groups persist throughout the experiment. Consequently, each group's play can be regarded as a single independent observation in the statistical analysis. Each group either plays 10 rounds of AC, followed by 10 rounds of AC+X, or plays 10 rounds of 2P, followed by 10 rounds of 2P+X. [26]

At the end of each round, subjects are shown the auction result, their own profit from this round, the winning bidder's profit from this round, and

26 If a stage game with dominant strategies is repeated finitely many times, then the resulting repeated game typically does not have a dominant strategy. The same holds for obviously dominant strategies.
Consequently, in interpreting these results as informing us about dominant-strategy play, we invoke an implicit narrow-framing assumption. The same assumption is made for other experiments in this literature, such as Kagel et al. (1987) and Kagel and Levin (1993).

the bids (in order from highest to lowest). Notice that subjects have 10 rounds of experience with a standard auction before being presented with its unusual +X variant. Thus, the data from +X auctions record moderately experienced bidders grappling with a new auction format.

Next, groups are re-randomized into either 10 rounds of OSP-RSD or 10 rounds of SP-RSD. At the end of each round, subjects see which prize they have obtained, and whether their priority score was the highest, or second-highest, and so on. Table 1 summarizes the design.

Subjects had printed copies of the instructions, and the experimenter read aloud the part pertaining to each 10-round segment just before that segment began. The instructions (correctly) informed subjects that their play in earlier segments would not affect the games in later segments. The instructions did not mention dominant strategies or provide recommendations for how to play, so as to prevent confounds from the experimenter demand effect. Instructions for both SP and OSP mechanisms are of similar length and similar reading levels [27], and can be found in Appendix C.

In every SP mechanism, each subject had 90 seconds to make his choice. Each subject could revise his choice as many times as he desired during the 90 seconds, and only his final choice would count. For OSP mechanisms, mean time to completion was seconds in AC, seconds in AC+X, and 40.5 seconds in OSP-RSD. However, the rules of the OSP mechanisms imply that not every subject was actively choosing throughout that time.

5.2 Administrative details

Subjects were paid $20 for participating, in addition to their profits or losses from every round of the experiment. On average, subjects made $. Subjects who made negative total profits received just the $20 participation payment. I conducted the experiment at the Ohio State University Experimental Economics Laboratory in August 2015, using z-Tree (Fischbacher, 2007). I recruited subjects from the student population using an online system. I administered 16 sessions, where each session involved 1 to 3 groups.
Each session lasted about 90 minutes. In total, the data include 144 subjects in 36 groups of 4 (with 9 groups in each treatment). % of subjects are

27 Both sets of instructions are approximately at a fifth-grade reading level according to the Flesch-Kincaid readability test, which is a standard measure of how difficult a piece of text is to read (Kincaid et al., 1975).
28 In two cases, network errors caused crashes which prevented a group from continuing

male, and 21% self-report as being economics majors.

5.3 Statistical Analysis

The data include 4 different auction formats, with 180 auctions per format, for a total of 720 auctions. [29] One natural summary statistic for each auction is the difference between the second-highest bid and the second-highest value. This is, equivalently, the difference between that auction's closing price and the closing price that would have occurred if all bidders had played the dominant strategy. Figure 2 displays histograms of the second-highest bid minus the second-highest value, for AC and 2P. Figure 3 does the same for AC+X and 2P+X. If all agents are playing the dominant strategy in an auction, then the histogram for that auction will be a point mass at zero.

There is a substantial difference between the empirical distributions for OSP and SP mechanisms. If we choose a random auction from the data, how likely is it to have a closing price within $2.00 of the dominant strategy price? An auction is 31 percentage points more likely to have a closing price within $2.00 of the dominant strategy price under AC (OSP) compared to 2P (SP). An auction is 28 percentage points more likely to have a closing price within $2.00 of the dominant strategy price under AC+X (OSP) compared to 2P+X (SP). Closing prices under 2P+X are systematically biased upwards (p = .0031). [30]

Table 2 displays the mean absolute difference between the second-highest bid and the second-highest value, for the first 5 rounds and the last 5 rounds of each auction. This measures the magnitude of errors under each mechanism. (Alternative measures of errors are in Appendix B.) Errors are systematically larger under SP than under OSP, and this difference is significant both in the standard auctions and in the novel +X auctions, and in both early and late rounds. To build intuition for effect sizes, consider that the expected profit of the winning bidder in 2P and AC is about $4.00 (given dominant strategy play). Thus, the average errors under 2P are larger than the theoretical prediction for total bidder surplus.

in the experiment.
I recruited new subjects to replace these groups. 29 In 2 out of 720 auctions, computer errors prevented bidders from correctly entering their bids. We omit these 2 observations, but including them does not change any of the results that follow. 30 For each group, we take the mean difference between the second-highest bid and the second-highest value. This produces one observation per group playing 2P+X, for a total of 18 observations, and we use a t-test for the null that these have zero mean.
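The two tests used throughout this section are standard, and can be sketched compactly. The following is an illustrative Python sketch only: the group-level numbers below are made up and merely stand in for the 18 per-group statistics per treatment, which are not reproduced in the text. It implements the one-sample t-test of footnote 30 and the unequal-variance (Welch) two-sample t-test used for the SP-versus-OSP comparisons.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Unbiased sample variance (divides by n - 1).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def one_sample_t(xs):
    """t-statistic for H0: the group-level means are centered at zero."""
    return mean(xs) / math.sqrt(var(xs) / len(xs))

def welch_t(xs, ys):
    """Two-sample t-statistic allowing for unequal variances (Welch)."""
    se2 = var(xs) / len(xs) + var(ys) / len(ys)
    return (mean(xs) - mean(ys)) / math.sqrt(se2)

# Hypothetical per-group mean deviations (2nd-highest bid minus 2nd-highest
# value), one observation per group; the real data have 18 groups per treatment.
sp_groups = [1.2, 0.8, 2.1, 1.5, 0.3, 1.9, 0.7, 1.1, 1.4]
osp_groups = [0.2, 0.5, 0.1, 0.4, 0.6, 0.3, 0.2, 0.5, 0.4]
print(round(one_sample_t(sp_groups), 3), round(welch_t(sp_groups, osp_groups), 3))
```

With per-group means as the unit of observation, each group contributes a single data point, which is the clustering convention used in Tables 2 and 3.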

Figure 2: Histogram: 2nd-highest bid minus 2nd-highest value, for AC and 2P

Figure 3: Histogram: 2nd-highest bid minus 2nd-highest value, for AC+X and 2P+X

Table 2: mean(abs(2nd bid - 2nd value))

Format       Rounds   SP        OSP       p-value
Auction      1-5      (1.25)    (1.05)
Auction      6-10     (1.18)    (0.33)
+X Auction   1-5      (0.60)    (0.41)
+X Auction   6-10     (0.87)    (0.33)

For each group, we take the mean absolute difference over each 5-round block. We then compute standard errors counting each group's 5-round mean as a single observation. (18 observations per cell, standard errors in parentheses.) p-values are computed using a two-sample t-test, allowing for unequal variances. Other empirical strategies yield similar results; see Appendix B for details.

There is some evidence of learning in 2P; errors are smaller in the last five rounds compared to the first five rounds (p = .045, paired t-test). For the other three auction formats, there is no significant evidence of learning. 31

To compare subject behavior under SP-RSD and OSP-RSD, we compute the proportion of games that do not end in the dominant strategy outcome. Under SP-RSD, 36.1% of games do not end in the dominant strategy outcome. Under OSP-RSD, 7.2% of games do not end in the dominant strategy outcome. Table 3 displays the empirical frequency of non-dominant strategy outcomes, by auction format and by 5-round blocks. Deviations from the dominant strategy outcome happen more frequently under SP-RSD than under OSP-RSD, and these differences are highly significant in both early and late rounds.

In SP-RSD, 29.0% of submitted preference lists contain errors. The most common error under SP is to swap the ranks of the highest and second-highest prizes, and report the list in order 2nd-1st-3rd-4th. This accounts for 38 out of 209 incorrect preference lists. However, errors are diverse: No permutation of {1st, 2nd, 3rd, 4th} accounts for more than a fifth of the incorrect preference lists.

In summary, subjects play the dominant strategy at higher rates in OSP mechanisms, as compared to SP mechanisms that should (according to standard theory) implement the same allocation rule. This difference is significant and substantial across all three pairs of mechanisms, and persists even after subjects gain experience.

31 p = .173 for AC, p = .694 for 2P+X, and p = .290 for AC+X.

Table 3: Proportion of serial dictatorships not ending in dominant strategy outcome

               SP        OSP       p-value
Rounds 1-5       %       7.8%      .0002
               (7.3%)    (3.3%)
Rounds 6-10      %       6.7%      .0011
               (5.2%)    (3.2%)
p-value

For each group, for each 5-round block, we record the error rate. We then compute standard errors counting each group's observed error rate as a single observation. (18 observations per cell, standard errors in parentheses.) When comparing SP to OSP, we compute p-values using a two-sample t-test, allowing for unequal variances. (Alternative empirical strategies yield similar results. See Appendix B for details.) When comparing early to late rounds of the same game, we compute p-values using a paired t-test.

6 Discussion

In this paper, we produced a compact definition of obviously strategy-proof mechanisms. We proved that a strategy is obviously dominant if and only if it can be recognized as weakly dominant by a cognitively limited agent. We proved that a choice rule is OSP-implementable if and only if it can be supported by bilateral commitments. For binary allocation problems, we characterized the OSP mechanisms and the OSP-implementable allocation rules. We produced one possibility result for a case with multi-minded bidders, and one impossibility result for a classic matching algorithm.

A formal standard of cognitive simplicity is valuable for several reasons. Firstly, a formal standard helps us to make simplicity an explicit design goal, by asking, "What is the optimal simple mechanism for this setting?" Secondly, a formal standard allows us to quantify trade-offs between simplicity and other design goals. For instance, one justification for using a complex mechanism is that no simple mechanism performs well for the problem at hand. Thirdly, a formal standard aids mutual understanding, since our definition of simplicity can be common knowledge, rather than relying on disparate individual intuitions.
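The behavioral gap documented above tracks the theory's central distinction. A strategy is obviously dominant when, from the earliest point at which it departs from an alternative strategy, its worst possible outcome is at least the best possible outcome of the alternative. A brute-force numeric sketch (illustrative only; the value, the overbid, and the bid grid are made up) shows why truthful bidding passes this test in the ascending clock auction but fails it in the sealed-bid second-price auction:

```python
# A bidder with value V faces one opponent whose bid lies on a coarse grid.
V = 10
opponent_bids = [o * 0.5 for o in range(41)]  # 0.0, 0.5, ..., 20.0

def second_price_payoff(bid, opp):
    # Win if your bid is strictly higher; the winner pays the opponent's bid.
    return V - opp if bid > opp else 0.0

# Sealed-bid 2P: compare truthful bidding (V) against an overbid (15) from the
# start of the game, which is the earliest point of departure.
worst_truthful = min(second_price_payoff(V, o) for o in opponent_bids)
best_overbid = max(second_price_payoff(15, o) for o in opponent_bids)
print(worst_truthful, best_overbid)  # the overbid's best case exceeds truth's worst case

# Ascending clock: the earliest point of departure for "stay past V" is the
# moment the clock reaches V. From there, quitting pays 0 for sure, while
# staying wins only at prices p >= V, for a payoff V - p <= 0.
prices_after_departure = [p for p in opponent_bids if p >= V]
worst_quit = 0.0
best_stay = max(V - p for p in prices_after_departure)
assert best_stay <= worst_quit  # quitting at V is obviously dominant
```

In the sealed-bid format the comparison fails (the overbid's best case is strictly better than truth's worst case), so truthful bidding is weakly dominant but not obviously dominant; in the clock format the comparison holds at every departure point.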
Mechanism design typically assumes that the planner can make binding and credible promises, even about events that the promisee does not observe.

Sometimes the full commitment assumption is justified, and the literature contains many excellent results for that case. However, sometimes the full commitment assumption is not justified, and we must make do with only partial commitment power. By studying OSP-implementation, we discover which standard results in mechanism design rely sensitively on the assumption of full commitment power, and learn how to design mechanisms that rely only on bilateral commitments.

Much remains to be done. There are many classic results for SP-implementation where OSP-implementation is an open question. For instance, in combinatorial auctions, the Vickrey-Clarke-Groves mechanism delivers first-best expected welfare but is not obviously strategy-proof (Vickrey, 1961; Clarke, 1971; Groves, 1973). However, when agent preferences are fractionally subadditive, there always exists an obviously strategy-proof mechanism that delivers at least half of first-best expected welfare (Feldman et al., 2014). 32 A natural open question is: What is the welfare-maximizing obviously strategy-proof mechanism for combinatorial auctions?

References

Allis, L. (1998). A knowledge-based approach of Connect Four: The game is over, white to move wins. Master's thesis, Vrije Universiteit, Amsterdam.

Ausubel, L. M. (2004). An efficient ascending-bid auction for multiple objects. American Economic Review.

Bartal, Y., Gonen, R., and Nisan, N. (2003). Incentive compatible multi-unit combinatorial auctions. In Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge. ACM.

Barthe, G., Gaboardi, M., Arias, E. J. G., Hsu, J., Roth, A., and Strub, P.-Y. (2015). Computer-aided verification in mechanism design. arXiv preprint.

Bird, C. G. (1984). Group incentive compatibility in a market with indivisible goods. Economics Letters, 14(4).

Brânzei, S. and Procaccia, A. D. (2015). Verifiably truthful mechanisms. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science. ACM.

32 This statement is entailed by Lemma 3.4 of Feldman et al. (2014).
The class of fractionally subadditive preferences includes all submodular preferences (Lehmann et al., 2006).

Camerer, C. F., Ho, T.-H., and Chong, J.-K. (2004). A cognitive hierarchy model of games. The Quarterly Journal of Economics.

Charness, G. and Levin, D. (2009). The origin of the winner's curse: a laboratory study. American Economic Journal: Microeconomics.

Clarke, E. H. (1971). Multipart pricing of public goods. Public Choice, 11(1).

Cramton, P. (1998). Ascending auctions. European Economic Review, 42(3).

Crawford, V. P. and Iriberri, N. (2007a). Fatal attraction: Salience, naivete, and sophistication in experimental hide-and-seek games. The American Economic Review.

Crawford, V. P. and Iriberri, N. (2007b). Level-k auctions: Can a nonequilibrium model of strategic thinking explain the winner's curse and overbidding in private-value auctions? Econometrica, 75(6).

Edelman, B., Ostrovsky, M., and Schwarz, M. (2007). Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. The American Economic Review.

Esponda, I. (2008). Behavioral equilibrium in economies with adverse selection. The American Economic Review, 98(4).

Esponda, I. and Vespa, E. (2014). Hypothetical thinking and information extraction in the laboratory. American Economic Journal: Microeconomics, 6(4).

Eyster, E. and Rabin, M. (2005). Cursed equilibrium. Econometrica, 73(5).

Feldman, M., Gravin, N., and Lucier, B. (2014). Combinatorial auctions via posted prices. CoRR.

Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2).

Friedman, E. J. (2002). Strategic properties of heterogeneous serial cost sharing. Mathematical Social Sciences, 44(2).

Friedman, E. J. (2004). Asynchronous learning in decentralized environments: A game-theoretic approach. In Collectives and the Design of Complex Systems. Springer.

Green, J. and Laffont, J.-J. (1977). Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica.

Groves, T. (1973). Incentives in teams. Econometrica: Journal of the Econometric Society.

Harstad, R. M. (2000). Dominant strategy adoption and bidders' experience with pricing rules. Experimental Economics, 3(3).

Hassidim, A., Marciano-Romm, D., Romm, A., and Shorrer, R. I. (2015). Strategic behavior in a strategy-proof environment. Working paper.

Holmström, B. (1979). Moral hazard and observability. The Bell Journal of Economics.

Kagel, J. H., Harstad, R. M., and Levin, D. (1987). Information impact and allocation rules in auctions with affiliated private values: A laboratory study. Econometrica, 55(6).

Kagel, J. H. and Levin, D. (1993). Independent private value auctions: Bidder behaviour in first-, second- and third-price auctions with varying numbers of bidders. The Economic Journal.

Kincaid, J. P., Fishburne Jr., R. P., Rogers, R. L., and Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for Navy enlisted personnel. Technical report, DTIC Document.

Lehmann, B., Lehmann, D., and Nisan, N. (2006). Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55(2).

McCabe, K. A., Rassenti, S. J., and Smith, V. L. (1990). Auction institutional design: Theory and behavior of simultaneous multiple-unit generalizations of the Dutch and English auctions. The American Economic Review.

Milgrom, P. and Segal, I. (2015). Deferred-acceptance auctions and radio spectrum reallocation. Working paper.

Mirrlees, J. A. (1971). An exploration in the theory of optimum income taxation. The Review of Economic Studies.

Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research, 6(1).

Myerson, R. B. and Satterthwaite, M. A. (1983). Efficient mechanisms for bilateral trading. Journal of Economic Theory, 29(2).

Nagel, R. (1995). Unraveling in guessing games: An experimental study. The American Economic Review.

Pathak, P. A. and Sönmez, T. (2008). Leveling the playing field: Sincere and sophisticated players in the Boston mechanism. The American Economic Review, 98(4).

Rees-Jones, A. (2015). Suboptimal behavior in strategy-proof mechanisms: Evidence from the residency match. SSRN working paper.

Roth, A. E. (1982). Incentive compatibility in a market with indivisible goods. Economics Letters, 9(2).

Rothkopf, M. H., Teisberg, T. J., and Kahn, E. P. (1990). Why are Vickrey auctions rare? Journal of Political Economy.

Saks, M. and Yu, L. (2005). Weak monotonicity suffices for truthfulness on convex domains. In Proceedings of the 6th ACM Conference on Electronic Commerce. ACM.

Shafir, E. and Tversky, A. (1992). Thinking through uncertainty: Nonconsequential reasoning and choice. Cognitive Psychology, 24(4).

Shapley, L. and Scarf, H. (1974). On cores and indivisibility. Journal of Mathematical Economics, 1(1).

Simon, L. K. and Stinchcombe, M. B. (1989). Extensive form games in continuous time: Pure strategies. Econometrica.

Spence, M. (1974). Competitive and optimal responses to signals: An analysis of efficiency and distribution. Journal of Economic Theory, 7(3).

Stahl, D. O. and Wilson, P. W. (1994). Experimental evidence on players' models of other players. Journal of Economic Behavior & Organization, 25(3).

Stahl, D. O. and Wilson, P. W. (1995). On players' models of other players: Theory and experimental evidence. Games and Economic Behavior, 10(1).

Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1):8-37.

Wilson, R. (1987). Game theoretic analysis of trading processes. In Bewley, T., editor, Advances in Economic Theory. Cambridge University Press.

Yenmez, M. B. (2014). Pricing in position auctions and online advertising. Economic Theory, 55(1).

A Proofs omitted from the main text

A.1 Theorem 1

Proof. First we prove the "if" direction. Fix agent 1 and preferences θ_1. Suppose that S_1 is not obviously dominant in G = ⟨H, ≺, A, A, P, δ_c, (I_i)_{i∈N}, g⟩. We need to demonstrate that there exists G̃ that is 1-indistinguishable from G, such that λ_{G,G̃}(S_1) is not weakly dominant in G̃. We proceed by construction.

Let (S'_1, I_1, h^{sup}, S^{sup}_{-1}, d^{sup}_c, h^{inf}, S^{inf}_{-1}, d^{inf}_c) be such that I_1 ∈ α(S_1, S'_1), h^{inf} ∈ I_1, h^{sup} ∈ I_1, and

u^G_1(h^{sup}, S'_1, S^{sup}_{-1}, d^{sup}_c, θ_1) > u^G_1(h^{inf}, S_1, S^{inf}_{-1}, d^{inf}_c, θ_1)   (28)

Since G is a game of perfect recall, we can pick (S^{inf}_{-1}, d^{inf}_c) such that h^{inf} ⪯ z^G(h, S_1, S^{inf}_{-1}, d^{inf}_c), by specifying that (S^{inf}_{-1}, d^{inf}_c) plays in a way consistent with h^{inf} at any h ⪯ h^{inf}. Likewise for h^{sup} and (S^{sup}_{-1}, d^{sup}_c). Suppose we have so done.

We now define another game G̃ that is 1-indistinguishable from G. Intuitively, we construct this as follows:

1. We add a chance move at the start of the game; chance can play L or R.
2. Agent 1 does not at any history know whether chance played L or R.
3. If chance plays L, then the game proceeds as in G.

4. If chance plays R, then the game proceeds mechanically as though all players in N \ 1 and chance played according to (S^{sup}_{-1}, d^{sup}_c) in G, with one exception:
5. If chance played R, we reach the information set corresponding to I_1, and agent 1 plays S_1(I_1), then the game henceforth proceeds mechanically as though all players in N \ 1 and chance played according to (S^{inf}_{-1}, d^{inf}_c) in G.

Formally, the construction proceeds as such: Ã = A ∪ {L, R}, where A ∩ {L, R} = ∅. There is a new starting history h̃_∅, with two successors σ(h̃_∅) = {h̃_L, h̃_R}, Ã(h̃_L) = L, Ã(h̃_R) = R, P̃(h̃_∅) = c. The subtree H^L ⊆ H̃ starting from h̃_L, ordered by ≺̃, is the same as the arborescence (H, ≺). (Ã, P̃, δ̃_c, g̃) are defined on H^L exactly as (A, P, δ_c, g) are on H. For j ≠ 1, Ĩ_j is defined as on H.

We now construct the subtree starting from h̃_R. Let h* be such that h* ∈ σ(h^{sup}), h* ⪯ z^G(h^{sup}, S'_1, S^{sup}_{-1}, d^{sup}_c).

H' ≡ {h ∈ H : ∃S_1 : h ⪯ z^G(h_∅, S_1, S^{sup}_{-1}, d^{sup}_c)} ∩ [{h ∈ H : P(h) = 1} ∪ {h ∈ Z}] \ {h ∈ H : h* ⪯ h}   (29)

In words, these are the histories that can be reached by some S_1 when facing (S^{sup}_{-1}, d^{sup}_c), where either agent 1 is called to play or that history is terminal, and such that those histories are not h* or its successors. Let h** be such that h** ∈ σ(h^{inf}), h** ⪯ z^G(h^{inf}, S_1, S^{inf}_{-1}, d^{inf}_c).

H'' ≡ {h ∈ H : ∃S_1 : h ⪯ z^G(h_∅, S_1, S^{inf}_{-1}, d^{inf}_c)} ∩ [{h ∈ H : P(h) = 1} ∪ {h ∈ Z}] ∩ {h ∈ H : h** ⪯ h}   (30)

In words, these are the histories that can be reached by some S_1 when facing (S^{inf}_{-1}, d^{inf}_c), where either agent 1 is called to play or that history is terminal, and such that those histories are h** or its successors.

We now paste these together. Let H^R be the rooted subtree ordered by ≺̃, for some bijection γ : H^R → H' ∪ H'', such that for all h̃, h̃' ∈ H^R, h̃ ≺̃ h̃' if and only if

1. EITHER: γ(h̃), γ(h̃') ∈ H' and γ(h̃) ≺ γ(h̃')

2. OR: γ(h̃), γ(h̃') ∈ H'' and γ(h̃) ≺ γ(h̃')
3. OR: γ(h̃) ≺ h* and h** ⪯ γ(h̃')

The root of this subtree exists and is unique; it corresponds to γ^{-1}(h), where h is the earliest history preceding z^G(h_∅, S_1, S^{sup}_{-1}, d^{sup}_c) where 1 is called to play. Let h̃_R' be the root of H^R. This completes the specification of H̃. For all h̃ ∈ H^R, we define:

1. g̃(h̃) = g(γ(h̃)) if h̃ is a terminal history.
2. P̃(h̃) = 1 if h̃ is not a terminal history.

For all h̃ ∈ H^R \ h̃_R', we define Ã(h̃) = A(h), for the unique (h̃', h) such that:

1. h̃ ∈ σ(h̃')
2. h ∈ σ(γ(h̃'))
3. h ⪯ γ(h̃)

We now specify the information sets for agent 1. Every h̃ ∈ H^L corresponds to a unique history in H. We use γ^L to denote the bijection from H^L to H. Let γ̂ be defined as γ^L on H^L and γ on H^R. 1's information partition Ĩ_1 is defined as such: for all h̃, h̃' ∈ H̃,

∃Ĩ_1 ∈ Ĩ_1 : h̃, h̃' ∈ Ĩ_1   (31)

if and only if

∃I_1 ∈ I_1 : γ̂(h̃), γ̂(h̃') ∈ I_1   (32)

All that remains is to define δ̃_c; we need only specify that at h̃_∅, c plays R with certainty. 33 G̃ = ⟨H̃, ≺̃, Ã, Ã, P̃, δ̃_c, (Ĩ_i)_{i∈N}, g̃⟩ is 1-indistinguishable from G. Every experience attached to some history in H^L corresponds to some experience in G, and vice versa. Moreover, any experience attached to some history in H^R could also be produced by some history in H^L.

33 If one prefers to avoid δ̃_c without full support, an alternative proof for games with |N| > 2 is to assign P̃(h̃_∅) = 2.

Let λ_{G,G̃} be the appropriate bijection from 1's information sets and actions in G onto 1's information sets and actions in G̃. Take arbitrary S̃_{-1}. Observe that since I_1 ∈ α(S_1, S'_1), λ_{G,G̃}(S'_1) and λ_{G,G̃}(S_1) result in the same histories following h̃_R, until they reach information set λ_{G,G̃}(I_1). Having reached that point, λ_{G,G̃}(S_1) leads to outcome g(z^G(h^{inf}, S_1, S^{inf}_{-1}, d^{inf}_c)) and λ_{G,G̃}(S'_1) leads to outcome g(z^G(h^{sup}, S'_1, S^{sup}_{-1}, d^{sup}_c)). Thus,

E_{δ̃_c}[u^{G̃}_1(h̃_∅, λ_{G,G̃}(S'_1), S̃_{-1}, d̃_c, θ_1)] = u^G_1(h^{sup}, S'_1, S^{sup}_{-1}, d^{sup}_c, θ_1) > u^G_1(h^{inf}, S_1, S^{inf}_{-1}, d^{inf}_c, θ_1) = E_{δ̃_c}[u^{G̃}_1(h̃_∅, λ_{G,G̃}(S_1), S̃_{-1}, d̃_c, θ_1)]   (33)

So λ_{G,G̃}(S_1) is not weakly dominant in G̃.

We now prove the "only if" direction. Take arbitrary G̃. Suppose λ_{G,G̃}(S_1) ≡ S̃_1 is not weakly dominant in G̃. We want to show that S_1 is not obviously dominant in G. There exist θ_1, S̃'_1 and S̃_{-1} such that:

E_{δ̃_c}[u^{G̃}_1(h̃_∅, S̃'_1, S̃_{-1}, d̃_c, θ_1)] > E_{δ̃_c}[u^{G̃}_1(h̃_∅, S̃_1, S̃_{-1}, d̃_c, θ_1)]   (34)

This inequality must hold for some realization of the chance function, so there exists d̃_c such that:

u^{G̃}_1(h̃_∅, S̃'_1, S̃_{-1}, d̃_c, θ_1) > u^{G̃}_1(h̃_∅, S̃_1, S̃_{-1}, d̃_c, θ_1)   (35)

Fix (S̃'_1, S̃_{-1}, d̃_c, θ_1). Define:

z^{G̃}(h̃_∅, S̃_1, S̃_{-1}, d̃_c) ≠ z^{G̃}(h̃_∅, S̃'_1, S̃_{-1}, d̃_c)   (36)

H̃' ≡ {h̃ ∈ H̃ : h̃ ⪯ z^{G̃}(h̃_∅, S̃_1, S̃_{-1}, d̃_c) and h̃ ⪯ z^{G̃}(h̃_∅, S̃'_1, S̃_{-1}, d̃_c)}   (37)

h̃'' ∈ H̃' : ∀h̃ ∈ H̃' : h̃ ⪯ h̃''   (38)

Since the opponent strategies and chance moves are held constant across both sides of Equation 36, P̃(h̃'') = 1 and h̃'' ∈ Ĩ_1, where S̃_1(Ĩ_1) ≠ S̃'_1(Ĩ_1).

Moreover, Ĩ_1 ∈ α(S̃_1, S̃'_1) and λ_{G̃,G}(Ĩ_1) ∈ α(S_1, S'_1), where we denote S'_1 ≡ λ_{G̃,G}(S̃'_1). Since G and G̃ are 1-indistinguishable, consider the experiences λ_{G̃,G}(ψ_1(z^{G̃}(h̃_∅, S̃_1, S̃_{-1}, d̃_c))) and λ_{G̃,G}(ψ_1(z^{G̃}(h̃_∅, S̃'_1, S̃_{-1}, d̃_c))). In G, λ_{G̃,G}(ψ_1(z^{G̃}(h̃_∅, S̃_1, S̃_{-1}, d̃_c))) could lead to outcome g̃(z^{G̃}(h̃_∅, S̃_1, S̃_{-1}, d̃_c)). We use (S^{inf}_{-1}, d^{inf}_c) to denote the corresponding opponent strategies and chance realizations that lead to that outcome. We denote h^{inf} ≡ h ∈ λ_{G̃,G}(Ĩ_1) : h ⪯ z^G(h, S_1, S^{inf}_{-1}, d^{inf}_c). In G, λ_{G̃,G}(ψ_1(z^{G̃}(h̃_∅, S̃'_1, S̃_{-1}, d̃_c))) could lead to outcome g̃(z^{G̃}(h̃_∅, S̃'_1, S̃_{-1}, d̃_c)). We use (S^{sup}_{-1}, d^{sup}_c) to denote the corresponding opponent strategies and chance realizations that lead to that outcome. We denote h^{sup} ≡ h ∈ λ_{G̃,G}(Ĩ_1) : h ⪯ z^G(h, S'_1, S^{sup}_{-1}, d^{sup}_c).

u^G_1(h^{sup}, S'_1, S^{sup}_{-1}, d^{sup}_c, θ_1) = u^{G̃}_1(h̃_∅, S̃'_1, S̃_{-1}, d̃_c, θ_1) > u^{G̃}_1(h̃_∅, S̃_1, S̃_{-1}, d̃_c, θ_1) = u^G_1(h^{inf}, S_1, S^{inf}_{-1}, d^{inf}_c, θ_1)   (39)

where h^{sup}, h^{inf} ∈ λ_{G̃,G}(Ĩ_1) and λ_{G̃,G}(Ĩ_1) ∈ α(S_1, S'_1). Thus S_1 is not obviously dominant in G.

A.2 Theorem 2

Proof. The key is to see that, for every G ∈ 𝒢, there is a corresponding S_0, and vice versa. We use S̄_0 to denote the support of S_0. In particular, observe the following isomorphism: Information sets in G are equivalent to sequences of past communication ((m^k, R^k, r^k)^{t-1}_{k=1}, m^t, R^t) under S_0. Available actions at some information set A(I_i) are equivalent to acceptable responses R^t. Thus, for any strategy in some game G, we can construct an equivalent strategy given appropriate S_0, and vice versa. Furthermore, fixing a chance realization d_c and agent strategies S_N uniquely results in some outcome. Similarly, fixing a realization of the Planner's mixed strategy S_0 ∈ S̄_0 and agent strategies S_N uniquely determines some outcome. Consequently, for any G ∈ 𝒢, there exists S_0 with the

Table 4: Equivalence between extensive game forms and Planner mixed strategies

  G ∈ 𝒢               S_0
  d_c                  S_0 ∈ S̄_0
  δ_c                  the probability measure specified by S_0
  g(z) for z ∈ Z       the Planner's choice of outcome when she ends the game
  I_i                  ((m^k, R^k, r^k)^{t-1}_{k=1}, m^t, R^t) consistent with some S_0 ∈ S̄_0
  A(I_i)               R^t
  ψ_i(z)               o_i consistent with some S_0 ∈ S̄_0 and S_N

same strategies available for each agent and the same resulting (probability measure over) outcomes, and vice versa. 34

The next step is to see that a bilateral commitment Ŝ^i_0 is equivalent to the Planner promising to run only games in some equivalence class that is i-indistinguishable. Suppose that there is some G that OSP-implements f. Pick some equivalent S_0 with support S̄_0. For each i ∈ N, specify the bilateral commitment Ŝ^i_0 ≡ Φ^{-1}_i(Φ_i(S̄_0)). These bilateral commitments support f. To see this, take any S'_0 ∈ Ŝ^i_0, with support S̄'_0. For any S''_0 ∈ S̄'_0, for any S_N, there exists S_0 ∈ S̄_0 and S'_N such that φ_i(S''_0, S_N) = φ_i(S_0, S'_N). By construction, G is such that: There exists z ∈ Z where ψ_i(z) and g(z) are equivalent to φ_i(S_0, S'_N). Thus, for G' that is equivalent to S'_0, every terminal history in G' results in the same experience for i and the same outcome as some terminal history in G. Consequently, G and G' are i-indistinguishable. Thus, by Theorem 1, the strategy assigned to agent i with type θ_i is weakly dominant in G', which implies that it is a best response to S'_0 and any S_{N∖i} in the bilateral commitment game. Thus, if f is OSP-implementable, then f can be supported by bilateral commitments.

Suppose that f can be supported by bilateral commitments (Ŝ^i_0)_{i∈N}, with requisite S_0 (with support S̄_0) and (S^θ_N)_{θ∈Θ}. Without loss of generality, let us suppose these are minimal bilateral commitments, i.e. Ŝ^i_0 = Φ^{-1}_i(Φ_i(S̄_0)). Pick G that is equivalent to S_0. G OSP-implements f. To see this, consider any G̃ such that G and G̃ are i-indistinguishable. Let S̃_0 denote the Planner strategy that corresponds to G̃.
At any terminal history z̃ in G̃, the resulting experience ψ_i(z̃) and outcome g̃(z̃)

34 Implicitly, this relies on the requirement that both G and S_0 have bounded length. If one had bounded length but the other could be unbounded, the resulting outcome would not be well defined and the equivalence would not hold.

are equivalent to the experience ψ_i(z) and outcome g(z) for some terminal history z in G. These in turn correspond to some observation o_i ∈ Φ_i(S̄_0). Thus S̃_0 ∈ Ŝ^i_0. Since f is supported by (Ŝ^j_0)_{j∈N}, S^{θ_i}_i is a best response (for type θ_i) to S̃_0 and any S_{N∖i}. Thus, the equivalent strategy S̃^{θ_i}_i is weakly dominant in G̃. Since this argument holds for all i-indistinguishable G̃, by Theorem 1, S^{θ_i}_i is obviously dominant in G. Thus, if f can be supported by bilateral commitments, then f is OSP-implementable.

A.3 Proposition 2

Proof. We prove the contrapositive. Suppose (G̃, (S̃^θ)_{θ∈Θ}) does not OSP-implement f. Then there exists some (i, θ_i, S̃^{θ_i}_i, S̃'_i, Ĩ_i) such that Ĩ_i ∈ α(S̃^{θ_i}_i, S̃'_i) and

u^{G̃}_i(h̃, S̃^{θ_i}_i, S̃_{-i}, d̃_c, θ_i) < u^{G̃}_i(h̃', S̃'_i, S̃'_{-i}, d̃'_c, θ_i)   (40)

for some (h̃, S̃_{-i}, d̃_c) and (h̃', S̃'_{-i}, d̃'_c). Notice that h̃ and h̃' correspond to histories h and h' in G. Moreover, we can define S_{-i} = S̃_{-i} at information sets containing histories that are shared by G and G̃, and specify S_{-i} arbitrarily elsewhere. We do the same for (S̃'_{-i}, d̃'_c) and (S̃_{-i}, d̃_c), to construct (S_{-i}, d_c) and (S'_{-i}, d'_c). But, starting from h and h' respectively, these result in the same outcomes as their partners in G̃. Thus,

u^G_i(h, S^{θ_i}_i, S_{-i}, d_c, θ_i) < u^G_i(h', S'_i, S'_{-i}, d'_c, θ_i)   (41)

We now show that h, h' ∈ I_i, for I_i ∈ α(S^{θ_i}_i, S'_i). This needs us to establish that the two strategies disagree at the information set in question, that they do not rule out reaching that information set, and that there is no earlier point of departure. By inspection, they disagree at I_i. Since S̃^{θ_i}_i and S̃'_i do not rule out reaching Ĩ_i, neither do S^{θ_i}_i and S'_i, since any opponent strategies and realizations of chance that enable us to reach I_i under S̃^{θ_i}_i and S̃'_i can be trivially extended to do the same under S^{θ_i}_i and S'_i. If S^{θ_i}_i and S'_i disagreed at some earlier information set, then there is some h'' ∈ Ĩ_i with proper subhistory h''' ∈ I'_i, for I'_i ∈ α(S^{θ_i}_i, S'_i). But at any information set in G̃ containing a proper subhistory of some history in Ĩ_i, S̃^{θ_i}_i and S̃'_i do not disagree. Thus, I'_i ∈ ψ_i(h'''), but I'_i ∉ ψ_i(h''), which contradicts the perfect recall assumption.

Consequently, h, h' ∈ I_i, for I_i ∈ α(S^{θ_i}_i, S'_i). Thus, (G, (S^θ)_{θ∈Θ}) does not OSP-implement f.

A.4 Theorem 3

Proof. Take any (G, (S^θ)_{θ∈Θ}) that implements (f_y, f_t). For any history h, we define

Θ_h ≡ {θ ∈ Θ : h is a subhistory of z^G(h_∅, S^θ)}   (42)

For information set I_i, we define

Θ_{h,i} ≡ {θ_i : ∃θ_{-i} : (θ_i, θ_{-i}) ∈ Θ_h}   (43)

Θ_{I_i} ≡ ∪_{h∈I_i} Θ_h   (44)

Θ_{I_i,i} ≡ {θ_i : ∃θ_{-i} : (θ_i, θ_{-i}) ∈ Θ_{I_i}}   (45)

Θ^1_{I_i,i} ≡ {θ_i : ∃θ_{-i} : (θ_i, θ_{-i}) ∈ Θ_{I_i} and i ∈ f_y(θ_i, θ_{-i})}   (46)

Θ^0_{I_i,i} ≡ {θ_i : ∃θ_{-i} : (θ_i, θ_{-i}) ∈ Θ_{I_i} and i ∉ f_y(θ_i, θ_{-i})}   (47)

Some observations about this construction:

1. Since player i's strategy depends only on his own type, Θ_{I_i,i} = Θ_{h,i} for all h ∈ I_i.
2. Θ_{I_i,i} = Θ^1_{I_i,i} ∪ Θ^0_{I_i,i}
3. Since SP requires 1_{i∈f_y(θ)} weakly increasing in θ_i, Θ^1_{I_i,i} dominates Θ^0_{I_i,i} in the strong set order.

Lemma 1. Suppose (G, (S^θ)_{θ∈Θ}) OSP-implements (f_y, f_t), where G = ⟨H, ≺, A, A, P, δ_c, (I_i)_{i∈N}, g⟩. For all i, for all information sets I_i, if:

1. θ'_i < θ''_i
2. θ'_i ∈ Θ^1_{I_i,i}
3. θ''_i ∈ Θ^0_{I_i,i}

then S^{θ'_i}_i(I_i) = S^{θ''_i}_i(I_i). Equivalently, for any I_i, there exists a*_{I_i} such that for all θ_i ∈ Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}, S^{θ_i}_i(I_i) = a*_{I_i}.

Suppose not. Take (i, I_i, θ'_i, θ''_i) constituting a counterexample to Lemma 1. Since θ'_i ∈ Θ^1_{I_i,i}, there exists h ∈ I_i and S_{-i} such that i ∈ g_y(z^G(h, S^{θ'_i}_i, S_{-i})). Fix t ≡ g_{t,i}(z^G(h, S^{θ'_i}_i, S_{-i})). Since θ''_i ∈ Θ^0_{I_i,i}, there exists h' ∈ I_i and S'_{-i} such that i ∉ g_y(z^G(h', S^{θ''_i}_i, S'_{-i})). Fix t' ≡ g_{t,i}(z^G(h', S^{θ''_i}_i, S'_{-i})). Since S^{θ'_i}_i(I_i) ≠ S^{θ''_i}_i(I_i) and θ'_i, θ''_i ∈ Θ_{I_i,i}, we have I_i ∈ α(S^{θ'_i}_i, S^{θ''_i}_i). Thus, OSP requires that

u_i(θ'_i, h, S^{θ'_i}_i, S_{-i}) ≥ u_i(θ'_i, h', S^{θ''_i}_i, S'_{-i})   (48)

which implies

θ'_i + t ≥ t'   (49)

and

u_i(θ''_i, h', S^{θ''_i}_i, S'_{-i}) ≥ u_i(θ''_i, h, S^{θ'_i}_i, S_{-i})   (50)

which implies

t' ≥ θ''_i + t   (51)

But θ''_i > θ'_i, so by (49),

θ''_i + t > t'   (52)

which contradicts (51). This proves Lemma 1. The last statement follows as a corollary of the rest.

Lemma 2. Suppose (G, (S^θ)_{θ∈Θ}) OSP-implements (f_y, f_t), where G = ⟨H, ≺, A, A, P, δ_c, (I_i)_{i∈N}, g⟩. For all i, for all I_i ∈ I_i, take any I_i such that Θ^1_{I_i,i} ∩ Θ^0_{I_i,i} ≠ ∅, and associated a*_{I_i}.

1. If there exists θ_i ∈ Θ^0_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, then there exists t^0_{I_i} such that:
(a) For all θ_i ∈ Θ^0_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, for all h ∈ I_i, for all S_{-i}, g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^0_{I_i}.
(b) For all θ_i ∈ Θ_{I_i,i} such that S^{θ_i}_i(I_i) = a*_{I_i}, for all h ∈ I_i, for all S_{-i}: if i ∉ g_y(z^G(h, S^{θ_i}_i, S_{-i})), then g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^0_{I_i}.

2. If there exists θ_i ∈ Θ^1_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, then there exists t^1_{I_i} such that:
(a) For all θ_i ∈ Θ^1_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, for all h ∈ I_i, for all S_{-i}, g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^1_{I_i}.
(b) For all θ_i ∈ Θ_{I_i,i} such that S^{θ_i}_i(I_i) = a*_{I_i}, for all h ∈ I_i, for all S_{-i}: if i ∈ g_y(z^G(h, S^{θ_i}_i, S_{-i})), then g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^1_{I_i}.

Take any type θ'_i ∈ Θ^0_{I_i,i} such that S^{θ'_i}_i(I_i) ≠ a*_{I_i}. Take any type θ''_i ∈ Θ^0_{I_i,i} such that S^{θ''_i}_i(I_i) = a*_{I_i}. (By Θ^1_{I_i,i} ∩ Θ^0_{I_i,i} ≠ ∅ there exists at least one such type.) Notice that I_i ∈ α(S^{θ'_i}_i, S^{θ''_i}_i). By Lemma 1, θ'_i ∉ Θ^1_{I_i,i}, and the game is pruned. Thus,

∀h ∈ I_i : ∀S_{-i} : i ∉ g_y(z^G(h, S^{θ'_i}_i, S_{-i}))   (53)

Since θ''_i ∈ Θ^0_{I_i,i},

∃h ∈ I_i : ∃S_{-i} : i ∉ g_y(z^G(h, S^{θ''_i}_i, S_{-i}))   (54)

OSP requires that type θ'_i does not want to (inf-sup) deviate. Thus,

inf_{h∈I_i, S_{-i}} g_{t,i}(z^G(h, S^{θ'_i}_i, S_{-i})) ≥ sup_{h∈I_i, S_{-i}} {g_{t,i}(z^G(h, S^{θ''_i}_i, S_{-i})) : i ∉ g_y(z^G(h, S^{θ''_i}_i, S_{-i}))}   (55)

OSP also requires that type θ''_i does not want to (inf-sup) deviate. This implies

inf_{h∈I_i, S_{-i}} {g_{t,i}(z^G(h, S^{θ''_i}_i, S_{-i})) : i ∉ g_y(z^G(h, S^{θ''_i}_i, S_{-i}))} ≥ sup_{h∈I_i, S_{-i}} g_{t,i}(z^G(h, S^{θ'_i}_i, S_{-i}))   (56)

The RHS of Equation 55 is weakly greater than the LHS of Equation 56. The RHS of Equation 56 is weakly greater than the LHS of Equation 55. Consequently all four terms are equal. Moreover, this argument applies to every θ'_i ∈ Θ^0_{I_i,i} such that S^{θ'_i}_i(I_i) ≠ a*_{I_i}, and every θ''_i ∈ Θ^0_{I_i,i} such that S^{θ''_i}_i(I_i) = a*_{I_i}. Since the game is pruned, θ_i satisfies (1b) iff θ_i ∈ Θ^0_{I_i,i} and

52 (I ) = a I. Ths proves part 1 of Lemma 2. Part 2 follows by symmetry; we omt the detals snce they nvolve only small notatonal changes to the above argument. S θ Lemma 3. Suppose (G, (S θ ) θ Θ ) OSP-mplements (f y, f t ) and P(G, (S θ ) θ Θ ) = G. Take any I such that Θ 1 I, Θ0 I,, and assocated a I. Let t 1 and t0 be defned as before. 1. If there exsts θ Θ 0 I, such that Sθ (I ) a I, then for all (h I, S, S ), f g y (z G (h, S S )), then g t, (z G (h, S, S )) t 0 sup{θ Θ 0 I, : Sθ (I ) a I }. 2. If there exsts θ Θ 1 I, such that Sθ (I ) a I, then for all (h I, S, S ), f / g y (z G (h, S S )), then g t, (z G (h, S, S )) nf{θ Θ 1 I, : Sθ (I ) a I } + t 1. Suppose that part 1 of Lemma 3 does not hold. Fx (h I, S, S ) such that g y (z G (h, S S )) and g t, (z G (h, S, S )) > t 0 sup{θ Θ 0 I, : S θ (I ) a I }. Snce G s pruned, we can fnd some θ Θ I, such that for every Ĩ {I I : I occurs n ψ(i )}, Sθ (Ĩ) = S (Ĩ). Fx that θ. Fx θ Θ0 I, such that Sθ (I ) a I and θ sup{θ Θ 0 I, : Sθ (I ) a I } ɛ. Snce G s pruned and θ / Θ 1 I, (by Lemma 1), t must be that S θ (I ) S θ (I ). By constructon, I α(s θ, Sθ ). OSP requres that, for all h I, S : whch entals whch entals u (θ, h, S θ, S ) u (θ, h, S θ, S ) (57) t 0 θ + g t, (z G (h, S, S )) (58) t 0 sup{θ Θ 0 I, : S θ (I ) a I } + ɛ g t, (z G (h, S, S )) (59) But, by hypothess, t 0 sup{θ Θ 0 I, : S θ (I ) a I } < g t, (z G (h, S, S )) (60) 52

Since this argument holds for all ε > 0, we can pick ε small enough to create a contradiction. This proves part 1 of Lemma 3. Part 2 follows by symmetry.

Lemma 4. Suppose (G, (S^θ)_{θ∈Θ}) OSP-implements (f_y, f_t) and P(G, (S^θ)_{θ∈Θ}) = G. Take any I_i such that |Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}| > 1, and associated a*_{I_i}.

1. If there exists θ_i ∈ Θ^0_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, then for all θ_i ∈ Θ^1_{I_i,i}, S^{θ_i}_i(I_i) = a*_{I_i}.
2. (Equivalently) If there exists θ_i ∈ Θ^1_{I_i,i} such that S^{θ_i}_i(I_i) ≠ a*_{I_i}, then for all θ_i ∈ Θ^0_{I_i,i}, S^{θ_i}_i(I_i) = a*_{I_i}.

Suppose Part 1 of Lemma 4 does not hold. Fix I_i, and choose θ'_i < θ''_i such that {θ'_i} ∪ {θ''_i} ⊆ Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}. Fix θ'''_i ∈ Θ^1_{I_i,i} such that S^{θ'''_i}_i(I_i) ≠ a*_{I_i}. By Lemma 1, if θ'''_i ∈ Θ^0_{I_i,i}, then S^{θ'''_i}_i(I_i) = a*_{I_i}, a contradiction. Thus, θ'''_i ∈ Θ^1_{I_i,i} \ Θ^0_{I_i,i}, and since Θ^1_{I_i,i} dominates Θ^0_{I_i,i} in the strong set order, θ''_i < θ'''_i.

Since θ'_i ∈ Θ^1_{I_i,i}, there exists h ∈ I_i and θ_{-i} such that (θ'_i, θ_{-i}) ∈ Θ_{I_i} and i ∈ g_y(z^G(h, S^{θ'_i}_i, S^{θ_{-i}}_{-i})). By Lemma 2, there exists a' ∈ A(I_i) such that a' ≠ a*_{I_i} and choosing a' ensures i ∉ y and t_i = t^0_{I_i}. Thus, by G SP,

θ'_i + g_{t,i}(z^G(h, S^{θ'_i}_i, S^{θ_{-i}}_{-i})) ≥ t^0_{I_i}   (61)

By θ''_i ∈ Θ^0_{I_i,i}, there exists h' ∈ I_i and θ'_{-i} such that i ∉ g_y(z^G(h', S^{θ''_i}_i, S^{θ'_{-i}}_{-i})). By Lemma 2,

g_{t,i}(z^G(h', S^{θ''_i}_i, S^{θ'_{-i}}_{-i})) = t^0_{I_i}   (62)

By G SP, i ∈ g_y(z^G(h, S^{θ'''_i}_i, S^{θ_{-i}}_{-i})) and g_{t,i}(z^G(h, S^{θ'''_i}_i, S^{θ_{-i}}_{-i})) = g_{t,i}(z^G(h, S^{θ'_i}_i, S^{θ_{-i}}_{-i})). Notice that I_i ∈ α(S^{θ''_i}_i, S^{θ'''_i}_i). Thus, OSP requires that θ''_i does not want to (inf-sup) deviate to θ'''_i's strategy, which entails:

g_{t,i}(z^G(h', S^{θ''_i}_i, S^{θ'_{-i}}_{-i})) ≥ θ''_i + g_{t,i}(z^G(h, S^{θ'''_i}_i, S^{θ_{-i}}_{-i}))   (63)

which entails

t^0_{I_i} ≥ θ''_i + g_{t,i}(z^G(h, S^{θ'_i}_i, S^{θ_{-i}}_{-i})) > θ'_i + g_{t,i}(z^G(h, S^{θ'_i}_i, S^{θ_{-i}}_{-i}))   (64)

which contradicts Equation 61. Part 2 is the contrapositive of Part 1. This proves Lemma 4.

Lemma 5. Suppose (G, (S^θ)_{θ∈Θ}) OSP-implements (f_y, f_t) and P(G, (S^θ)_{θ∈Θ}) = G. For all I_i, if |Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}| ≤ 1 and |A(I_i)| ≥ 2, then there exist t^1_{I_i} and t^0_{I_i} such that:

1. For all θ_i ∈ Θ_{I_i,i}, h ∈ I_i, S_{-i}:
(a) If i ∉ g_y(z^G(h, S^{θ_i}_i, S_{-i})) then g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^0_{I_i}
(b) If i ∈ g_y(z^G(h, S^{θ_i}_i, S_{-i})) then g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})) = t^1_{I_i}
2. If |Θ^1_{I_i,i}| > 0 and |Θ^0_{I_i,i}| > 0, then t^1_{I_i} = inf{θ_i ∈ Θ^1_{I_i,i}} + t^0_{I_i}

By G pruned, Θ_{I_i,i} ≠ ∅. Consider the case where Θ^1_{I_i,i} = ∅. Pick some θ_i ∈ Θ^0_{I_i,i} and some h ∈ I_i, S_{-i}. Fix t^0_{I_i} ≡ g_{t,i}(z^G(h, S^{θ_i}_i, S_{-i})). Suppose there exists some (θ'_i, θ'_{-i}) ∈ Θ_{I_i} such that f_{t,i}(θ'_i, θ'_{-i}) = t^{0'} ≠ t^0_{I_i}. Pick h' ∈ I_i in the terminal history z^G(h_∅, S^{θ'_i}, S^{θ'_{-i}}). By Equation 12, for all θ''_i ∈ Θ_{I_i,i}, f_{t,i}(θ''_i, θ'_{-i}) = t^{0'}. By G pruned and |A(I_i)| ≥ 2, we can pick θ''_i ∈ Θ^0_{I_i,i} such that S^{θ''_i}_i(I_i) ≠ S^{θ'_i}_i(I_i). Notice that I_i ∈ α(S^{θ''_i}_i, S^{θ'_i}_i). If t^{0'} > t^0_{I_i}, then

u_i(θ''_i, h, S^{θ''_i}_i, S_{-i}) = t^0_{I_i} < t^{0'} = u_i(θ''_i, h', S^{θ'_i}_i, S^{θ'_{-i}}_{-i})   (65)

so S^{θ''_i}_i is not obviously dominant for (i, θ''_i). If t^{0'} < t^0_{I_i}, then

u_i(θ'_i, h', S^{θ'_i}_i, S^{θ'_{-i}}_{-i}) = t^{0'} < t^0_{I_i} = u_i(θ'_i, h, S^{θ''_i}_i, S_{-i})   (66)

so S^{θ'_i}_i is not obviously dominant for (i, θ'_i). By contradiction, this proves Lemma 5 for this case. A symmetric argument proves Lemma 5 for the case where Θ^0_{I_i,i} = ∅.

Note that, if Lemma 5 holds at some information set I_i, it holds at all information sets I'_i that follow I_i. Thus, we need only consider some earliest information set I_i at which |Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}| ≤ 1 and |A(I_i)| ≥ 2.

Now we consider the case where Θ^1_{I_i,i} ≠ ∅ and Θ^0_{I_i,i} ≠ ∅. At every information set I'_i prior to I_i, |Θ^1_{I'_i,i} ∩ Θ^0_{I'_i,i}| > 1. Since Θ^1_{I_i,i} ≠ ∅ and Θ^0_{I_i,i} ≠ ∅, by Lemma 4, I_i is reached by some interval of types all taking the same action. Thus sup{θ_i ∈ Θ^0_{I_i,i}} = inf{θ_i ∈ Θ^1_{I_i,i}}.

Fix θ̂_i ∈ Θ^0_{I_i,i} such that θ̂_i ≥ sup{θ_i ∈ Θ^0_{I_i,i}} − ε. Choose corresponding ĥ ∈ I_i and θ̂_{-i} ∈ Θ_{I_i,-i} such that i ∉ g_y(z^G(ĥ, S^{θ̂_i}_i, S^{θ̂_{-i}}_{-i})). Define t^0_{I_i} ≡ g_{t,i}(z^G(ĥ, S^{θ̂_i}_i, S^{θ̂_{-i}}_{-i})). Fix θ̌_i ∈ Θ^1_{I_i,i} such that θ̌_i ≤ inf{θ_i ∈ Θ^1_{I_i,i}} + ε. Choose corresponding ȟ ∈ I_i and θ̌_{-i} ∈ Θ_{I_i,-i} such that i ∈ g_y(z^G(ȟ, S^{θ̌_i}_i, S^{θ̌_{-i}}_{-i})). Define t^1_{I_i} ≡ g_{t,i}(z^G(ȟ, S^{θ̌_i}_i, S^{θ̌_{-i}}_{-i})).

Suppose there exists some (θ_i, θ_{-i}) ∈ Θ_{I_i} such that i ∉ f_y(θ_i, θ_{-i}) and f_{t,i}(θ_i, θ_{-i}) = t^{0'} ≠ t^0_{I_i}. Since sup{θ_i ∈ Θ^0_{I_i,i}} = inf{θ_i ∈ Θ^1_{I_i,i}}, it follows that for all θ_{-i} ∈ Θ_{I_i,-i}, inf{θ'_i : i ∈ f_y(θ'_i, θ_{-i})} = inf{θ_i ∈ Θ^1_{I_i,i}}. Thus, by Equation 12, for all θ'_i ∈ Θ_{I_i,i}: f_{t,i}(θ'_i, θ_{-i}) = −1_{i∈f_y(θ'_i,θ_{-i})} · inf{θ_i ∈ Θ^1_{I_i,i}} + t^{0'}.

Fix h' ∈ I_i in the terminal history z^G(h_∅, S^{θ_i}, S^{θ_{-i}}). By |A(I_i)| ≥ 2, we can pick some θ'_i ∈ Θ_{I_i,i} such that S^{θ'_i}_i(I_i) ≠ S^{θ̂_i}_i(I_i). Notice that I_i ∈ α(S^{θ'_i}_i, S^{θ̂_i}_i). Either θ'_i ∈ Θ^0_{I_i,i} or θ'_i ∈ Θ^1_{I_i,i} \ Θ^0_{I_i,i}. Suppose θ'_i ∈ Θ^0_{I_i,i}. Suppose t^{0'} > t^0_{I_i}. By OSP,

u_i(θ̂_i, ĥ, S^{θ̂_i}_i, S^{θ̂_{-i}}_{-i}) ≥ u_i(θ̂_i, h', S^{θ'_i}_i, S^{θ_{-i}}_{-i})   (67)

which entails

t^0_{I_i} ≥ t^{0'} + 1_{i∈f_y(θ'_i,θ_{-i})}(θ̂_i − inf{θ_i ∈ Θ^1_{I_i,i}}) ≥ t^{0'} + 1_{i∈f_y(θ'_i,θ_{-i})}(−ε) ≥ t^{0'} − ε   (68)

and we can pick ε small enough to constitute a contradiction. Suppose t^{0'} < t^0_{I_i}. By OSP,

u_i(θ'_i, h', S^{θ'_i}_i, S^{θ_{-i}}_{-i}) ≥ u_i(θ'_i, ĥ, S^{θ̂_i}_i, S^{θ̂_{-i}}_{-i})   (69)

which entails

t^0_{I_i} ≤ t^{0'} + 1_{i∈f_y(θ'_i,θ_{-i})}(θ'_i − inf{θ_i ∈ Θ^1_{I_i,i}}) = t^{0'} + 1_{i∈f_y(θ'_i,θ_{-i})}(θ'_i − sup{θ_i ∈ Θ^0_{I_i,i}}) ≤ t^{0'}   (70)

which is a contradiction.

The case that remains is θ''_i ∈ Θ^1_{I_i,i} \ Θ^0_{I_i,i}. Then i ∈ f_y(θ''_i, θ'_{-i}) and f_{t,i}(θ''_i, θ'_{-i}) = −inf{θ_i ∈ Θ^1_{I_i,i}} + t'^0_i. Suppose t'^0_i > t^0_i. OSP requires:

u_i(θ̂_i, ĥ, S_i^{θ̂_i}, S_{-i}^{θ̂_{-i}}) ≥ u_i(θ̂_i, h', S_i^{θ''_i}, S_{-i}^{θ'_{-i}})   (71)

which entails

t^0_i ≥ θ̂_i − inf{θ_i ∈ Θ^1_{I_i,i}} + t'^0_i ≥ t'^0_i − ε   (72)

and we can pick ε small enough to constitute a contradiction.

Suppose t'^0_i < t^0_i. Since S_i^{θ''_i}(I_i) ≠ S_i^{θ̂_i}(I_i), either S_i^{θ̌_i}(I_i) ≠ S_i^{θ̂_i}(I_i) or S_i^{θ̌_i}(I_i) ≠ S_i^{θ''_i}(I_i). Moreover, f_{t,i}(θ̌_i, θ'_{-i}) = −1_{i∈f_y(θ̌_i,θ'_{-i})} inf{θ_i ∈ Θ^1_{I_i,i}} + t'^0_i.

Suppose S_i^{θ̌_i}(I_i) ≠ S_i^{θ̂_i}(I_i). OSP requires:

u_i(θ̌_i, h', S_i^{θ̌_i}, S_{-i}^{θ'_{-i}}) ≥ u_i(θ̌_i, ĥ, S_i^{θ̂_i}, S_{-i}^{θ̂_{-i}})   (73)

which entails

1_{i∈f_y(θ̌_i,θ'_{-i})}(θ̌_i − inf{θ_i ∈ Θ^1_{I_i,i}}) + t'^0_i ≥ t^0_i   (74)

which entails

1_{i∈f_y(θ̌_i,θ'_{-i})} ε + t'^0_i ≥ t^0_i   (75)

and we can pick ε small enough to yield a contradiction.

Suppose S_i^{θ̌_i}(I_i) ≠ S_i^{θ''_i}(I_i). By Equation 12, f_{t,i}(θ̌_i, θ'_{-i}) = −1_{i∈f_y(θ̌_i,θ'_{-i})} inf{θ_i ∈ Θ^1_{I_i,i}} + t'^0_i, and f_{t,i}(θ''_i, θ̂_{-i}) = −inf{θ_i ∈ Θ^1_{I_i,i}} + t^0_i. OSP requires

u_i(θ̌_i, h', S_i^{θ̌_i}, S_{-i}^{θ'_{-i}}) ≥ u_i(θ̌_i, ĥ, S_i^{θ''_i}, S_{-i}^{θ̂_{-i}})   (76)

which entails

1_{i∈f_y(θ̌_i,θ'_{-i})}(θ̌_i − inf{θ_i ∈ Θ^1_{I_i,i}}) + t'^0_i ≥ (θ̌_i − inf{θ_i ∈ Θ^1_{I_i,i}}) + t^0_i   (77)

which entails

1_{i∈f_y(θ̌_i,θ'_{-i})} ε + t'^0_i ≥ t^0_i   (78)

which, taking ε arbitrarily small, entails

t'^0_i ≥ t^0_i   (79)

a contradiction.

By the above argument, for all I_i satisfying the assumptions of Lemma 5, there is a unique transfer t^0_i for all terminal histories z passing through I_i such that i ∉ g_y(z). Equation 12 thus implies that there is a unique transfer t^1_i for all terminal histories z passing through I_i such that i ∈ g_y(z). Moreover, t^1_i = −inf{θ_i ∈ Θ^1_{I_i,i}} + t^0_i. This proves Lemma 5.

Now to bring this all together. We leave showing parts (1.c.v) and (2.c.v) of Definition 18 to the last. Take any (G, (S^θ)_{θ∈Θ}) that OSP-implements (f_y, f_t). Define G̃ ≡ P(G, (S^θ)_{θ∈Θ}) and (S̃^θ)_{θ∈Θ} as (S^θ)_{θ∈Θ} restricted to G̃. By Proposition 2, (G̃, (S̃^θ)_{θ∈Θ}) OSP-implements (f_y, f_t). We now characterize (G̃, (S̃^θ)_{θ∈Θ}).

For any player i, consider any information set I_i such that |A(I_i)| ≥ 2 and, for all prior information sets I'_i ∈ ψ_i(I_i) \ I_i, |A(I'_i)| = 1. By Lemma 1, there is a unique action a*_{I_i} taken by all types in Θ^1_{I_i,i} ∩ Θ^0_{I_i,i}. Either min(|Θ^1_{I_i,i}|, |Θ^0_{I_i,i}|) > 1 or min(|Θ^1_{I_i,i}|, |Θ^0_{I_i,i}|) ≤ 1.

If min(|Θ^1_{I_i,i}|, |Θ^0_{I_i,i}|) > 1, then by Lemma 4, G̃ pruned, and |A(I_i)| ≥ 2:

1. EITHER: There exists θ_i ∈ Θ^0_{I_i,i} such that S_i^{θ_i}(I_i) ≠ a*_{I_i}, and for all θ_i ∈ Θ^1_{I_i,i}, S_i^{θ_i}(I_i) = a*_{I_i}.
2. OR: There exists θ_i ∈ Θ^1_{I_i,i} such that S_i^{θ_i}(I_i) ≠ a*_{I_i}, and for all θ_i ∈ Θ^0_{I_i,i}, S_i^{θ_i}(I_i) = a*_{I_i}.

In the first case, by Lemma 2 there is some t^0_i such that, for all (S_i, S_{-i}) and all h ∈ I_i, if i ∉ g_y(z^G(h, S_i, S_{-i})) then g_{t,i}(z^G(h, S_i, S_{-i})) = t^0_i. Moreover, we can define a going transfer at all information sets I'_i such that I_i ∈ ψ_i(I'_i):

t^1_i(I'_i) ≡ min_{I''_i ∈ ψ_i(I'_i)} [t^0_i − sup{θ_i ∈ Θ^0_{I''_i,i} : S_i^{θ_i}(I''_i) ≠ a*_{I''_i}}]   (80)

Notice that this function falls monotonically as we move along the game tree: for any I'_i, I''_i such that I'_i ∈ ψ_i(I''_i), t^1_i(I'_i) ≥ t^1_i(I''_i). Moreover, by construction, at any I'_i, I''_i such that I''_i is the immediate successor of I'_i in i's experience, if t^1_i(I'_i) > t^1_i(I''_i), then there exists a ∈ A(I'_i) that yields i ∉ y, and by Lemma 2 this yields transfer t^0_i. We define A^0_i to include all such quitting actions; i.e. A^0_i is the set of all actions a such that:

1. a ∈ A(I_i) for some information set I_i of agent i
2. For all z such that a ∈ ψ(z): i ∉ g_y(z) and g_{t,i}(z) = t^0_i

Lemma 3 and SP together imply that, at any terminal history z, if i ∈ g_y(z), then

g_{t,i}(z) = inf_{I_i ∈ ψ_i(z)} t^1_i(I_i)   (81)

This holds because, if g_{t,i}(z) < inf_{I_i ∈ ψ_i(z)} t^1_i(I_i), then a type θ_i such that t^0_i − inf_{I_i ∈ ψ_i(z)} t^1_i(I_i) < θ_i < t^0_i − g_{t,i}(z) could profitably deviate to play a ∈ A^0_i at information set I_i.

In the second case, by Lemma 2 there is some t^1_i such that, for all (S_i, S_{-i}) and all h ∈ I_i, if i ∈ g_y(z^G(h, S_i, S_{-i})) then g_{t,i}(z^G(h, S_i, S_{-i})) = t^1_i. Moreover, we can define a going transfer at all information sets I'_i such that I_i ∈ ψ_i(I'_i):

t^0_i(I'_i) ≡ min_{I''_i ∈ ψ_i(I'_i)} [t^1_i + inf{θ_i ∈ Θ^1_{I''_i,i} : S_i^{θ_i}(I''_i) ≠ a*_{I''_i}}]   (82)

Once more:

1. This function falls monotonically as we move along the game tree.
2. At any I'_i, I''_i such that I''_i is the immediate successor of I'_i in i's experience, if t^0_i(I'_i) > t^0_i(I''_i), then there exists a ∈ A(I'_i) that yields i ∈ y and transfer t^1_i.
3. For any z, if i ∉ g_y(z), then

g_{t,i}(z) = inf_{I_i ∈ ψ_i(z)} t^0_i(I_i)   (83)

We define A^1_i symmetrically for this second case. Parts (1.c.) and (2.c.) of Definition 18 follow from Lemma 5. The above constructions suffice to prove Theorem 3 for cases where min(|Θ^1_{I_i,i}|, |Θ^0_{I_i,i}|) > 1. Cases where min(|Θ^1_{I_i,i}|, |Θ^0_{I_i,i}|) ≤ 1 are dealt with by Lemma 5.

Now for the last piece: we prove that parts (1.c.v) and (2.c.v) of Definition 18 hold. The proof of part (1.c.v) is as follows. Suppose we are facing the "either" clause of Definition 18, and for some I_i, |A(I_i) \ A^0_i| > 1. By part (1.c.), we know that the going transfer t^1_i can fall no further. Since G̃ is pruned and |A(I_i) \ A^0_i| > 1, there exist two distinct types of i,

θ'_i, θ''_i ∈ Θ_{I_i,i}, who do not quit at I_i and take different actions. Since neither quits at I_i and the going transfer falls no further, there exist θ'_{-i}, θ''_{-i} ∈ Θ_{I_i,-i} such that i ∈ f_y(θ'_i, θ'_{-i}) and i ∈ f_y(θ''_i, θ''_{-i}). So there exist (h' ∈ I_i, S'_{-i}) and (h'' ∈ I_i, S''_{-i}) such that

i ∈ g_y(z^G(h', S_i^{θ'_i}, S'_{-i}))   (84)

i ∈ g_y(z^G(h'', S_i^{θ''_i}, S''_{-i}))   (85)

g_{t,i}(z^G(h', S_i^{θ'_i}, S'_{-i})) = g_{t,i}(z^G(h'', S_i^{θ''_i}, S''_{-i})) = t^1_i(I_i)   (86)

WLOG suppose θ'_i < θ''_i. Suppose that there does not exist a ∈ A(I_i) such that, for all z such that a ∈ ψ(z), i ∈ g_y(z). Then there must exist (h''' ∈ I_i, S'''_{-i}) such that

i ∉ g_y(z^G(h''', S_i^{θ''_i}, S'''_{-i}))   (87)

g_{t,i}(z^G(h''', S_i^{θ''_i}, S'''_{-i})) = t^0_i   (88)

Note that I_i ∈ α(S_i^{θ'_i}, S_i^{θ''_i}). But then S_i^{θ''_i} is not obviously dominant, a contradiction, since

u_i^G(h''', S_i^{θ''_i}, S'''_{-i}, θ''_i) = t^0_i ≤ θ'_i + t^1_i(I_i) < θ''_i + t^1_i(I_i) = u_i^G(h', S_i^{θ'_i}, S'_{-i}, θ''_i)   (89)

(The first inequality holds because of type θ'_i's incentive constraint.) This shows that part (1.c.v) of Definition 18 holds. Part (2.c.v) is proved symmetrically.

A.5 Theorem 4

Proof. Take any monotone price mechanism G. For any i, the following strategy S_i is obviously dominant:

1. If i encounters an information set consistent with Clause 1 of Definition 18, then, from that point forward:
   (a) If θ_i + t^1_i(I_i) > t^0_i and there exists a ∈ A(I_i) \ A^0_i, play a ∈ A(I_i) \ A^0_i.

      i. If |A(I_i) \ A^0_i| > 1, then play a ∈ A(I_i) such that: for all z such that a ∈ ψ(z), i ∈ g_y(z).^35
   (b) Else play some a ∈ A^0_i.
2. If i encounters an information set consistent with Clause 2 of Definition 18, then, from that point forward:
   (a) If θ_i + t^1_i < t^0_i(I_i) and there exists a ∈ A(I_i) \ A^1_i, play a ∈ A(I_i) \ A^1_i.
      i. If |A(I_i) \ A^1_i| > 1, then play a ∈ A(I_i) such that: for all z such that a ∈ ψ(z), i ∉ g_y(z).
   (b) Else play some a ∈ A^1_i.

The above strategy is well-defined for any agent i in any monotone price mechanism, by inspection of Definition 18. Consider any deviating strategy S'_i. At any earliest point of departure, the agent will have encountered an information set consistent with either Clause 1 or Clause 2 of Definition 18.

Suppose that the agent has encountered an information set covered by Clause 1. Take some earliest point of departure I_i ∈ α(S_i, S'_i). Notice that, by (1.d) of Definition 18, no matter what strategy i plays, conditional on reaching I_i, either agent i is not in the allocation and receives t^0_i, or agent i is in the allocation and receives a transfer t̂ ≤ t^1_i(I_i).

Suppose θ_i + t^1_i(I_i) > t^0_i. Note that under S_i, conditional on reaching I_i, the agent either is not in the allocation and receives t^0_i, or is in the allocation and receives a transfer strictly above t^0_i − θ_i. If S'_i(I_i) ∈ A^0_i (i.e. if agent i quits), then the best outcome under S'_i is no better than the worst outcome under S_i. If S'_i(I_i) ∉ A^0_i, then, since S'_i(I_i) ≠ S_i(I_i), |A(I_i) \ A^0_i| > 1. Then, by (1.c.) of Definition 18, t^1_i will fall no further. So S_i(I_i) guarantees that i will be in the allocation and receive transfer t^1_i(I_i). But, by (1.d) of Definition 18, the best possible outcome under S'_i conditional on reaching I_i is no better, so the obvious dominance inequality holds.

Suppose θ_i + t^1_i(I_i) ≤ t^0_i. Then, under S_i, conditional on reaching I_i, agent i is not in the allocation and has transfer t^0_i.^36 However, under S'_i, either the outcome is the same, or agent i is in the allocation for some transfer t̂ ≤ t^1_i(I_i) ≤ t^0_i − θ_i. Thus, the best possible outcome under S'_i is no

^35 If |A^0_i ∩ A(I_i)| > 1, the agent chooses deterministically but arbitrarily.
^36 By (1.c.) of Definition 18, either i will have quit in the past, or i will have an opportunity to quit now, which he exercises.

better than the worst possible outcome under S_i, and the obvious dominance inequality holds. The argument proceeds symmetrically for Clause 2.

Notice that the above strategies result in some allocation and some payments, as a function of the type profile. We define these to be (f_y, f_t), such that G OSP-implements (f_y, f_t).

A.6 Theorem 5

Proof. Consider the sets used to construct Θ̄_A(θ_{N\A}):

{θ_A : ∀θ'_{A\i} ≥ θ_{A\i} : i ∉ f_y(θ_i, θ'_{A\i}, θ_{N\A})}   (90)

These are the type profiles θ_A = (θ_i, θ_{A\i}) such that, if all agents in A \ i have types at least as high as θ_{A\i} and all agents in N \ A have types θ_{N\A}, then the allocation rule requires that type θ_i is not satisfied.

Lemma 6. For all A ⊆ N, for all θ_{N\A}, Θ̄_A(θ_{N\A}) is a join-semilattice with respect to the product order on Θ_A.

Take any θ_A, θ'_A ∈ Θ̄_A(θ_{N\A}). We want to show that θ_A ∨ θ'_A ∈ Θ̄_A(θ_{N\A}). For all i ∈ A,

θ_A, θ'_A ∈ closure({θ_A : ∀θ'_{A\i} ≥ θ_{A\i} : i ∉ f_y(θ_i, θ'_{A\i}, θ_{N\A})})   (91)

The set on the RHS is upward-closed with respect to the product order on θ_{A\i}. Consider θ̌_A ≡ θ_A ∨ θ'_A. Its ith element has the property θ̌_i = max{θ_i, θ'_i}. WLOG, suppose θ_i ≥ θ'_i. Then, since θ̌_{A\i} ≥ θ_{A\i},

θ̌_A ∈ closure({θ_A : ∀θ'_{A\i} ≥ θ_{A\i} : i ∉ f_y(θ_i, θ'_{A\i}, θ_{N\A})})   (92)

Since the above argument holds for all i ∈ A, θ_A ∨ θ'_A ∈ Θ̄_A(θ_{N\A}). This concludes the proof of Lemma 6.

First we prove the "if" direction. We do this by constructing G (and the corresponding strategy profiles); f_t is specified implicitly. Fix, for each i, the partition points {θ_i^k}_{k=1}^{K_i}. Initialize k^0 := (1, 1, ..., 1), where k_i^0 denotes the ith element of this vector. Each agent i chooses whether to stay in the auction at price θ_i^{k_i^0}; i quits iff i has type θ_i^{k_i^0}. Set A^0 to be the agents that do not quit. (These are the active bidders.) Set S^0 := ∅. (These are the satisfied bidders.)

At each stage, we define θ^Q_{N\A^{l−1}} ≡ {θ_i^{k_i^{l−1}}}_{i∈N\A^{l−1}}. These are the recorded type (intervals) of the agents who are no longer active. For l = 1, 2, ...:

1. If A^{l−1} = ∅, then terminate the algorithm at allocation y = S^{l−1}.
2. If

(θ_i^{k_i^{l−1}})_{i∈A^{l−1}} = sup{Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})}   (93)

then

(a) Choose agent i ∈ A^{l−1} such that, if θ_{A^{l−1}} > (θ_j^{k_j^{l−1}})_{j∈A^{l−1}}, then i ∈ f_y(θ_{A^{l−1}}, θ^Q_{N\A^{l−1}}).
(b) Charge that agent the price θ_i^{k_i^{l−1}}.
(c) Ask that agent to report k̂ > k_i^{l−1} such that θ_i ∈ (θ_i^{k̂−1}, θ_i^{k̂}]. Set (k_j^l)_{j∈N} such that:

k_j^l := k̂ if j = i; k_j^l := k_j^{l−1} otherwise.   (94)

(d) Set A^l := A^{l−1} \ i.
(e) Set S^l := S^{l−1} ∪ i.
(f) Skip to stage l + 1.

3. Choose i ∈ A^{l−1} such that (k_j^l)_{j∈N} satisfies

k_j^l := k_j^{l−1} + 1 if j = i; k_j^l := k_j^{l−1} otherwise   (95)

and

(θ_j^{k_j^l})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})   (96)

4. Offer agent i the option to quit. Agent i quits iff his type is less than or equal to θ_i^{k_i^l}.
5. If agent i does not quit, set A^l := A^{l−1}.
6. If agent i quits, set A^l := A^{l−1} \ i.

7. Set S^l := S^{l−1}.
8. Go to stage l + 1.

The above algorithm defines an auction with monotonically ascending prices, where an agent has the option to quit (for a transfer of zero) whenever her price rises. i makes a payment equal to her going price at the first point where she clinches the object, i.e. when she is guaranteed to be in the final allocation. When i clinches the object, i is also asked to report her type; this may affect the payoffs of the agents that remain active, but does not affect her.^37

By inspection, it is an obviously dominant strategy for i to stay in the auction if the price is less than her type, to quit if the price is greater than or equal to her type, and to report her type truthfully at the point when she clinches the object.

It remains to show that the algorithm is well-defined for all type profiles, and that whenever it terminates, the resulting allocation agrees with f_y. In particular, we must show that in Steps (2a) and (3), we can pick an agent i satisfying the requirements of the algorithm. Step (2a) is well-defined by assumption.

Lemma 7. Under the above algorithm, for all l, (θ_i^{k_i^l})_{i∈A^l} ∈ Θ̄_{A^l}(θ^Q_{N\A^l}).

We prove Lemma 7 by induction. It holds for l = 0 by the assumption that for all i, for all θ_{-i}, i ∉ f_y(θ_i^1, θ_{-i}). Suppose it holds for l − 1. We now prove that it holds for l (assuming, of course, that the algorithm does not terminate in Step 1 of iteration l).

Suppose (θ_i^{k_i^{l−1}})_{i∈A^{l−1}} = sup{Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})}, so that Step 2 of the algorithm is triggered in iteration l. Since (θ_i^{k_i^{l−1}})_{i∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}), we know that for all j ∈ A^{l−1}:

(θ_i^{k_i^{l−1}})_{i∈A^{l−1}} ∈ closure({θ_{A^{l−1}} : ∀θ'_{A^{l−1}\j} ≥ θ_{A^{l−1}\j} : j ∉ f_y(θ_j^{k_j^{l−1}}, θ'_{A^{l−1}\j}, θ^Q_{N\A^{l−1}})})   (97)

^37 We could remove this feature by restricting attention to non-bossy allocation rules, where if changing i's type changes the allocation, then it also changes whether i is satisfied. However, the canonical results for SP do not assume non-bossiness of f_y, and we do not do so here. Alternatively, we could rule out such OSP mechanisms by instead requiring full implementation. However, the canonical monotonicity results for SP hold only for weak implementation.

The set on the RHS of Equation 97 is upward closed with respect to the product order on θ_{A^{l−1}\j}. A^l ⊆ A^{l−1} and (θ_i^{k_i^l})_{i∈A^l} = (θ_i^{k_i^{l−1}})_{i∈A^l}. Moreover, for the agent i who just clinched the object, θ_i^{k_i^l} > θ_i^{k_i^{l−1}}. Consequently, for all j ∈ A^l,

(θ_i^{k_i^l})_{i∈A^l} ∈ closure({θ_{A^l} : ∀θ'_{A^l\j} ≥ θ_{A^l\j} : j ∉ f_y(θ_j^{k_j^l}, θ'_{A^l\j}, θ^Q_{N\A^l})})   (98)

Since this holds for each set in the intersection that defines Θ̄_{A^l}(θ^Q_{N\A^l}), this entails that (θ_i^{k_i^l})_{i∈A^l} ∈ Θ̄_{A^l}(θ^Q_{N\A^l}).

Suppose (θ_i^{k_i^{l−1}})_{i∈A^{l−1}} ≠ sup{Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})}, so that we reach Step 3 of the algorithm in iteration l. Then, provided Step 3 is well-defined (i.e. we can pick an i satisfying our requirements),

(θ_j^{k_j^l})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})   (99)

A^l ⊆ A^{l−1}, and if i ∈ N \ A^l, then θ_i^Q = θ_i^{k_i^l}. Thus,

(θ_j^{k_j^l})_{j∈A^l} ∈ Θ̄_{A^l}(θ^Q_{N\A^l})   (100)

So we need only show that Step 3 is well-defined for iteration l, given that Lemma 7 holds for l − 1. This will simultaneously prove Lemma 7 and demonstrate that Step 3 is well-defined throughout. We know that

(θ_j^{k_j^{l−1}})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})   (101)

and

(θ_j^{k_j^{l−1}})_{j∈A^{l−1}} ≠ sup{Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})}   (102)

Let Â be the set of all agents j in A^{l−1} such that θ_j^{k_j^{l−1}} is less than the jth element of sup{Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})}. Now, we define (θ'_j)_{j∈Â}: for each j ∈ Â, θ'_j = (θ_j^{k_j^{l−1}} + θ_j^{k_j^{l−1}+1})/2. Now we define two disjoint open sets:

Θ^L_{A^{l−1}} = {θ_{A^{l−1}} : θ_Â < (θ'_j)_{j∈Â}}   (103)

Θ^H_{A^{l−1}} = [closure(Θ^L_{A^{l−1}})]^C   (104)

The sets Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}) ∩ Θ^L_{A^{l−1}} and Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}) ∩ Θ^H_{A^{l−1}} are disjoint nonempty sets, and are open in the subspace topology. By connectedness, there exists some θ''_{A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}) \ (Θ^L_{A^{l−1}} ∪ Θ^H_{A^{l−1}}). Fix some such θ''_{A^{l−1}}. This has at least one dimension i ∈ Â such that θ''_i = (θ_i^{k_i^{l−1}} + θ_i^{k_i^{l−1}+1})/2.

Define θ'''_{A^{l−1}} = (θ_j^{k_j^{l−1}})_{j∈A^{l−1}} ∨ θ''_{A^{l−1}}. By Lemma 6, Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}) is a join-semilattice. Thus, θ'''_{A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}). By construction,

(θ_j^{k_j^{l−1}})_{j∈Â} ≤ θ'''_Â < (θ_j^{k_j^{l−1}+1})_{j∈Â}   (105)

θ'''_{A^{l−1}\Â} = (θ_j^{k_j^{l−1}})_{j∈(A^{l−1}\Â)}   (106)

Moreover, θ'''_{A^{l−1}} has at least one dimension i ∈ Â such that θ'''_i = (θ_i^{k_i^{l−1}} + θ_i^{k_i^{l−1}+1})/2. Since f_y admits a finite partition and Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}) is closed, it follows that for

k_j^l := k_j^{l−1} + 1 if j = i; k_j^l := k_j^{l−1} otherwise   (107)

we have

(θ_j^{k_j^l})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})   (108)

This proves Lemma 7.

Now we show that whenever the algorithm terminates, it agrees with f_y. By the assumption that for all i, for all θ_{-i}, i ∉ f_y(θ_i^1, θ_{-i}), it follows that all the agents that quit at the starting price θ_i^1 are never satisfied (i.e. i ∉ f_y(θ*) for the true type profile θ*). By construction, after any iteration l, the bidders that remain active A^l have true types (θ*_i)_{i∈A^l} that strictly exceed the going prices (θ_i^{k_i^l})_{i∈A^l}. The bidders that are inactive have their types recorded (as accurately as we need given the finite partition) in the vector θ^Q_{N\A^l}.

Suppose Step 1 is not activated and Step 2 is activated in iteration l. Then, based on the information revealed up to iteration l − 1, we know that the chosen bidder i is such that i ∈ f_y(θ*). Thus, for all l, S^l ⊆ f_y(θ*).

Suppose neither Step 1 nor Step 2 is activated in iteration l. By Lemma 7, (θ_j^{k_j^{l−1}})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}). Consider the chosen bidder i whose price is incremented. The new price vector satisfies

k_j^l := k_j^{l−1} + 1 if j = i; k_j^l := k_j^{l−1} otherwise   (109)

and

(θ_j^{k_j^l})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}})   (110)

Define (θ'_j)_{j∈A^{l−1}} ≡ .5(θ_j^{k_j^{l−1}})_{j∈A^{l−1}} + .5(θ_j^{k_j^l})_{j∈A^{l−1}}. By (θ_j^{k_j^l})_{j∈A^{l−1}} ∈ Θ̄_{A^{l−1}}(θ^Q_{N\A^{l−1}}),

(θ_j^{k_j^l})_{j∈A^{l−1}} ∈ closure({θ_{A^{l−1}} : ∀θ'_{A^{l−1}\i} ≥ θ_{A^{l−1}\i} : i ∉ f_y(θ_i, θ'_{A^{l−1}\i}, θ_{N\A^{l−1}})})   (111)

The set on the RHS is (by f_y monotone) downward-closed with respect to θ_i. Thus,

(θ'_j)_{j∈A^{l−1}} ∈ closure({θ_{A^{l−1}} : ∀θ'_{A^{l−1}\i} ≥ θ_{A^{l−1}\i} : i ∉ f_y(θ_i, θ'_{A^{l−1}\i}, θ_{N\A^{l−1}})})   (112)

Thus, we can choose (θ''_j)_{j∈A^{l−1}} ∈ {θ_{A^{l−1}} : ∀θ'_{A^{l−1}\i} ≥ θ_{A^{l−1}\i} : i ∉ f_y(θ_i, θ'_{A^{l−1}\i}, θ^Q_{N\A^{l−1}})} such that ‖(θ''_j)_{j∈A^{l−1}} − (θ'_j)_{j∈A^{l−1}}‖ < ε, where ε is strictly less than half of the length of the smallest interval in the finite partition.

{θ_{A^{l−1}} : ∀θ'_{A^{l−1}\i} ≥ θ_{A^{l−1}\i} : i ∉ f_y(θ_i, θ'_{A^{l−1}\i}, θ^Q_{N\A^{l−1}})} is upward closed with respect to the product order on θ_{A^{l−1}\i}. Thus, from the properties of (θ''_j)_{j∈A^{l−1}} and the assumption that f_y admits a finite partition, we conclude that, for all θ_i ∈ (θ_i^{k_i^{l−1}}, θ_i^{k_i^l}], for all θ_{A^{l−1}\i} > (θ_j^{k_j^{l−1}})_{j∈A^{l−1}\i}, i ∉ f_y(θ_i, θ_{A^{l−1}\i}, θ^Q_{N\A^{l−1}}).

Thus, whenever some bidder i's going price rises in iteration l, the types who quit are those that, based on the information revealed so far, are required by the allocation rule not to be satisfied. For all l, for the true type profile θ*, (A^l)^C ∩ (S^l)^C ⊆ f_y(θ*)^C.

Gathering results: for all l, S^l ⊆ f_y(θ*) and (A^l)^C ∩ (S^l)^C ⊆ f_y(θ*)^C. Thus, whenever A^l = ∅, f_y(θ*) = S^l. This completes the proof of the "if" direction.

Now the "only if" direction. G OSP-implements (f_y, f_t), so f_y is SP-implementable. Thus, f_y is monotone. For all i, the lowest type is never satisfied, and always has a zero transfer. Thus, by Theorem 3, we can restrict our attention to monotone price mechanisms that satisfy the "Either" clause in Definition 18 — i.e. every agent faces an ascending price associated with being satisfied, and a fixed outside option (call this an ascending price mechanism, or APM).

Suppose we have some G that OSP-implements (f_y, f_t). Moreover, suppose G is pruned, so that G is an APM. Take any A ⊆ N and θ_{N\A}. We now show that Θ̄_A(θ_{N\A}) is connected.

Let p : [0, 1] → Θ_A be the price path under G faced by agents in A, when the type profile for the agents in A is sup{Θ̄_A(θ_{N\A})} and the type profile for the agents in N \ A is θ_{N\A}. Let z be the terminal history that results from that type profile, and let l̄ be the number of elements of that sequence. Let h^1, h^2, ..., h^{l̄} be the subhistories of z. (If z is infinitely long, instead let l̄ be the index of some finite history such that all agents have only singleton action sets afterwards.) Formally, p is defined as follows: Start with p(0) equal to the lowest types in Θ_A. For each subhistory h^m, let p(m/l̄) be equal to the prices faced by agents in A at h^m. For all points r ∈ ((m−1)/l̄, m/l̄), p(r) = (1 − β)p((m−1)/l̄) + βp(m/l̄), for β = (r − (m−1)/l̄)/(1/l̄). By inspection, p is a continuous function. Moreover, since at any point when an agent quits under G, i ∉ f_y(θ) based on the information revealed so far, we have p(r) ∈ Θ̄_A(θ_{N\A}) for all r. Thus, p is a path in Θ̄_A(θ_{N\A}) ending at sup{Θ̄_A(θ_{N\A})}. By Lemma 6, Θ̄_A(θ_{N\A}) is a join semi-lattice. We can generate a path p' from any θ_A ∈ Θ̄_A(θ_{N\A}) to sup{Θ̄_A(θ_{N\A})} by defining p'(r) ≡ θ_A ∨ p(r). Thus, Θ̄_A(θ_{N\A}) is path-connected, which implies that it is connected.

We now show that there exists i ∈ A such that, if θ_A > sup{Θ̄_A(θ_{N\A})}, then i ∈ f_y(θ_A, θ_{N\A}). If we cannot choose some θ_A > sup{Θ̄_A(θ_{N\A})}, then this holds vacuously. Thus, fix some θ'_A > sup{Θ̄_A(θ_{N\A})}. Let z be the terminal history in G when the type profile is (θ'_A, θ_{N\A}).

Suppose there does not exist i ∈ A such that, for all θ_A > sup{Θ̄_A(θ_{N\A})}, i ∈ f_y(θ_A, θ_{N\A}). By definition of sup{Θ̄_A(θ_{N\A})}, there also does not exist i ∈ A such that, for all θ_A > sup{Θ̄_A(θ_{N\A})}, i ∉ f_y(θ_A, θ_{N\A}). Thus, the price path for agents in A along history z (defined as before) is not such that p(r) ≤ sup{Θ̄_A(θ_{N\A})} for all r ∈ [0, 1]. Thus, there must be a first point along z where the price path is not in Θ̄_A(θ_{N\A}). Consider the agent i whose price was incremented at that point. For all j ∈ A, the relevant set in the intersection that defines Θ̄_A(θ_{N\A}) is upward-closed with respect to the product order on A \ j. Thus, when the price first leaves Θ̄_A(θ_{N\A}) at some subhistory h^t, it must be that

p(t/l̄) ∉ closure({θ_A : ∀θ'_{A\i} ≥ θ_{A\i} : i ∉ f_y(θ_i, θ'_{A\i}, θ_{N\A})})   (113)

The complement of the set on the RHS of Equation 113 is open. Thus, for some ε > 0, an open ε-ball around p(t/l̄) is a subset of the complement of the RHS. Consequently, we can choose some θ''_i strictly greater than i's old price and strictly less than i's new going price, and some θ''_{A\i} strictly greater (in the product order) than the going prices for A \ i, such that i ∈ f_y(θ''_i, θ''_{A\i}, θ_{N\A}). Since G is an APM, the actions of types (θ''_i, θ''_{A\i}) and the actions of types θ'_A are indistinguishable prior to that point. Thus, G does not result in the prescribed outcome for type profile (θ''_i, θ''_{A\i}, θ_{N\A}), a contradiction. This completes the proof of the "only if" direction.

B Alternative Empirical Specifications

Here we report alternative empirical specifications for the experiment. Tables 5 and 6 are identical to Table 2, except that they compute p-values and standard errors using alternative methods.

A natural measure of errors would be to take the sum, for k = 1, 2, 3, 4, of the absolute difference between the kth highest bid and the kth highest value. However, we do not observe the highest bid under AC, and we often do not observe the highest bid under AC+X. We could instead take the sum, for k = 2, 3, 4, of the absolute difference between the kth highest bid and the kth highest value, averaged as before in five-round blocks. Table 7 reports the results.
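The alternative error measure just described can be sketched as follows. This is a minimal illustration with made-up bids and values, not the experiment's data or analysis code; function names are mine:

```python
def auction_error(bids, values, ks=(2, 3, 4)):
    """Sum |k-th highest bid - k-th highest value| over the chosen ranks k."""
    b = sorted(bids, reverse=True)    # bids ranked highest to lowest
    v = sorted(values, reverse=True)  # values ranked highest to lowest
    return sum(abs(b[k - 1] - v[k - 1]) for k in ks)

def block_means(errors_by_round, block=5):
    """Average per-round errors within consecutive five-round blocks."""
    return [sum(errors_by_round[i:i + block]) / block
            for i in range(0, len(errors_by_round), block)]

# Hypothetical auction: ranks 2-4 contribute |90-92| + |70-71| + |50-40| = 13.
err = auction_error([100, 90, 70, 50], [95, 92, 71, 40])
```

Dropping k = 1 mirrors the motivation above: the highest bid is unobserved in the clock formats, so only the lower ranks are comparable across treatments.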

Table 5: mean(|2nd bid − 2nd value|), p-values calculated using the Wilcoxon rank-sum test

[Columns: Format, Rounds, SP, OSP, p-value. The numeric entries are not legible in this transcription.]

This is the same as Table 2, except that the p-values are calculated using the Wilcoxon rank-sum test.

Table 6: mean(|2nd bid − 2nd value|), p-values and standard errors calculated using clustered regression (clustered by groups)

[Columns: Format, Rounds, SP, OSP, p-value. The numeric entries are not legible in this transcription.]

This is the same as Table 2, except that the p-values and standard errors are calculated by running a single regression (with appropriate indicator variables) and clustering by groups.

Table 7: mean(sum(|kth bid − kth value|)), for k = 2, 3, 4

[Columns: Format, Rounds, SP, OSP, p-value. The numeric entries are not legible in this transcription.]

For each auction, we sum the absolute differences between the kth bid and the kth value, for k = 2, 3, 4. We then take the mean of this over each 5-round block. We then compute standard errors counting each group's 5-round mean as a single observation. (18 observations per cell.) p-values are computed using a two-sample t-test, allowing for unequal variances.

Another measure of errors would be to take the sum of the absolute difference between each bidder's bid and that bidder's value, dropping all highest bidders for symmetry. Table 8 reports the results.

Table 9 reports the results of Table 3, except that the p-values are calculated using the Wilcoxon rank-sum test.

29.0% of preference lists are incorrect under SP-RSD. 2.6% of choices are incorrect under OSP-RSD. However, this is not a fair comparison; preference lists mechanically allow us to spot more errors than single choices. To compare like with like, we compute the proportion of incorrect choices we would have observed if subjects had played OSP-RSD as though they were implementing the preference lists submitted for SP-RSD. This is a cautious measure; it counts errors under SP-RSD only if they would have altered the outcome under OSP-RSD. Table 10 reports the results.

Table 8: mean(sum(|i's bid − i's value|)), dropping highest bidders

[Columns: Format, Rounds, SP, OSP, p-value. The numeric entries are not legible in this transcription.]

For each auction, we sum the absolute differences between each bidder's bid and their value, dropping the highest bidder. We then take the mean of this over each 5-round block. We then compute standard errors counting each group's 5-round mean as a single observation. (18 observations per cell.) p-values are computed using a two-sample t-test, allowing for unequal variances.

Table 9: Proportion of serial dictatorships not ending in the dominant-strategy outcome, p-values calculated using the Wilcoxon rank-sum test

Rounds 1–5: OSP 7.8%, p = .0001
Rounds 6–10: OSP 6.7%, p = .0010
[The SP column entries are not legible in this transcription.]

This is the same as Table 3, except that the p-values are calculated using the Wilcoxon rank-sum test.

C Experiment Instructions

WELCOME

This is a study about decision-making. Money earned will be paid to you in cash at the end of the experiment.

This study is about 90 minutes long. We will pay you $5 for showing up, and $15 for completing the experiment. Additionally, you will be paid in cash your earnings from the experiment. If you make choices in this experiment that lose money, we will deduct this from your total payment. However, your total payment (including your show-up payment and completion payment) will always be at least $20.

You have been randomly assigned into groups of 4. This experiment involves 3 games played for real money. You will play each game 10 times with the other people in your group. We will give you instructions about each game just before you begin to play it. Your choices in one game will not affect what happens in other games.

There is no deception in this experiment. Every game will be exactly as specified in the instructions. Anything else would violate the IRB protocol under which we run this study. (IRB Protocol 34876)

Please do not use electronic devices or talk with other volunteers during this study. If we do find you using electronic devices or talking with other volunteers, the rules of the study require us to deduct $20 from your earnings.

If you have questions at any point, please raise your hand and we will answer your questions privately.

GAME 1

In this game, you will bid in an auction for a money prize. The prize may have a different dollar value for each person in your group. You will play this game for 10 rounds. All dollar amounts in this game are in 25 cent increments.

At the start of each round, we display your value for this round's prize. If you win the prize, you will earn the value of the prize, minus any payments from the auction. Your value for the prize will be calculated as follows:

1. For each group, we will draw a common value, which will be between $10.00 and $—. Every number between $10.00 and $— is equally likely to be drawn.
2. For each person, we will also draw a private adjustment, which will be between $0.00 and $20.00. Every number between $0.00 and $20.00 is equally likely to be drawn.

In each round, your value for the prize is equal to the common value plus your private adjustment. At the start of each round, you will learn your total value for the prize, but not the common value or the private adjustment. This means that each person in your group may have a different value for the prize. However, when you have a high value, it is more likely that other people in your group have a high value.

The auction proceeds as follows: First, you will learn your value for the prize. Then you can choose a bid in the auction. Each person in your group will submit their bids privately and at the same time. You do this by typing your bid into a text box and clicking "confirm bid". You will have 90 seconds to make your decision, and can revise your bid as many times as you like. At the end of 90 seconds, your final bid will be the one that counts.

All bids must be between $0.00 and $150.00, and in 25 cent increments.

The highest bidder will win the prize, and make a payment equal to the second-highest bid. This means that we will add to her earnings her value for the prize, and subtract from her earnings the second-highest bid. All other bidders' earnings will not change.

At the end of each auction, we will show you the bids, ranked from highest to lowest, and the winning bidder's profits.

If there is a tie for the highest bidder, no bidder will win the object.
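The payoff rule above can be sketched as follows. This is an illustrative implementation with hypothetical bids, not the experiment's software; names are mine:

```python
def second_price_outcome(bids, values):
    """Second-price sealed-bid rule: the highest bidder wins and pays the
    second-highest bid; a tie for the highest bid means no one wins.
    bids, values: dicts keyed by bidder. Returns (winner, winner's profit)."""
    ranked = sorted(bids.values(), reverse=True)
    top, second = ranked[0], ranked[1]
    if ranked.count(top) > 1:          # tie for the highest bidder: no winner
        return None, 0.0
    winner = max(bids, key=bids.get)
    return winner, values[winner] - second

# Hypothetical round: bidder 3 bids highest, pays the second-highest bid 42.25,
# for a profit of 51.00 - 42.25 = 8.75.
w, profit = second_price_outcome(
    bids={"1": 42.25, "2": 38.50, "3": 51.00, "4": 29.75},
    values={"1": 42.25, "2": 38.50, "3": 51.00, "4": 29.75},
)
```

Note that with this rule a bidder's payment never depends on her own bid, which is why bidding one's value is a (weakly) dominant strategy.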

GAME 2

In this game, you will bid in an auction for a money prize. You will play this game for 10 rounds. Your value for the prize will be generated as before. However, each round, we will also draw a new number, X, for each group. The rules of the auction are different, as follows:

All bidders will submit their bids privately and at once. However, the highest bidder will win the prize if and only if their bid exceeds the second-highest bid by more than X. If the highest bidder wins the prize, she will make a payment equal to the second-highest bid plus X. This means that we will add to her earnings her value for the prize, and subtract from her earnings the second-highest bid plus X. All other bidders' earnings will not change.

If the highest bid does not exceed the second-highest bid by more than X, then no bidder will win the prize. In that case, no bidder's earnings will change.

X will be between $0.00 and $3.00, with every 25 cent increment equally likely to be drawn. You will be told your value for the prize at the start of each round, but will not be told X. At the end of each round, we will tell you the value of X.
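The modified rule can be sketched as follows, again with hypothetical bids rather than the experiment's software:

```python
def plus_x_outcome(bids, values, x):
    """+X rule: the highest bidder wins only if her bid exceeds the
    second-highest bid by more than X, and then pays second-highest + X."""
    ranked = sorted(bids, key=bids.get, reverse=True)   # bidders by bid
    top, runner_up = ranked[0], ranked[1]
    if bids[top] - bids[runner_up] <= x:   # margin not above X: no winner
        return None, 0.0
    return top, values[top] - (bids[runner_up] + x)

# Hypothetical round with X = 2.00: a 51.00 top bid beats a 47.50 second bid
# (margin 3.50 > 2.00), and the winner pays 47.50 + 2.00 = 49.50.
w, profit = plus_x_outcome({"1": 51.00, "2": 47.50, "3": 30.00},
                           {"1": 51.00, "2": 47.50, "3": 30.00}, x=2.00)
```

As in the baseline game, the winner's payment does not depend on her own bid, so bidding one's value remains a dominant strategy even though X is unknown.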

GAME 3

You will play this game for 10 rounds. In each round of this game, there are four prizes, labeled A, B, C, and D. Prizes will be worth between $0.00 and $1.25. For each prize, its value will be the same for all the players in your group. At the start of each round, you will learn the value of each prize. You will also learn your priority score, which is a random number between 1 and 10. Every whole number between 1 and 10 is equally likely to be chosen.

The game proceeds as follows: We will ask you to list the prizes, in any order of your choice. All players will submit their lists privately and at the same time. After all the lists have been submitted, we will assign prizes using the following rule:

1. The player with the highest priority score will be assigned the top prize on his list.
2. The player with the second-highest priority score will be assigned the top prize on his list, among the prizes that remain.
3. The player with the third-highest priority score will be assigned the top prize on his list, among the prizes that remain.
4. The player with the lowest priority score will be assigned whatever prize remains.

If two players have the same priority score, we will break the tie randomly.

You will have 90 seconds to form your list. You do this by typing a number, from 1 to 4, next to each prize, and then clicking the button that says "Confirm Choices". Each prize must be assigned a different number, from 1 (top) to 4 (bottom). Your choices will not count unless you click the button that says "Confirm Choices".

If you do not produce a list by the end of 90 seconds, we will assign prizes as though you reported the list in order A-B-C-D.

At the end of each round, we will add to your earnings the value of the prize you were assigned.
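The assignment rule above can be sketched as follows. The player names and the tie-breaking seed are mine, for illustration only:

```python
import random

def assign_prizes(priorities, lists, prizes=("A", "B", "C", "D"), seed=0):
    """Serial dictatorship from submitted lists: players in order of
    priority score (high to low) each get their top remaining prize."""
    rng = random.Random(seed)   # equal priority scores are tie-broken randomly
    order = sorted(priorities, key=lambda p: (-priorities[p], rng.random()))
    remaining, assignment = list(prizes), {}
    for player in order:
        pick = next(pr for pr in lists[player] if pr in remaining)
        assignment[player] = pick
        remaining.remove(pick)
    return assignment

# Hypothetical round: everyone ranks D > C > B > A; p1 (score 9) gets D,
# p3 (score 7) gets C, and the tied players p2/p4 split B and A.
out = assign_prizes(
    priorities={"p1": 9, "p2": 4, "p3": 7, "p4": 4},
    lists={p: ["D", "C", "B", "A"] for p in ("p1", "p2", "p3", "p4")},
)
```

Listing the prizes in one's true order of value is a dominant strategy here, since a player's list only ever selects among whatever prizes her turn leaves available.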

WELCOME

This is a study about decision-making. Money earned will be paid to you in cash at the end of the experiment.

This study is about 90 minutes long. We will pay you $5 for showing up, and $15 for completing the experiment. Additionally, you will be paid in cash your earnings from the experiment. If you make choices in this experiment that lose money, we will deduct this from your total payment. However, your total payment (including your show-up payment and completion payment) will always be at least $20.

You have been randomly assigned into groups of 4. This experiment involves 3 games played for real money. You will play each game 10 times with the other people in your group. We will give you instructions about each game just before you begin to play it. Your choices in one game will not affect what happens in other games.

There is no deception in this experiment. Every game will be exactly as specified in the instructions. Anything else would violate the IRB protocol under which we run this study. (IRB Protocol 34876)

Please do not use electronic devices or talk with other volunteers during this study. If we do find you using electronic devices or talking with other volunteers, the rules of the study require us to deduct $20 from your earnings.

If you have questions at any point, please raise your hand and we will answer your questions privately.

GAME 1

In this game, you will bid in an auction for a money prize. The prize may have a different dollar value for each person in your group. You will play this game for 10 rounds. All dollar amounts in this game are in 25 cent increments.

At the start of each round, we display your value for this round's prize. If you win the prize, you will earn the value of the prize, minus any payments from the auction. Your value for the prize will be calculated as follows:

1. For each group, we will draw a common value, which will be between $10.00 and $—. Every number between $10.00 and $— is equally likely to be drawn.
2. For each person, we will also draw a private adjustment, which will be between $0.00 and $20.00. Every number between $0.00 and $20.00 is equally likely to be drawn.

In each round, your value for the prize is equal to the common value plus your private adjustment. At the start of each round, you will learn your total value for the prize, but not the common value or the private adjustment. This means that each person in your group may have a different value for the prize. However, when you have a high value, it is more likely that other people in your group have a high value.

The auction proceeds as follows: First, you will learn your value for the prize. Then, the auction will start. We will display a price to everyone in your group, that starts low and counts upwards in 25 cent increments, up to a maximum of $—. At any point, you can choose to leave the auction, by clicking the button that says "Stop Bidding".

When there is only one bidder left in the auction, that bidder will win the prize at the current price. This means that we will add to her earnings her value for the prize, and subtract from her earnings the current price. All other bidders' earnings will not change.

At the end of each auction, we will show you the prices where bidders stopped, and the winning bidder's profits. If there is a tie for the highest bidder, no bidder will win the object.
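The winner-determination rule for Game 1 can be sketched as follows. Treating each bidder's plan as a single pre-committed stop price is a simplification of the live clock, and all numbers in the example are hypothetical:

```python
def clock_auction(stop_prices, values):
    """Game 1 outcome rule: bidders leave at their stop prices; the last
    bidder remaining wins at the current price, i.e. the highest price at
    which another bidder stopped. A tie for highest bidder means no one
    wins. Returns the change in earnings for each bidder."""
    earnings = [0.0] * len(stop_prices)
    top = max(stop_prices)
    leaders = [i for i, p in enumerate(stop_prices) if p == top]
    if len(leaders) > 1:          # tie for the highest bidder: no winner
        return earnings
    winner = leaders[0]
    price = max(p for i, p in enumerate(stop_prices) if i != winner)
    earnings[winner] = values[winner] - price
    return earnings

# Hypothetical round, all amounts in 25-cent increments
print(clock_auction([30.00, 41.25, 55.50, 47.75],
                    [33.00, 45.25, 60.50, 50.00]))  # -> [0.0, 0.0, 12.75, 0.0]
```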

GAME 2

In this game, you will bid in an auction for a money prize. You will play this game for 10 rounds. Your value for the prize will be generated as before. However, each round, we will also draw a new number, X, for each group.

The rules of the auction are different, as follows: The price will count up from a low value, and you can choose to leave the auction at any point, by clicking the button that says Stop Bidding. When there is only one bidder left in the auction, the price will continue to rise for another X dollars, and then freeze. If the last bidder stays in the auction until the price freezes, then she will win the prize at the final price. This means that we will add to her earnings her value for the prize, and subtract from her earnings the final price. All other bidders' earnings will not change. If the last bidder stops bidding before the price freezes, then no bidder will win the prize. In that case, no bidder's earnings will change.

X will be between $0.00 and $3.00, with every 25 cent increment equally likely to be drawn. You will be told your value for the prize at the start of each round, but will not be told X. At the end of each round, we will tell you the value of X.
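Game 2's payoff rule differs only in the X-dollar continuation after the last rival exits. A minimal sketch, again treating each bidder's plan as a single stop price (an assumption: in the actual game the last bidder decides in real time whether to stay until the freeze, and does not know X), with hypothetical numbers:

```python
def plus_x_auction(stop_prices, values, x):
    """Game 2 outcome rule: when one bidder remains at price p, the price
    rises to p + x and freezes. The last bidder wins at p + x only if her
    stop price is at least p + x; if she stops before the freeze, no one
    wins. Returns the change in earnings for each bidder."""
    earnings = [0.0] * len(stop_prices)
    top = max(stop_prices)
    leaders = [i for i, p in enumerate(stop_prices) if p == top]
    if len(leaders) > 1:          # tie: no last bidder, no winner
        return earnings
    winner = leaders[0]
    p = max(q for i, q in enumerate(stop_prices) if i != winner)
    final_price = p + x
    if stop_prices[winner] >= final_price:   # stays until the price freezes
        earnings[winner] = values[winner] - final_price
    return earnings

# Same hypothetical bidders; with a small X the last bidder stays and wins
print(plus_x_auction([30.00, 47.75, 55.50], [33.00, 50.00, 60.50], 2.00))
```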

GAME 3

You will play this game for 10 rounds. In each round of this game, there are four prizes, labeled A, B, C, and D. Prizes will be worth between $0.00 and $1.25. For each prize, its value will be the same for all the players in your group. At the start of each round, you will learn the value of each prize. You will also learn your priority score, which is a random number between 1 and 10. Every whole number between 1 and 10 is equally likely to be chosen.

The game proceeds as follows:

1. The player with the highest priority score will pick one prize.
2. The player with the second-highest priority score will pick one of the prizes that remains.
3. The player with the third-highest priority score will pick one of the prizes that remains.
4. The player with the lowest priority score will be assigned whatever prize remains.

If two players have the same priority score, we will break the tie randomly. When it is your turn to pick, you will have 30 seconds to make your choice. You do this by selecting a prize and then clicking the button that says Confirm Choice. Your choices will not count unless you click the button that says Confirm Choice. If you do not make a choice by the end of 30 seconds, we will assign prizes as though you picked whichever prize is earliest in the alphabet. At the end of each round, we will add to your earnings the value of the prize you picked.
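The picking procedure in Game 3 is a serial dictatorship. A sketch of the assignment rule, with hypothetical scores and rankings; player-index order stands in for the random tie-break, and the 30-second timeout default (the alphabetically earliest remaining prize) is omitted for brevity:

```python
def assign_prizes(priority_scores, preferences, prizes=("A", "B", "C", "D")):
    """Game 3 assignment: players pick in descending order of priority
    score (ties broken by player index here, as a stand-in for the random
    tie-break); each takes her most-preferred remaining prize, and the
    last player is assigned whatever remains. preferences[i] ranks the
    prizes best-first for player i."""
    remaining = list(prizes)
    assignment = {}
    order = sorted(range(len(priority_scores)),
                   key=lambda i: -priority_scores[i])
    for turn, i in enumerate(order):
        if turn == len(order) - 1:
            assignment[i] = remaining[0]   # lowest priority: no choice
        else:
            pick = next(p for p in preferences[i] if p in remaining)
            remaining.remove(pick)
            assignment[i] = pick
    return assignment

# Hypothetical round: player 0 picks first, then 2, then 1; 3 gets the rest
print(assign_prizes([9, 4, 7, 2],
                    [["B", "A", "C", "D"],
                     ["B", "C", "A", "D"],
                     ["A", "B", "D", "C"],
                     ["D", "A", "B", "C"]]))  # -> {0: 'B', 2: 'A', 1: 'C', 3: 'D'}
```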


More information

Optimal Allocation with Costly Verification 1

Optimal Allocation with Costly Verification 1 Optmal Allocaton wth Costly Verfcaton 1 Elchanan Ben-Porath 2 Edde Dekel 3 Barton L. Lpman 4 Frst Draft August 2012 1 We thank Rcky Vohra and numerous semnar audences for helpful comments. We also thank

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

The Number of Ways to Write n as a Sum of ` Regular Figurate Numbers

The Number of Ways to Write n as a Sum of ` Regular Figurate Numbers Syracuse Unversty SURFACE Syracuse Unversty Honors Program Capstone Projects Syracuse Unversty Honors Program Capstone Projects Sprng 5-1-01 The Number of Ways to Wrte n as a Sum of ` Regular Fgurate Numbers

More information

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 1, July 2013 ISSN: 2277-375 Constructon of Trend Free Run Orders for Orthogonal rrays Usng Codes bstract: Sometmes when the expermental runs are carred out n a tme order sequence, the response can depend on the run

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

arxiv: v1 [cs.gt] 14 Mar 2019

arxiv: v1 [cs.gt] 14 Mar 2019 Stable Roommates wth Narcssstc, Sngle-Peaked, and Sngle-Crossng Preferences Robert Bredereck 1, Jehua Chen 2, Ugo Paavo Fnnendahl 1, and Rolf Nedermeer 1 arxv:1903.05975v1 [cs.gt] 14 Mar 2019 1 TU Berln,

More information

Infinitely Split Nash Equilibrium Problems in Repeated Games

Infinitely Split Nash Equilibrium Problems in Repeated Games Infntely Splt ash Equlbrum Problems n Repeated Games Jnlu L Department of Mathematcs Shawnee State Unversty Portsmouth, Oho 4566 USA Abstract In ths paper, we ntroduce the concept of nfntely splt ash equlbrum

More information