Finite State Dynamic Games with Asymmetric Information: A Framework for Applied Work

By Chaim Fershtman and Ariel Pakes

Working Paper No.
April, 2005

The Foerder Institute for Economic Research and The Sackler Institute of Economic Studies

Finite State Dynamic Games with Asymmetric Information: A Framework for Applied Work

Chaim Fershtman and Ariel Pakes

March

Abstract

This paper presents a framework for the applied analysis of dynamic games with asymmetric information. The framework consists of a definition of equilibrium and an algorithm to compute it. Our definition of Applied Markov Perfect equilibrium is an extension of the definition of Markov Perfect equilibrium to games with asymmetric information; an extension chosen for its usefulness to applied research. Each agent conditions its strategy on the payoff or informationally relevant variables that are observed by that particular agent. The strategies are optimal given the beliefs on the evolution of these observed variables, and the rules governing the evolution of the observables are consistent with the equilibrium strategies. We then provide a simple algorithm for computing this equilibrium. The algorithm is easy to program and does not require computation of posterior distributions, explicit integration over possible future states, or information from all possible points in the state space. For specificity, we present our results in the context of a dynamic oligopoly game with collusion in which the outcomes of firms' investments are random and only observed by the investing agent. We then use this example to illustrate the computational properties of the algorithm.

We would like to thank Susan Athey, Kyle Bagwell, Eddie Dekel, Ulrich Doraszelski, Drew Fudenberg, Ady Pausner and Jean Tirole for helpful discussions. We also thank Dmitri Byzalov for superb research assistance, and the owners of coffee shops in Boston, Chicago, Florence, London, New York, Positano and Tel Aviv whose hospitality enabled us to write this paper.

1 Introduction.

Dynamic games are games in which either the payoff function or the set of feasible actions change endogenously over time. These games have become central to the analysis of the dynamics of imperfectly competitive markets in Industrial Organization. The equilibrium concept used in the vast majority of applications of dynamic games is Markov Perfect Equilibrium (MPE; see Maskin and Tirole, 1988a and b, and 2001). In games with complete information MPE restricts strategies to be functions of payoff relevant variables and ensures that these variables evolve as a Markov process 1. This is useful to applied researchers as it both directs them to a particular set of state variables and provides a framework for analyzing transition probabilities; see, e.g., Ericson and Pakes (1995), who provide the framework on which our model builds.

Empirically relevant specifications for the primitives of these games typically lead to models with no analytic solution. As a result there has been significant interest in the numerical analysis of dynamic games 2. Though the numerical papers have extended the Markov Perfect framework in many ways, none of them has allowed for asymmetric information in the form of serially correlated state variables that are observed by one agent but not another. This paper presents a framework for the applied analysis of dynamic games with asymmetric information of this form.

We consider a class of dynamic games in which there are $n_t$ active players in each period, each characterized by a vector of state variables. Some of these state variables are publicly observable while others are private information. State variables evolve over time with the outcome of the players' investment process. In each period players' strategies consist of a set of continuous controls (e.g. investment, output choice, prices, etc.) and a set of discrete controls (these could represent entry, exit, signal sending, etc.). Players' expected payoffs in each period depend on the players' state variables in that period and their choice of controls. We let the choice of continuous controls affect the probability distribution of state variables that take values on a discrete space, thus enabling us to keep the state space finite and enabling an equilibrium that can be computed.

1 Actually, following Seim (2004), recent applied work allows for private information in the form of a set of independent disturbances which impact the current profits of a single agent, but not its competitors; for a review see Ackerberg et al., forthcoming.

2 See, e.g., Benkard (2003), Cheong and Judd (2003), de Roos (2000a), Pakes and McGuire (1994), Markovich (2000), Besanko and Doraszelski (2004), Gowrisankaran (1999), and Gowrisankaran and Town (1997).

We focus on games in which each agent conditions its strategy both on the current values of its payoff relevant variables and on the informationally relevant variables that help predict the actions (or states) of the agent's competitors. Our equilibrium notion, Applied Markov Perfect Equilibrium, requires that the strategies are optimal given each player's beliefs on the evolution of the variables it observes, and that the rules governing the evolution of the observables are consistent with the equilibrium strategies. As will be clarified in what follows, this notion has the applied advantages of the notion of Markov Perfect Equilibrium in complete information games; in particular it identifies the state variables that the researcher ought to condition on when analyzing strategies or computing equilibrium. Moreover it allows the applied researcher to do this without the complexities involved in computing Bayesian posteriors.

We provide an algorithm for computing this equilibrium. The algorithm makes use of the fact that in games of this form there are simple sufficient statistics such that if agents maximize with respect to them, then their actions will be consistent with an Applied Markov Perfect equilibrium. These sufficient statistics are the expected discounted values of future net cash flows given the possible outcomes of the agents' choice of controls. The required expectations are conditioned on the players' information sets, and these, in turn, define the possible states of the game.

If one knew the empirical distribution for the transitions of the information sets one could compute the needed sufficient statistics directly from that distribution and the primitives of the problem. As a result any agent playing the game who had a history of outcomes at its disposal could calculate these statistics by computing averages. So players with access to such histories can determine their optimal behavior from a relatively simple set of calculations; in particular the players never have to compute posterior beliefs for the states of their competitors. We show that when this empirical distribution is unknown there is a reinforcement learning (or stochastic approximation) algorithm which enables one to compute the equilibrium by a procedure which only requires updating averages (in particular it never requires one to compute posterior distributions or to integrate out over possible future values) 3.

Thus either the players themselves, or a computer algorithm, could use random outcomes and a set of simple calculations to compute the sufficient statistics which determine equilibrium strategies. That is, the reinforcement learning algorithm converts the seemingly intractable problem of computing a Perfect Bayesian equilibrium into the relatively simple problem of updating averages (so simple that we have computed our examples on a vintage 2000 laptop computer).

We only expect the algorithm to provide the correct sufficient statistics as the number of iterations increases without bound. However, the paper provides a stopping rule which checks whether the algorithm has produced sufficient statistics within any given accuracy of their equilibrium values, and hence can be used to determine when an adequate approximation has been found 4. The test has the same computational advantages as the algorithm itself; that is, it never requires computation of posteriors or integration over possible future states. There is an alternative reason for interest in the algorithm, as one can view its rules as a description of how the players learn the statistics needed to choose their policies, and justify the output of the algorithm in that way.

For specificity, we present our analysis in the context of a simple dynamic oligopoly game with collusion in which firms can sign binding contracts on the quantities marketed (or otherwise enforce collusive agreements) 5. Output is determined cooperatively, but investments (including those required for entry and exit) are made non-cooperatively. Firms invest to improve their cost positions, and the stochastic outcomes of a firm's investments are not observed by the firm's competitors. Consequently the firms' types evolve over time and the transitions are unobserved by their competitors.

3 The stochastic approximation literature dates back to the classic paper of Robbins and Monro (1951), and has been used extensively for calculating solutions to single agent dynamic programming problems (see Bertsekas and Tsitsiklis, 1996, and the literature they cite). Pakes and McGuire (2001) show that it has significant computational advantages when applied to full information dynamic games, but as we will show the advantages in using it to compute the solution to asymmetric information dynamic games are much larger.

4 Thus formally we compute an AMPE.

5 Though this is the example that motivated us, there are many other issues that can be studied with our framework. One, that is closely related, is an industry subject to regulation. Firms invest over time in their capabilities but prices and quantities are determined by a regulatory committee and can be changed only after that committee performs a cost analysis. Either the regulated firms, or the regulator, could initiate the (costly) regulatory review.

As a result, regardless of whether there was complete revelation of information in one period, behavior in subsequent periods requires beliefs on the likely types of competitors in those periods.

Outputs can only be realigned in costly meetings. The meeting itself is modeled in a reduced form way, assuming only that all firms' cost positions are revealed (e.g. the industry might hire an outside expert to verify the firms' cost positions, as occurs in some regulatory settings), and that the new allocations ensure that firms that have improved their relative cost positions increase their profits. Meetings may be initiated by a firm that is no longer satisfied with the market allocation agreed upon in the last meeting; say because its investment activities were particularly successful, indicating that it would be likely to be allocated higher profits in a new allocation. Meetings are also used to reallocate output following an entry or exit decision 6.

We then use our algorithm to numerically analyze how markets with these features behave. Our base case numerical results describe an industry with one to four firms active, but it is a duopoly 92% of the time. There are long periods in which the same two active firms share the market; so in this sense the market structure is stable (we see either entry or exit in only 3% of the periods). However this hides an intense investment competition between the two incumbents, a competition which results in demands for renegotiating quantity allocations in about a third of the periods.

We illustrate comparative static analysis by comparing our base case to a case with an increased cost of a meeting (say because of increased supervision by regulatory authorities), and we also compare the base case to results from two different institutional settings: (i) a full information (FI) case in which firms observe the cost positions of their rivals, and (ii) a model with asymmetric information but in which side payments are feasible and in meetings the total production is allocated to the most efficient firm.

When we increase the cost of a meeting we increase the cost of doing business, and hence tend to generate equilibria with fewer firms active. Since our model is one of cost reducing investments, larger market shares increase the incentives to invest and lower prices. Thus an increased meeting cost can generate increased consumer surplus.

6 In a previous paper (Fershtman and Pakes, 2000) we used numerical analysis to study a complete information collusive industry. One of our conclusions was that collusive behavior may be beneficial also to consumers as it triggers larger investment, more variety and less concentrated market structures.

Comparing the benchmark case with the FI equilibrium we find that the FI equilibrium has much less renegotiation. This reflects the fact that firms now only call meetings when they know they would benefit from them, a fact which generates higher producer surplus in the FI environment. However consumer surplus is higher in the model with Asymmetric Information (AI). This is because in the AI model prices tend to be set after a firm has positive investment shocks, and therefore when there are lower costs of production. In the FI model firms demand a meeting whenever their relative position is improved, and this can occur despite high costs if its competitors' costs are yet higher.

The comparison between the base model and the model with side payments which allocates total production to the efficient firm reveals a large difference in renegotiation and investment incentives. In the case where only the efficient firm produces, the costs and benefits of changes in cost positions in the periods between meetings fall entirely on the firm producing, which consequently increases its incentives to invest. Unlike in the base case, this firm will tend to call meetings when its investments are not successful, for otherwise it must produce a large quantity at a loss. Still there is a potential for a large gain in both producer and consumer surplus as a result of a more efficient production allocation, at least provided it does not change the number of firms typically active.

Related Literature on Collusion.

There is a large literature in I.O. on price setting arrangements in oligopolistic markets. The theoretical literature has mainly focused on the trade-off between the short run gains from deviations from a collusive agreement and the costs, or penalties, that can be imposed on deviants. A number of observers have noted that the temptation of short run gains is not the only problem a cartel must deal with. Apparently the reconciliation of differences in demands among member firms, often related to differences in the firms' perceived cost positions, is an additional source of cartel instability. Posner (1976, p.65), for example, states that "Among the obstacles to fixing a mutually satisfactory price are the conflicting interests of sellers having different costs" 7.

7 See also Scherer (1980, p.199) who writes "different sellers are likely to have at least slightly divergent notions about the most advantageous price. Especially with homogeneous products, these conflicting views must be reconciled if joint profits are to be held near the potential maximum".

Levenstein's (1997) study of collusive behavior among Bromine producers pointed out that half of the price wars, and in particular the more severe ones, occurred when one of the firms demanded a better allocation of the collusive market shares and the others disagreed. More recently, studies of the collusive agreements in Lysine came to a similar conclusion: an agreement could not be reached until the productive capabilities of ADM were verified (de Roos, 2000b). In the Bromine and Lysine cases the price cuts were announced ahead of time, so there was no attempt to gain short run profits by deviating from the prescribed prices, and price wars were not a punishment for deviant prices. Disagreements are more likely to be a recurrent feature of a cartel's environment if firms are asymmetric, the extent of the asymmetry changes over time, and there is incomplete information on the current positions of the member firms.

There have been a number of studies of collusion among asymmetric firms. For the most part the studies assumed an exogenous source of asymmetry that stayed constant over time and focused on how these asymmetries impacted collusive possibilities (e.g. Schmalensee (1987), Harrington (1989), and Compte, Jenny and Rey (2002)). Our earlier work (Fershtman and Pakes, 2000) allows for endogenously evolving asymmetries which result from entry, exit, and investment processes. As a result, in addition to studying how asymmetries affect collusive possibilities, it also studies how collusion can affect the extent of asymmetry 8. Still it assumes that all firms' states are public information and that the source of cartel instability is the short run gains from deviation, rather than a desire for a reallocation of power or profits among cartel members.

In two recent papers Athey and Bagwell (2001) and Athey, Bagwell and Sanchirico (2004) studied optimal collusive behavior in an infinitely repeated Bertrand game with private information. Each firm receives a privately observed i.i.d. cost shock every period. The cartel would like to ensure that production is done each period by the low cost firm, but this is difficult to achieve when cost positions are private information. Closest to our work is their recent paper, Athey and Bagwell (2004), where they allow for persistent cost shocks whose evolution is exogenous (rather than determined by endogenous investments). They use beliefs on competitors' types as a state variable, and provide conditions that insure that one can obtain best collusive equilibria with arbitrary history dependent strategies 9.

8 See also Fershtman and Gandal (2000) for a two period analysis of a disadvantageous semi-collusive market.

2 An Endogenously Asymmetric Cartel.

We consider an industry in which there are $n_t$ incumbent firms which differ in $\omega_{i,t}$, a characteristic of the firm's cost function which evolves over time with the outcomes of an investment process. All firms in the industry are part of the cartel. The cartel has periodic meetings in which the quantities each firm markets, say $q_{i,t}$, are determined. Investment, entry, and exit decisions are, however, made independently. While an agreement is in force, firms do not deviate from their allocated quantities, but each firm reserves the right to call a meeting in which the current allocation is challenged.

The quantities, together with the firms' state variables, determine the profits of each active firm, say $\pi_i(\omega, q) = \pi(\omega_{i,t}, q_{i,t}, q_{-i,t})$. Note that the firm's profit does not depend on $\omega_{-i,t}$. This simplifies the computations below and is consistent with a model of Cournot quantity competition in which $\omega$ is a determinant of costs 10, but is not necessary for the general framework. In the alternative case profits would be an informationally relevant observable variable.

Investments are made to improve the firm's physical state, its $\omega$ value. $\omega$ evolves over time with the outcome of the investment process and an industry specific exogenous common shock. We assume that neither the investment itself, nor the output of the investment activity, is public information. So in a typical period each firm knows its own $\omega$ but does not know the $\omega$'s of the other firms. The game is therefore a dynamic game with asymmetric information.

In each period firms can decide whether to abide by the existing quantity agreement or initiate a renegotiation by calling a costly meeting. In the meetings the cost positions of all firms are revealed and a new quantity agreement is formed.

9 None of these papers consider the role of the antitrust authorities in determining how such contracts can be written and/or enforced. In contrast, in a recent paper Harrington (2002) studies optimal collusive price dynamics when price changes affect the probability of a cartel being detected by an antitrust authority. Then the antitrust policy affects cartels' behavior.

10 This would happen in differentiated product models also, if there were many state variables per firm and current profits did not depend on the state variable for which there is asymmetric information.

Entry and exit also induce a costly realignment of market shares, but possibly at a different cost. Potential entrants who do enter pay a sunk cost of entry and enter at a particular state in the following period. Firms who exit receive the scrap value.

Evolution of States.

Letting $i$ index firms, we assume that $\omega_{i,t}$ evolves over time with the outcome of the firm's investment process, say $\eta_{i,t}$, and an exogenous process that affects the $\omega$'s of all firms in a given period, say $\nu_t$, both of which take values in a subset of $Z^+$, say in $\Omega(\eta)$ and $\Omega(\nu)$ respectively, so

$$\omega_{i,t+1} = F(\omega_{i,t}, \eta_{i,t+1}, \nu_{t+1}), \qquad (1)$$

where $F: \Omega \times \Omega(\eta) \times \Omega(\nu) \to \Omega$. The distribution of $\eta$ is determined by the family

$$\mathcal{P} = \{p(\cdot \mid x, \omega);\ x \in R^+,\ \omega \in \Omega\}, \qquad (2)$$

while the distribution of $\nu$ is given exogenously. Here $x$ is a control of the firm and we let $c(x): R^+ \to R^+$ be the cost of $x$. Note that, at least in this formulation of our problem, we do not allow the investment of a firm's competitors to affect the evolution of its state variables.

Our computed example is a simple version of this with

$$\omega_{i,t+1} = \omega_{i,t} + \eta_{i,t+1} - \nu_{t+1},$$

with both $\eta$ and $\nu$ taking values in $\{0,1\}$, while

$$p(\eta_{i,t+1} = 1 \mid x_{i,t}, \omega_{i,t}) = \frac{A(\omega_{i,t})\, x_{i,t}}{1 + A(\omega_{i,t})\, x_{i,t}}.$$

Note that the distribution of $\eta_{i,t+1}$ is better, in the stochastic dominance sense, the larger is investment, $x_{i,t}$.

If an incumbent decides to exit it gets a sell-off value of $\phi$ dollars, exits in the next period, and never reappears. We let $\chi_{i,t} \in \{0,1\}$ indicate whether a firm exits ($\chi_{i,t} = 0$) or continues ($\chi_{i,t} = 1$). Potential entrants decide whether to enter or not. To enter they must pay a sunk cost $x^e$ which is uniformly distributed over $[x^e_l, x^e_h]$. The realization of $x^e$ is observed by the potential entrant prior to the entry decision, but is not observable by other players.
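To make the computed example's transition law concrete, here is a minimal simulation sketch. It is only an illustration of the assumptions above; the function names, the particular $A(\omega)$, and the shock probability delta_nu are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_omega(omega, x, A, delta_nu):
    """One draw of omega' = omega + eta - nu for the computed example:
    eta = 1 with probability A(omega)*x / (1 + A(omega)*x), nu = 1 with probability delta_nu."""
    p_success = A(omega) * x / (1.0 + A(omega) * x)
    eta = int(rng.random() < p_success)
    nu = int(rng.random() < delta_nu)
    return omega + eta - nu

# Illustrative parameters: A(omega) decreasing in omega, as assumed later in Section 4.3.
A = lambda omega: 2.0 / (1.0 + omega)
print(next_omega(omega=5, x=1.5, A=A, delta_nu=0.2))
```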

An entrant appears in the following period as an incumbent at $\omega_{i,t+1} = \omega^e_{t+1}$, where $\omega^e$ is given. For simplicity we assume there is at most one entrant in every period, and indicate whether entry occurs by the indicator function $\chi^e \in \{0,1\}$, with $\chi^e = 1$ indicating entry.

There are two types of meetings, depending on the circumstances that trigger them. If a continuing incumbent calls a meeting, or there is entry, then all firms bear a cost of $FK$, plus an additional $\Delta FK$ for the firms that called the meeting. Entrants pay $FK$ plus the entry cost. Meetings also occur when a firm exits, but we assume that such a meeting has a cost of $K$ to each continuing incumbent, with $K \le FK$ 11. All meetings are followed by a new collusive agreement (see below).

Timing of Decisions.

The timing of decisions is as follows. At the beginning of the period there is a realization of the outcome of the investment processes of the last period and realizations of the entry cost and the exogenous shocks. If either (i) a meeting was called in the previous period, (ii) an entry decision was taken during the previous period, or (iii) one or more firms decided to exit during the previous period, then there is a meeting. The meeting allocates quantities according to the rules above. If there is no meeting, quantities are the same as in the previous period.

At this point in the period all incumbents and potential entrants have the information required to make the decisions needed for this period. When we refer to the information set of period $t$ we will be referring to the information available at this point in the period. Simultaneously, profits are allocated and all decisions are made (this includes entry, exit, and investment decisions, as well as decisions on whether to call a meeting for the following period).

Information Sets.

The information set of each player at period $t$ is the whole observable history up to that period.

11 Of course if there is only a single firm active there is no need for coordination and the firm may adjust its output without the costs of a meeting.

12 are "Markovian", but the strategy space must include, in addition to "payo relevant" variables (in the sense of variables that a ect current pro ts), "informational relevant" variables. These are variables that do not a ect payo s directly but they do provide information on the unobserved variables that determine their competitor s actions and, as a result, their own likely future pro ts. Each rm s period t s information set will be denoted by J i;t 2 J Z l where l l < 1, and Z is a nite subset of Z. Note that the fact that the state space is nite dimensional precludes general history dependent strategies. Indeed, as will become clear, for our computational algorithm to make sense we require that there be a subset of states that are visited in nitely often (and an entire history never repeats itself). It is convenient to divide the (payo relevant) information available to each agent into public and private information. If we let J p t be the information that is available to all incumbents and potential entrants at the time quantity decisions were made (i.e. after a meeting if there was one), then J p t = f! t ^(t) ; (t); ((t))g 2 J p ; where variables are de ned as follows. Let ^(t) be the number of periods that have passed since the meeting (if there was a meeting at period t then ^(t) = 0). To insure that the state space is nite we assume imperfect recall; i.e. rms can recall information from at most periods. Thus we set (t) minf^(t); g 12. So at period t rms only recall the precise date of the last meeting if that meeting was less than periods before, otherwise they just realize it was more than periods ago. Note however that! t ^(t) is in the information set in every period since it was always used in period t 1. ((t)) ( t (t)+1 ; : : : ; t ), with the understanding that if a meeting was called in the current period () is empty. As above rms recall at most the common shocks over the last periods. J p = N f1; : : : ; g f0; 1g, where! 2 and N is the maximum number of rms ever active. We invoke regularity conditions similar to 12 This is not the only way to insure a nite dimensional state space; any rule which insures that a meeting will be called in nite time with probability one will do. Also the computational burden of the algorithm increases with multiplicatively with. 11

$J_{i,t}$ will be firm $i$'s information set at $t$ (i.e., the public information and the firm's own state). If the firm is an incumbent then

$$J_{i,t} = \{\omega_{i,t}, J^p_t\} \in \Omega \times \mathcal{J}^p \equiv \mathcal{J},$$

while if it is a potential entrant $J_{e(t)} = (J^p_t, x^e)$, where $x^e$ is its entry cost.

Strategies and Their Costs.

Recall that the following decisions are made simultaneously:

- exit, or $\chi_{i,t}: (J_{i,t}) \to \{0,1\}$,
- entry, or $\chi^e_t: (J^p_t, x^e) \to \{0,1\}$,
- calling a meeting, or $m_{i,t}: (J_{i,t}) \to \{0,1\}$, and
- investment, or $x_{i,t}: (J_{i,t}) \to R^+$.

We have already specified the costs of entry and exit. The cost of investment is $c(x) = x$. An incumbent firm who calls a meeting must pay $(FK + \Delta FK)$; all other participants pay only $FK$, as does a new entrant. That is, the cost of calling a meeting whenever there is more than one firm is given by

$$c(m_i, m_{-i}, \chi^e) = m_i (FK + \Delta FK) + \mathbf{1}\Big\{\sum_{j \ne i} m_j > 0 \ \text{or} \ \chi^e = 1\Big\}\,[1 - m_i]\, FK,$$

where $\mathbf{1}\{\cdot\}$ is the indicator function which takes the value one if the condition is satisfied. We have assumed, as is done in our computations, that the cost of a meeting which reallocates quantities after an exit is zero. Of course if there is only one firm in the industry it can change its output without incurring the costs of calling a meeting.
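The cost of the meeting-calling stage can be written as a one-line function; a sketch, with FK and dFK standing in for the two fixed costs and the argument names chosen by us.

```python
def meeting_cost(m_i: int, m_others: list, entry: int, FK: float, dFK: float) -> float:
    """c(m_i, m_{-i}, chi^e): firm i pays FK + dFK if it calls a meeting itself,
    and FK if someone else calls one (or there is entry) while i does not."""
    others_trigger = 1 if (sum(m_others) > 0 or entry == 1) else 0
    return m_i * (FK + dFK) + others_trigger * (1 - m_i) * FK

# Firm i does not call, but a rival does: i pays only FK.
print(meeting_cost(m_i=0, m_others=[1, 0], entry=0, FK=1.0, dFK=0.5))  # 1.0
```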

The Profit Function.

We assume a linear market demand function, i.e. $P = a - bQ$, where $P$ is the market price and $Q$ is the total quantity produced by all firms. Marginal costs do not change with quantity, but do vary with the firm's productivity $\omega$; i.e., the cost of producing $q_i$ is $mc(\omega_i)\, q_i$. We want the quantity vector agreed to in meetings to reflect the firms' relative positions in the market. We therefore follow Schmalensee's (1987) suggestion and assume that relative market shares are determined by the noncooperative Nash equilibrium of that period, but total output maximizes the sum of profits conditional on these shares 13. Thus if we let $q^N(\omega)$ be the Cournot equilibrium output vector, and $s^N(\omega)$ the Cournot equilibrium market shares, then total output ($Q^c = \sum_i q_i$) is determined as

$$Q^c(\omega) = \arg\max_Q \sum_i \pi(\omega_i, s^N(\omega), Q),$$

where $\pi(\omega_i, s^N(\omega), Q) = (a - bQ)\, s^N_i(\omega)\, Q - mc(\omega_i)\, s^N_i(\omega)\, Q$. All shares are positive provided $a/n$ is large enough relative to marginal costs, and in this case 14

$$Q^c(\omega) = \frac{a - \sum_i s^N_i(\omega)\, mc(\omega_i)}{2b}.$$

We assume that firms wish to maximize their expected discounted profits and we let $\beta$, $0 < \beta < 1$, be the common discount factor for all firms.

A Comment on the General Case.

There are many ways in which this model can be generalized without affecting either our definition of equilibrium or our computational algorithm (see Fershtman and Pakes, 2005). For example, though implicitly the fact that $m_i = 0$ constitutes a signal about the likely $\omega$ of firm $i$, both the model and the computational algorithm could incorporate more explicit signaling provided that the domain of the signals is finite. Indeed the effective limitation on the models we can deal with is the finite state space requirement of the computational algorithm 15.

13 An alternative is to assume a bargaining solution as in Fershtman and Pakes (2000). The only disadvantage of this is that it is computationally more burdensome.

14 If $s^N_i(\omega) < 0$, we assume that the firm does not produce and then we recalculate the market shares. Also we note that though it is not necessarily true that our assumptions guarantee that collusive profits are higher than the Nash equilibrium profits, this condition is satisfied for the range of parameters that we use in our analysis.

15 Thus if there are additional continuous controls, their effect must be to change the probabilities of occurrence of a finite number of states.
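As an illustration of the quantity rule, the sketch below computes the Cournot shares and the collusive total output for the linear-demand, constant-marginal-cost case just described. It is a sketch under those assumptions only (names are ours); the footnote 14 adjustment for negative Cournot quantities is not included.

```python
import numpy as np

def collusive_quantities(mc, a, b):
    """Cournot shares s^N(omega) and collusive total output
    Q^c = (a - sum_i s^N_i * mc_i) / (2b) under demand P = a - bQ."""
    mc = np.asarray(mc, dtype=float)
    n = len(mc)
    # Cournot equilibrium quantities: q_i = (a - (n+1)*mc_i + sum_j mc_j) / (b*(n+1)).
    q_cournot = (a - (n + 1) * mc + mc.sum()) / (b * (n + 1))
    shares = q_cournot / q_cournot.sum()
    Q_c = (a - (shares * mc).sum()) / (2.0 * b)
    return shares, Q_c

shares, Q_c = collusive_quantities(mc=[2.0, 3.0], a=10.0, b=1.0)
print(shares, Q_c)  # the lower-cost firm receives the larger share
```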

3 Equilibrium.

Let $S = \Omega^N \times \mathcal{J}^p$ and let $s$ be its generic element. Note that an $s \in S$ defines a tuple $(J^p, J_1, \ldots, J_{n(J)})$, where $J_i = (J^p, \omega_i)$ provides the information available to the different incumbents, and $(J^p, x^e)$ provides the information available to the potential entrant. We will say that $J_i$ (or $J^p$) is a component of $s$ if there is a $(J^p, J_{-i})$ such that $(J^p, J_i, J_{-i}) = s$.

Definition: Applied Markov Perfect Equilibrium.

An Applied Markov Perfect Equilibrium is a couple consisting of: (i) evaluations $\{W(\eta \mid J, m):\ J \in \mathcal{J},\ \eta \in \{0,1\},\ m \in \{0,1\}\}$, which represent the expected discounted value of future net cash flow conditional on the realization of $\eta$ and a choice of $m$, together with $V_e(J^p)$ for all $J^p \in \mathcal{J}^p$, which represents the value of entry; and (ii) strategies $(x(J), \chi(J), m(J), \chi^e(J^p, x^e))$ for each $J^p$ and each $J$ which is a component of an $s \in S$; such that

C1. Strategies are optimal given $W(\cdot)$; that is, they solve

$$\max_{\chi \in \{0,1\}} \left\{ (1-\chi)\,\phi + \chi \left[ \max_{m} \sup_{x} \Big( \sum_{\eta} W(\eta \mid J, m)\, p(\eta \mid x, \omega) - x \Big) \right] \right\},$$

and $\chi^e = 1$ only if $V_e(J^p) \ge x^e$; and

C2. the $W$'s and the $V_e$'s are the expected discounted values of future net cash flows when the players play the equilibrium strategies; i.e. if

$$V(J) = \pi(J) + \max_{\chi \in \{0,1\}} \left\{ (1-\chi)\,\phi + \chi \left[ \max_{m} \sup_{x} \Big( \sum_{\eta} W(\eta \mid J, m)\, p(\eta \mid x, \omega) - x \Big) \right] \right\},$$

then

$$W(\eta \mid J, m) = \beta\, E\big[ V(J') - c(m, m_{-i}, \chi^e) \,\big|\, \eta,\ J,\ m \big],$$

and

$$V_e(J^p) = \beta\, E\big[ -FK + V(J'(\omega(e))) \,\big|\, J^p \big],$$

where $\omega(e)$ is the $\omega$ at which a new entrant begins operations, and the expectation, which is over $(J', m_{-i}, \chi^e)$, is consistent with the probability distribution of competitors' actions and future locations induced by the rules of motion and the equilibrium strategies (obtained from C1).

Issues of existence and uniqueness for complete information models within the Ericson and Pakes (1995) framework are discussed in Doraszelski and Satterthwaite (2003). Here we simply assume existence and consider issues associated with analyzing the equilibrium. As noted, $S$ has a finite number of elements. Consequently any equilibrium generates a finite state Markov chain on $S$.

Remarks.

First note that the definition does not require beliefs about the actual state of competitors; rather all we require is our continuation values, i.e. the $\{W(\eta \mid J_{i,t}, m)\}$. These are conditional expectations conditioned on $(J^p, \omega)$ for the incumbents, and $(J^p, x^e)$ for the potential entrant. The conditioning set is the smallest set of variables observed by the agents that are either payoff relevant and/or informationally relevant in the sense that they help predict future outcomes of their competitors. Note that all the variables we condition on were realized at or after the last meeting. Of course were we to explicitly construct posteriors on the distribution of a firm's competitors' states (or types), then we might find that those distributions can be written as functions of a smaller set of sufficient statistics than those in our conditioning set. However these sufficient statistics are not directly observed 16.

The reason for the term Applied in our definition of equilibrium can now be clarified. The definition tells the applied researcher what states one must distinguish between without calling on that researcher to do any auxiliary calculations.

16 Note that were we to construct a perfect Bayesian equilibrium that included a consistent set of posteriors on competitors' $\omega$'s, then that equilibrium can be shown to be consistent with our definition of equilibrium. This is because every set of beliefs about competitors' $\omega$'s, together with the associated equilibrium strategies, will generate a unique set of continuation values, i.e. of $W$'s, that satisfy our consistency conditions. On the other hand it is possible to have two states (say $(J^p, \omega)$ and $(J^{p\prime}, \omega)$) that are associated with the same beliefs about competitors' $\omega$'s and policies in a perfect Bayesian equilibrium but that generate different policies in the Applied Markov Perfect equilibrium.

Once the researcher knows what states must be distinguished, there are a number of non- and semi-parametric ways of estimating continuation values (see the last section of Ackerberg et al., forthcoming), and once continuation values are estimated the empirical implications of the model are easy to simulate.

We have just defined our equilibrium and we are now going to provide an algorithm that teaches itself the statistics needed for equilibrium play. We should note, however, that an alternative approach would have been to view the algorithm itself as the way players learn the statistics needed to choose their policies, and justify the output of the algorithm in that way. A reader who subscribes to the latter approach may be less interested in the section of the paper on testing (which follows the computational section) 17.

4 Computational Algorithm.

We begin by pointing out some implications of our equilibrium concept that we will use intensively in building our computational algorithm.

First note that the sequence $\{(J_{1,t}, \ldots, J_{n,t})\}_{t=1}^{\infty}$ actually observed is a realization from a finite state Markov chain. It follows that it will eventually wander into a recurrent class of points, say $R \subset \mathcal{J}^n$, and once in $R$ will remain in it forever (with probability one; see Freedman, 1983). So to analyze equilibrium for sub-games starting at a point in $R$ we need only know equilibrium policies on $R$.

Second, given equilibrium play, the objective (or empirical) distribution of outcomes, say $p^e(J_i \mid J_i', \chi_i)$, is generated by random draws from the equilibrium $p(J_i \mid J_i', \chi, m(J_i'))$. This implies that the empirical distribution from points that are visited infinitely often converges to $p(J_i \mid J_i', \chi, m(J_i'))$, the equilibrium distribution of outcomes.

Finally note that were we to know the $\{W(\eta \mid J_{i,t}, m)\}$ we would have enough information to compute equilibrium policies for the $i$th agent in time period $t$.

17 On the other hand, there are several other issues that arise were one to take this approach seriously, among them: the question of whether (and how) an agent can learn from the experience of other agents, and how much information an agent gains about its value in a particular state from the agent's experience in related states.

Together the first and third points imply that to analyze sub-games from a point in $R$ we only need to know the $\{W(\eta \mid J_i, m(J_i))\}$ for each $J_i$ component of an $s \in R$. Further, the second point enables us to test whether a candidate set of $\{W(\eta \mid J_i, m(J_i))\}$ on a subset of the space obtained from the algorithm are in fact equilibrium $\{W(\eta \mid J_i, m(J_i))\}$. We simply substitute the empirical distribution of outcomes for the $p(J_i' \mid J_i, \chi, m)$ on the right hand side of conditions C1 and C2 above and test for the equality signs in those conditions (taking into account the variance in the estimated values) 18.

4.1 Overview of the Algorithm.

Consider the Bellman equation with the information available just before all decisions are made and profits allocated. This will define equilibrium values and policies. If we let $\pi(J_{i,t}) \equiv \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t}))$, and omit the $(i,t)$ index for notational convenience, then the Bellman equation is

$$V(J) = \pi(J) + \max\left\{ \phi,\ \max_{m \in \{0,1\}} \sup_{x \ge 0} \Big[ -x + \sum_{\eta} W(\eta \mid J, m)\, p(\eta \mid x, \omega) \Big] \right\}, \qquad (3)$$

where

$$W(\eta \mid J, m) \equiv \beta\, E\big\{ V(J') - c(m, m_{-i}, \chi^e) \,\big|\, \eta' = \eta,\ J,\ m,\ \chi = 1 \big\}. \qquad (4)$$

Writing the Bellman equation in this way makes it easy to see that were we (or the firms) to know the values of $\{W(\eta \mid J, m)\}$, they would be sufficient for calculating optimal policies and the values associated with them. That is, the $\{W(\eta \mid J, m)\}$ are sufficient statistics for the decision problem. Consequently our algorithm will look for an efficient way of computing them.

Note, however, that were we to compute the fixed point defining equilibrium behavior iteratively, i.e. if we were to compute $V(\cdot)$ at each iteration of a successive approximation routine, we would have to evaluate $E_{J'}\{V(J') \mid \eta' = \eta,\ J,\ m,\ \chi = 1\}$ explicitly at each point at each iteration.

18 Strictly speaking this can only be done for points that are interior points of the recurrent class, where, as in Pakes and McGuire (2001), interior points are points at which agents cannot communicate to points outside of the recurrent class no matter which among the feasible strategies are played (by definition they can only transit to points inside the recurrent class if they play equilibrium strategies). Under our assumptions the data itself should enable us to separate the recurrent points into interior and boundary points.
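To make the sufficiency of the $W$'s concrete: under the computed example's success probability $p(x, \omega) = A(\omega)x/(1 + A(\omega)x)$, the inner problem in (3) has a closed-form investment choice, so policies follow from the stored $W$'s with a few arithmetic operations and no integration over competitors' states. The sketch below is ours (illustrative names and numbers); meeting costs are already netted out inside the $W$'s, as in (4).

```python
import math

def best_x(W1, W0, A):
    """argmax_x  -x + W1*p(x) + W0*(1 - p(x))  with  p(x) = A*x/(1 + A*x)."""
    gain = W1 - W0
    if gain <= 0 or A <= 0:
        return 0.0
    # First-order condition: (1 + A*x)^2 = A*gain  =>  x = (sqrt(A*gain) - 1)/A.
    return max(0.0, (math.sqrt(A * gain) - 1.0) / A)

def best_policy(pi, W, A, phi):
    """Exit/continue, meeting call m, and investment x, given W[m] = (W0, W1)
    for m in {0, 1}, the current profit pi, and the sell-off value phi.
    Meeting costs are assumed to be embedded in the W's, as in equation (4)."""
    best = ("exit", None, 0.0, pi + phi)
    for m in (0, 1):
        W0, W1 = W[m]
        x = best_x(W1, W0, A)
        p = A * x / (1.0 + A * x)
        value = pi - x + W1 * p + W0 * (1.0 - p)
        if value > best[3]:
            best = ("continue", m, x, value)
    return best

print(best_policy(pi=1.0, W={0: (8.0, 9.0), 1: (7.5, 10.0)}, A=2.0, phi=4.0))
```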

A successive approximation routine of that sort would require explicit calculation of posterior probabilities of the form $\Pr(\omega_{-i,t} = \omega_{-i} \mid J_{i,t}, m_{-i,t}, \chi_{-i,t} = 1)$ at each point. These probabilities would have to be calculated recursively and kept in memory. Moreover the cardinality of the set of $J_{i,t}$ (and hence the number of distributions that would have to be kept in memory) is subject to the curse of dimensionality (it would increase exponentially in $\bar{\varsigma}$, the number of active firms, the cardinality of $\Omega$, etc.).

We present a computational algorithm which never requires us to calculate this expectation, and hence never requires us to calculate and retain posterior probabilities. Instead we compute the $W(\eta \mid J, m)$ in the equation above iteratively, using techniques analogous to those used in the stochastic approximation (or reinforcement learning) literature (see, e.g., Bertsekas and Tsitsiklis (1996) and the literature cited there).

The stochastic approximation algorithm has an initiation procedure which provides initial values of $\{W(\eta \mid J, m)\}$ for all $\eta$ and $m \in \{0,1\}$ at different locations (say $L$) of the algorithm, where a location designates the public and the private information sets of all firms active. Starting at any $L^0$ the algorithm calculates policies for all agents by assuming that the $\{W^0(\eta \mid J, m)\}$ are the true continuation values, or $\{W(\eta \mid J, m)\}$. Note that this avoids the need to compute an integral over possible future values in order to obtain policies. Given these policies, and computer generated draws for the realizations of all random variables which determine outcomes conditional on the policies, the algorithm moves to a new location, say $L^1$. The draws are then treated as if they were random draws from the true distribution of outcomes given the policies. Since the $W(\eta \mid J, m)$ at the information sets in $L^0$ can be constructed as expectations over these distributions, the draws themselves are used to update our estimates of them, i.e. to obtain $W^1(\eta \mid J, m)$. The process is then continued iteratively from $L^1$.

Since by updating our estimates of $W(\eta \mid J, m)$ we are updating a mean, if the policies converge so should our estimates of the $\{W(\cdot)\}$ (and vice versa). Moreover, since the estimate of the mean is a sample average, we expect increasingly accurate estimates of the $W(\cdot)$ from a particular location the more times we visit that location. The locations visited repeatedly will eventually converge to a recurrent class of locations of the Markov process generating $W(\cdot)$.
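The iteration just described can be summarized in a short skeleton; this is only a structural sketch (all names are ours), with the policy, draw, and averaging steps standing for the routines detailed in Section 4.3.

```python
def stochastic_algorithm(L0, W, n_iter, compute_policies, draw_outcomes, update_averages):
    """At each location L^k, treat the stored W as the true continuation values,
    compute policies from them, draw the random outcomes those policies imply,
    move to L^{k+1}, and fold the realized values back into W as running averages.
    No posterior distributions and no explicit integration appear anywhere."""
    L = L0
    for k in range(n_iter):
        policies = compute_policies(L, W)                 # Update 1
        draws, L_next = draw_outcomes(L, policies)        # Update 2
        update_averages(W, L, policies, draws, L_next)    # Updates 3 and 4
        L = L_next
    return W
```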

If the recurrent class is the entire state space, there are no further conceptual problems. On the other hand, in applied problems the recurrent classes are often much smaller than the entire state space (see Pakes and McGuire, 2001). This is a computational advantage, as our algorithm need not repeatedly visit nor store information from points not in $R$, but it does raise a theoretical issue. If $R$ is smaller than $S$ then there are points in $R$ which could communicate with points outside of $R$ if feasible (but non-optimal) policies were followed 19. Pakes and McGuire (2001) call such points boundary points and note that in order for the algorithm to obtain accurate estimates of the $W(\cdot)$ associated with these points it would need to visit points outside of the recurrent class repeatedly (something the stochastic algorithm will not do without intervention).

As a result, if we do not impose any further conditions the algorithm will identify a self confirming equilibrium in the sense of Fudenberg and Levine (1993); that is, an equilibrium where agents never observe outcomes which are inconsistent with their beliefs about future evaluations. Alternatively, as shown by Pakes and McGuire (2001), if we have upper bound estimates of the true $W(\cdot)$ and we ensure that the $W(\cdot)$ outputted by the algorithm is greater than the upper bound, then we ensure that the equilibrium is an Applied Markov Perfect equilibrium in the sense defined above. This upper bound could be obtained from exogenous information (e.g. the monopolist's evaluation), or by restarting repeatedly at points outside of $R$. Numerically we have found that, provided we initiate the algorithm with $W(\cdot)$'s that are large enough, the boundary points are hit so infrequently that the way one treats them does not affect the test statistic introduced below and therefore could not have much of an impact on our final estimates of the $W(\cdot)$.

We now provide a formalization of the algorithm and then come back to the issue of testing whether a candidate set of $W(\cdot)$ outputted by our algorithm does in fact satisfy the equilibrium conditions.

4.2 Notation for the Stochastic Algorithm.

The iterative stochastic algorithm makes repeated use of the relationship between continuation values and the $\{W(\eta \mid J, m)\}$. Consequently it will be helpful if we begin with those relationships. Rewrite the Bellman equation as

19 The definition of the recurrent class ensures that points in the recurrent class only communicate with other points in that class if optimal policies are followed.

$$V(J_{i,t}) = \max\left\{ \phi + \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t})),\ \max_{m_{i,t} \in \{0,1\}} V^c(J_{i,t}, m_{i,t}) \right\}, \qquad (5)$$

where

$$V^c(J_{i,t}, m_{i,t} = 1) = \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t})) + \max_{x \in R^+} E\left[ -x + \beta\big(V(J_{i,t+1}) - c(m_i = 1, m_{-i}, \chi^e)\big) \,\Big|\, J_{i,t},\ x,\ m_{i,t} = 1 \right], \qquad (6)$$

and

$$V^c(J_{i,t}, m_{i,t} = 0) = \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t})) + \max_{x \in R^+} E\left[ -x + \beta\big(V(J_{i,t+1}) - c(m_i = 0, m_{-i}, \chi^e)\big) \,\Big|\, J_{i,t},\ x,\ m_{i,t} = 0 \right]. \qquad (7)$$

Note that if

$$W(\eta \mid J_{i,t}, m_{i,t} = 1) \equiv \beta\, E\left[ V(J_{i,t+1}) - c(m_i = 1, m_{-i}, \chi^e) \,\Big|\, \eta_{i,t+1} = \eta,\ J_{i,t},\ m_{i,t} = 1 \right]$$

for $\eta \in \{0,1\}$, then

$$V^c(J_{i,t}, m_{i,t} = 1) = \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t})) + \max_{x \in R^+}\left[ -x + W(1 \mid J_{i,t}, m_{i,t} = 1)\, p(x, \omega_i) + W(0 \mid J_{i,t}, m_{i,t} = 1)\, (1 - p(x, \omega_i)) \right]. \qquad (8)$$

Similarly, if

$$W(\eta \mid J_{i,t}, m_{i,t} = 0) \equiv \beta\, E\left[ V(J_{i,t+1}) - c(m_i = 0, m_{-i}, \chi^e) \,\Big|\, \eta_{i,t+1} = \eta,\ J_{i,t},\ m_{i,t} = 0 \right]$$

for $\eta \in \{0,1\}$, then

$$V^c(J_{i,t}, m_{i,t} = 0) = \pi(\omega_{i,t}, q(\omega_{t-\hat{\varsigma}_t})) + \max_{x \in R^+}\left[ -x + W(1 \mid J_{i,t}, m_{i,t} = 0)\, p(x, \omega_i) + W(0 \mid J_{i,t}, m_{i,t} = 0)\, (1 - p(x, \omega_i)) \right]. \qquad (9)$$

Finally, the Bellman equation for a potential entrant is

$$V_e(J_t) = \beta\, E\left[ V(\omega^e_{t+1}, J_{t+1}) - FK \cdot \mathbf{1}\{\#\ \text{of firms} > 0\} \,\Big|\, \chi^e_t = 1,\ J_t \right].$$

4.3 Details of the Stochastic Algorithm.

An iteration, which will be indexed by $k$, is defined by a location, say $L^k$, and by a memory, which will be designated $M^k$.

4.3.1 Storage and Policies at Iteration k.

The location is defined as a tuple $L^k = \{J^k, \omega^k_1, \ldots, \omega^k_{n(J(k))}\}$, where $\omega^k_j$ is the current $\omega$ of the $j$th largest firm in the $\omega_{t-\hat{\varsigma}(t)}$ specified in $J^k$, and, as before,

$$J^k = \{\omega^k_{t-\hat{\varsigma}(t)},\ \varsigma(t)^k,\ \nu(\varsigma(t))^k\}.$$

There is the possibility of storage in memory at each possible $L$. Distinct objects are stored at $J$. Further, for each $J$ there can be items stored at each of the triples $\{J, j, \omega\}$ for $j = 1, \ldots, n(J)$ and $\omega \in \Omega$ (of course at any iteration some of these will have no information stored). I.e. for each $J^k$ we begin with the largest firm in the $\omega$ tuple defining $J^k$, find out its current $\omega$ and list objects under the triple $(J^k, 1, \omega)$, then continue and store different objects under the triples $(J^k, 2, \omega)$ and so on.

The items stored are as follows. For each $J^k$ we will have $M(J^k)$ stored at $J^k$, where $M(J^k)$ contains:

- the number of times we have visited $J^k$, or $h^k(J^k)$, and, if $h^k(J^k) > 0$,
- $V^k_e(J^k)$, the $k$th iteration's estimate of the value of entry at $J^k$, and
- $q(J^k, j)$ for $j = 1, \ldots, n(J^k)$.

Note that if $h^k(J^k) = 0$, nothing is in memory for that point. For each $\{J^k, j, \omega\}$ we have $M(J^k, j, \omega)$, the information stored in memory at $(J^k, j, \omega)$, given by:

- $h^k(J^k, j, \omega)$, the number of times we have hit $(J^k, j, \omega)$, and, if $h^k(J^k, j, \omega) > 0$,
- $W^k(\eta \mid J^k, j, \omega, m)$ for $m \in \{0,1\}$ and $\eta \in \{0,1\}$.
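In code, the memory just described is naturally a pair of hash maps, one keyed by $J$ and one keyed by the triples $(J, j, \omega)$; a sketch with our own names.

```python
from collections import defaultdict

# Keyed by the public information set J (a hashable tuple): visit count h(J),
# the current estimate of the entry value V_e(J), and the quantities q(J, j).
M_public = defaultdict(lambda: {"h": 0, "V_e": None, "q": {}})

# Keyed by (J, j, omega): visit count h(J, j, omega) and the stored values
# W(eta | J, j, omega, m) for eta in {0, 1} and m in {0, 1}.
M_firm = defaultdict(lambda: {"h": 0,
                              "W": {(eta, m): None for eta in (0, 1) for m in (0, 1)}})

# Nothing is stored until a point is visited; defaultdict creates entries on first use.
```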

4.3.2 Updating and Initialization.

We require the following updates:

- Update $L^k = \{J^k, \omega^k_1, \ldots, \omega^k_{n(J(k))}\} \to L^{k+1} = \{J^{k+1}, \omega^{k+1}_1, \ldots, \omega^{k+1}_{n(J(k+1))}\}$.
- Update $M(J^k)$. Here we update only $V^k_e(J^k)$ and $h^k(J^k)$; see below.
- Update $M(J^k, j, \omega)$, for $j = 1, \ldots, n(J^k)$. Here we update $W^k(\eta \mid J^k, j, \omega, m) \to W^{k+1}(\eta \mid J^k, j, \omega, m)$ for $\eta \in \{0,1\}$ and $m \in \{0,1\}$, and $h^k(J^k, j, \omega)$; see below.

In doing these updates we will use the operator $V(\cdot \mid W)$ defined as

$$V(J, j, \omega \mid W) \equiv \max\left\{ \phi + \pi(J, j, \omega),\ V^c(J, j, \omega \mid W) \right\},$$

where

$$V^c(J, j, \omega \mid W) \equiv \pi(J, j, \omega) + \max_{m \in \{0,1\}} \max_x \left[ -x + W(1 \mid J, j, \omega, m)\, p(x, \omega) + W(0 \mid J, j, \omega, m)\, (1 - p(x, \omega)) \right].$$

While we do the update we initialize if required. That is:

- If $h^k(J^{k+1}) = 0$, compute $q(\omega^{k+1}) = s^N(\omega) Q^c(\omega)$ (from the quantity rule of Section 2) and put it in memory for $J^{k+1}$. Also, if we have to initialize, we set $V^k_e(J^k) = W^k(0 \mid J^k + I(\omega^e),\ j(\omega^e, J^k + I(\omega^e)),\ \omega^e,\ m = 1)$, where here and below $\omega^{k+1} + I(z)$ adds an $\omega = z$ to the $\omega^{k+1}$ vector and then reorders it in the natural order, and where $j(z, J)$ provides the order of the $\omega = z$ element in the $\omega$ vector defined by $J$. If $W^k(0 \mid J^k + I(\omega^e),\ j(\omega^e, J^k + I(\omega^e)),\ \omega^e,\ m = 1)$ is not in memory, initialize it as specified below.

20 It might also be efficient to store $V^k(J^k, j, \omega)$, and $x^k(J^k, j, \omega)$, $\chi^k(J^k, j, \omega)$, $m^k(J^k, j, \omega)$, rather than compute them as needed from the equations above.

- If $h^k(J^{k+1}, j, \omega) = 0$, calculate $\pi(J, j, \omega) = \pi(\omega, s^N(\omega) Q^c(\omega))$ (from the profit function of Section 2) and put it in memory. Also initialize

$$W^1(\eta \mid J, j, \omega, m = 0) = \pi(J, j, \omega + \eta)/(1 - \beta), \quad \text{for } \eta \in \{0,1\},$$

and

$$W^1(\eta \mid J, j, \omega, m = 1) = \pi\big(J - I(j, J) + I(\omega + \eta),\ j(J - I(j, J) + I(\omega + \eta)),\ \omega + \eta\big)/(1 - \beta), \quad \text{for } \eta \in \{0,1\}.$$

After initializing the $W$'s, compute optimal policies given the $W$'s.

4.3.3 Update 1: Policies.

- Get the realization of $x^e$. Determine whether $\chi^e_k(J^k) = 1$, i.e. whether $V^k_e(J^k) \ge x^e$.
- Choose $x(J^k, j, \omega, m)$ as

$$\arg\max_x \Big[ -x + \sum_{\eta} W^k(\eta \mid J^k, j, \omega, m)\, p(\eta \mid x, \omega) \Big], \quad \text{for } m \in \{0,1\}.$$

- Calculate

$$V^c(J^k, j, \omega, m) = \pi(J^k, j, \omega) - x^k(J^k, j, \omega, m) + \sum_{\eta} W^k(\eta \mid J^k, j, \omega, m)\, p(\eta \mid x(J^k, j, \omega, m), \omega)$$

for $m \in \{0,1\}$.
- Calculate $m(J^k, j, \omega) = \arg\max_{m \in \{0,1\}} V^c(J^k, j, \omega, m)$ and $V^c(J^k, j, \omega) = \max_{m \in \{0,1\}} V^c(J^k, j, \omega, m)$.
- Calculate $\chi(J^k, j, \omega)$: set $\chi(J^k, j, \omega) = 0$ if $V^c(J^k, j, \omega) \le \phi + \pi(J^k, j, \omega)$.

Now we have all the policies. For each $(J^k, j, \omega)$ we have calculated $x(J^k, j, \omega, m)$ and then $V^c(J^k, j, \omega, m)$ for $m \in \{0,1\}$, then $m(J^k, j, \omega)$, and finally $\chi(J^k, j, \omega)$.

4.3.4 Update 2: Finding the New Location.

To obtain the new location we need draws from the distributions determined by the policies just calculated. So we make the draws first, keep them in the working file, and use them below.

- Draw $\nu^{k+1}$. Here $\nu^{k+1} = 1$ with probability $\delta$ and $\nu^{k+1} = 0$ with probability $1 - \delta$.
- For each $(J^k, j, \omega)$ such that $\chi(J^k, j, \omega) = 1$, use $x(J^k, j, \omega, m(J^k, j, \omega))$ and $\omega$ to draw $\eta^{k+1}_j$ and calculate $\omega^{k+1}(J, j) = \omega^k(J, j) + \eta^{k+1}_j - \nu^{k+1}$ (note that $\chi^k(J^k_j) = 0 \Rightarrow \omega^{k+1}_j = 0$). Note that $\eta^{k+1}_j = 1$ with probability $A(\omega_j)\, x_j(m)/(1 + A(\omega_j)\, x_j(m))$ and $\eta^{k+1}_j = 0$ with probability $1/(1 + A(\omega_j)\, x_j(m))$, where $A(\omega_j)$ is decreasing in $\omega_j$.

Now we have all the needed random draws. This enables us to update $L^k$.

- If $\sum_i [1 - \chi(J^k_i)] = 0$ and $\sum_j m(J^k_j) = 0$ and $\chi^e(J^k) = 0$, then

$$J^{k+1} = \{\omega^k_{t-\hat{\varsigma}(k)},\ \varsigma(k+1) = \min[\varsigma(k) + 1, \bar{\varsigma}],\ \nu(\varsigma(k+1))\}.$$

- Otherwise form $\omega^{k+1}$ by taking $\omega^{k+1}(J^k, j)$ for each $j$ at which $\chi(J^k, j) = 1$, and $\omega^{k+1}_e$ if $\chi^e(J^k) = 1$, and ordering the result (in the natural order); then

$$J^{k+1} = \{\omega^{k+1},\ 0,\ 0\}, \qquad L^{k+1} = (J^{k+1}, \omega^{k+1}).$$

4.3.5 Update 3: M(J^k).

First set $h^{k+1}(J^k) = h^k(J^k) + 1$. Next set

$$V^{k+1}_e(J^k) - V^k_e(J^k) = [h^k(J^k) + 10]^{-1}\Big[ V\big(J_e(\omega^{k+1}) - Z(J_e, J_e', m = 0),\ j(\omega^{k+1}_e, J_e(\omega^{k+1})),\ \omega^{k+1}_e \,\big|\, W^k\big) - V^k_e(J^k) \Big],$$

where if $\chi^k_e = 1$,

$$J_e(\omega^{k+1}) = J^{k+1},$$

and otherwise

$$J_e(\omega^{k+1}) = (\omega^{k+1} + I(\omega^{k+1}_e),\ 0,\ 0).$$
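A sketch of the location update in Update 2 above: if nothing triggers a meeting, the public state simply ages by one period; otherwise the counters reset and the newly ordered $\omega$ vector becomes the reference point. The names and the tuple representation of $J$ are ours.

```python
def next_public_info(J, new_omegas, meeting_triggered, varsigma_bar, nu_new):
    """J = (omega_at_meeting, varsigma, nu_history). Without exit, entry or a
    meeting call the state ages by one period (recall capped at varsigma_bar);
    otherwise J^{k+1} = (ordered new omega vector, 0, empty history)."""
    omega_at_meeting, varsigma, nu_history = J
    if not meeting_triggered:
        nu_history = (nu_history + (nu_new,))[-varsigma_bar:]
        return (omega_at_meeting, min(varsigma + 1, varsigma_bar), nu_history)
    return (tuple(sorted(new_omegas, reverse=True)), 0, ())

# No meeting: the counters age.  Meeting: reset around the revealed omega vector.
print(next_public_info(((5, 4), 2, (0, 1)), [6, 4], False, varsigma_bar=3, nu_new=0))
print(next_public_info(((5, 4), 2, (0, 1)), [6, 4], True, varsigma_bar=3, nu_new=0))
```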

4.3.6 Update 4: M(J^k, j, omega).

First we update $h^{k+1}(J^k_j) = h^k(J^k_j) + 1$. Next we have to update $W^k(\eta \mid J^k, j, \omega, m)$ for $\eta \in \{0,1\}$ and $m \in \{0,1\}$. The update can differ with

$$\chi^k_j,\quad m^k_j,\quad \eta^{k+1}_j,\quad \mathbf{1}\Big\{\sum_{i \ne j} m^k_i > 0\Big\},\quad \mathbf{1}\Big\{\sum_{i \ne j} [1 - \chi^k_i] > 0\Big\},\quad \text{and} \quad \chi^k_e.$$

Thus for each $j$ there are four updates, and the way each update is made could differ for each of $2^6$ possible outcomes. We show below, however, that they reduce to four cases, three of which can be calculated from a single formula. All updates will be of the form

$$W^{k+1}(\eta \mid J^k, j, \omega, m) - W^k(\eta \mid J^k, j, \omega, m) = [h^k(J^k_j) + 2]^{-1}\,\big[V^y(\cdot) - W^k(\eta \mid J^k, j, \omega, m)\big],$$

and what we have to do is provide the form of $V^y(\cdot)$.

Case 1: We evaluate a situation in which there is a meeting, i.e. either $m = 1$, or $\sum_{i \ne j} m^k_i \ge 1$, or $\chi^k_e = 1$, or $\sum_{i \ne j}[1 - \chi^k_i] \ne 0$. There are two possible $J(\cdot)$ in this case, one for $\chi^k_j = 1$ and one for $\chi^k_j = 0$, and the $V^y(\cdot)$ will depend on $J(\cdot)$.

If $\chi^k_j = 1$, then

$$J(\cdot) = \big(\omega^{k+1} - I(\omega^k_j + \eta^{k+1}_j - \nu^{k+1}) + I(\omega^k_j + \eta - \nu^{k+1}),\ 0,\ 0\big).$$

If $\chi^k_j = 0$, then

$$J(\cdot) = \big(\omega^{k+1} + I(\omega^k_j + \eta - \nu^{k+1}),\ 0,\ 0\big).$$
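Finally, the averaging step that closes Updates 3 and 4 is just a running mean with a slightly offset step size; a minimal sketch (our names), where V_target stands for the case-dependent $V^y(\cdot)$ described above.

```python
def update_running_value(W_old, V_target, h):
    """Stochastic-approximation step W^{k+1} = W^k + (h + 2)^{-1} * (V^y - W^k):
    with h the visit count, the estimate settles at a running average of the
    realized continuation values observed at this (J, j, omega) point."""
    return W_old + (V_target - W_old) / (h + 2)

W = 10.0
for h, v in enumerate([12.0, 9.0, 11.0, 10.5]):
    W = update_running_value(W, v, h)
print(W)  # settles near the average of the realized targets
```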


Volume 30, Issue 3. Monotone comparative statics with separable objective functions. Christian Ewerhart University of Zurich Volume 30, Issue 3 Monotone comparative statics with separable objective functions Christian Ewerhart University of Zurich Abstract The Milgrom-Shannon single crossing property is essential for monotone

More information

EconS Microeconomic Theory II Midterm Exam #2 - Answer Key

EconS Microeconomic Theory II Midterm Exam #2 - Answer Key EconS 50 - Microeconomic Theory II Midterm Exam # - Answer Key 1. Revenue comparison in two auction formats. Consider a sealed-bid auction with bidders. Every bidder i privately observes his valuation

More information

On Tacit versus Explicit Collusion

On Tacit versus Explicit Collusion On Tacit versus Explicit Collusion Yu Awaya y and Viay Krishna z Penn State University November 3, 04 Abstract Antitrust law makes a sharp distinction between tacit and explicit collusion whereas the theory

More information

"A Theory of Financing Constraints and Firm Dynamics"

A Theory of Financing Constraints and Firm Dynamics 1/21 "A Theory of Financing Constraints and Firm Dynamics" G.L. Clementi and H.A. Hopenhayn (QJE, 2006) Cesar E. Tamayo Econ612- Economics - Rutgers April 30, 2012 2/21 Program I Summary I Physical environment

More information

Bounded Rationality Lecture 4

Bounded Rationality Lecture 4 Bounded Rationality Lecture 4 Mark Dean Princeton University - Behavioral Economics The Story So Far... Introduced the concept of bounded rationality Described some behaviors that might want to explain

More information

EconS Advanced Microeconomics II Handout on Mechanism Design

EconS Advanced Microeconomics II Handout on Mechanism Design EconS 503 - Advanced Microeconomics II Handout on Mechanism Design 1. Public Good Provision Imagine that you and your colleagues want to buy a co ee machine for your o ce. Suppose that some of you may

More information

Spatial Competition and Collaboration Networks

Spatial Competition and Collaboration Networks Spatial Competition and Collaboration Networks Yasunori Okumura y Kanagawa University May, 009 Abstract In this paper, we discuss the formation of collaboration networks among rms that are located in a

More information

Market Power. Economics II: Microeconomics. December Aslanyan (VŠE) Oligopoly 12/09 1 / 39

Market Power. Economics II: Microeconomics. December Aslanyan (VŠE) Oligopoly 12/09 1 / 39 Market Power Economics II: Microeconomics VŠE Praha December 2009 Aslanyan (VŠE) Oligopoly 12/09 1 / 39 Microeconomics Consumers: Firms: People. Households. Monopoly. Oligopoly Now Perfect Competition.

More information

Experimentation and Observational Learning in a Market with Exit

Experimentation and Observational Learning in a Market with Exit ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffffff Discussion Papers Experimentation and Observational Learning in a Market with Exit Pauli Murto Helsinki School of Economics and HECER

More information

Experimentation, Patents, and Innovation

Experimentation, Patents, and Innovation Experimentation, Patents, and Innovation Daron Acemoglu y Kostas Bimpikis z Asuman Ozdaglar x October 2008. Abstract This paper studies a simple model of experimentation and innovation. Our analysis suggests

More information

EconS Microeconomic Theory II Homework #9 - Answer key

EconS Microeconomic Theory II Homework #9 - Answer key EconS 503 - Microeconomic Theory II Homework #9 - Answer key 1. WEAs with market power. Consider an exchange economy with two consumers, A and B, whose utility functions are u A (x A 1 ; x A 2 ) = x A

More information

Carrot and stick games

Carrot and stick games Bond University epublications@bond Bond Business School Publications Bond Business School 6-14-2001 Carrot and stick games Jeffrey J. Kline Bond University, jeffrey_kline@bond.edu.au Follow this and additional

More information

Lecture 7. Simple Dynamic Games

Lecture 7. Simple Dynamic Games Lecture 7. Simple Dynamic Games 1. Two-Stage Games of Complete and Perfect Information Two-Stages dynamic game with two players: player 1 chooses action a 1 from the set of his feasible actions A 1 player

More information

Uncertainty and Disagreement in Equilibrium Models

Uncertainty and Disagreement in Equilibrium Models Uncertainty and Disagreement in Equilibrium Models Nabil I. Al-Najjar & Northwestern University Eran Shmaya Tel Aviv University RUD, Warwick, June 2014 Forthcoming: Journal of Political Economy Motivation

More information

Answer Key: Problem Set 3

Answer Key: Problem Set 3 Answer Key: Problem Set Econ 409 018 Fall Question 1 a This is a standard monopoly problem; using MR = a 4Q, let MR = MC and solve: Q M = a c 4, P M = a + c, πm = (a c) 8 The Lerner index is then L M P

More information

Methodology for Analyzing Market Dynamics.

Methodology for Analyzing Market Dynamics. Methodology for Analyzing Market Dynamics. Adapted from three lectures given in 2014. The Cowles Lecture: N.A.Econometric Society, Minneapolis, June. Keynote Address: Society for Applied Dynamic Games,

More information

Barnali Gupta Miami University, Ohio, U.S.A. Abstract

Barnali Gupta Miami University, Ohio, U.S.A. Abstract Spatial Cournot competition in a circular city with transport cost differentials Barnali Gupta Miami University, Ohio, U.S.A. Abstract For an even number of firms with identical transport cost, spatial

More information

A Solution to the Problem of Externalities When Agents Are Well-Informed

A Solution to the Problem of Externalities When Agents Are Well-Informed A Solution to the Problem of Externalities When Agents Are Well-Informed Hal R. Varian. The American Economic Review, Vol. 84, No. 5 (Dec., 1994), pp. 1278-1293 Introduction There is a unilateral externality

More information

Final Exam (Solution) Economics 501b Microeconomic Theory

Final Exam (Solution) Economics 501b Microeconomic Theory Dirk Bergemann and Johannes Hoerner Department of Economics Yale Uniersity Final Exam (Solution) Economics 5b Microeconomic Theory May This is a closed-book exam. The exam lasts for 8 minutes. Please write

More information

DYNAMIC LIMIT PRICING: ONLINE APPENDIX Flavio Toxvaerd. March 8, 2018

DYNAMIC LIMIT PRICING: ONLINE APPENDIX Flavio Toxvaerd. March 8, 2018 DYNAMIC LIMIT PRICING: ONLINE APPENDIX Flavio Toxvaerd March 8, 2018 Abstract. This appendix o ers a detailed and self-contained analysis of the benchmark single-round version of the dynamic model presented

More information

Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis

Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis Simultaneous Choice Models: The Sandwich Approach to Nonparametric Analysis Natalia Lazzati y November 09, 2013 Abstract We study collective choice models from a revealed preference approach given limited

More information

Competitive Equilibrium and the Welfare Theorems

Competitive Equilibrium and the Welfare Theorems Competitive Equilibrium and the Welfare Theorems Craig Burnside Duke University September 2010 Craig Burnside (Duke University) Competitive Equilibrium September 2010 1 / 32 Competitive Equilibrium and

More information

Static Information Design

Static Information Design Static nformation Design Dirk Bergemann and Stephen Morris European Summer Symposium in Economic Theory, Gerzensee, July 2016 Mechanism Design and nformation Design Mechanism Design: Fix an economic environment

More information

Lecture 1. Evolution of Market Concentration

Lecture 1. Evolution of Market Concentration Lecture 1 Evolution of Market Concentration Take a look at : Doraszelski and Pakes, A Framework for Applied Dynamic Analysis in IO, Handbook of I.O. Chapter. (see link at syllabus). Matt Shum s notes are

More information

Externalities and PG. MWG- Chapter 11

Externalities and PG. MWG- Chapter 11 Externalities and PG MWG- Chapter 11 Simple Bilateral Externality When external e ects are present, CE are not PO. Assume: 1 Two consumers i = 1, 2 2 The actions of these consumers do not a ect prices

More information

EconS Sequential Competition

EconS Sequential Competition EconS 425 - Sequential Competition Eric Dunaway Washington State University eric.dunaway@wsu.edu Industrial Organization Eric Dunaway (WSU) EconS 425 Industrial Organization 1 / 47 A Warmup 1 x i x j (x

More information

Dynamic Stochastic Games with Sequential State-to-State Transitions

Dynamic Stochastic Games with Sequential State-to-State Transitions Dynamic Stochastic Games with Sequential State-to-State Transitions Ulrich Doraszelski Harvard University and CEPR Kenneth L. Judd Hoover Institution and NBER May 2007 Preliminary and incomplete. Introduction

More information

EconS Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE)

EconS Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE) EconS 3 - Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE). Based on MWG 9.B.3 Consider the three-player nite game of perfect information depicted in gure. L R Player 3 l r a b

More information

Banks, depositors and liquidity shocks: long term vs short term interest rates in a model of adverse selection

Banks, depositors and liquidity shocks: long term vs short term interest rates in a model of adverse selection Banks, depositors and liquidity shocks: long term vs short term interest rates in a model of adverse selection Geethanjali Selvaretnam Abstract This model takes into consideration the fact that depositors

More information

Investment and R&D in a Dynamic Equilibrium with Incomplete Information

Investment and R&D in a Dynamic Equilibrium with Incomplete Information Investment and R&D in a Dynamic Equilibrium with Incomplete Information Carlos Daniel Santos y March 7, 27 Abstract In this paper I study industry behavior when rms can invest to accumulate both knowledge

More information

Low-Quality Leadership in a Vertically Differentiated Duopoly with Cournot Competition

Low-Quality Leadership in a Vertically Differentiated Duopoly with Cournot Competition Low-Quality Leadership in a Vertically Differentiated Duopoly with Cournot Competition Luca Lambertini Alessandro Tampieri Quaderni - Working Paper DSE N 750 Low-Quality Leadership in a Vertically Di erentiated

More information

INTERNAL ORGANIZATION OF FIRMS AND CARTEL FORMATION

INTERNAL ORGANIZATION OF FIRMS AND CARTEL FORMATION INTERNAL ORGANIZATION OF FIRMS AND CARTEL FORMATION by Jerome Kuipers and Norma Olaizola 2004 Working Paper Series: IL. 15/04 Departamento de Fundamentos del Análisis Económico I Ekonomi Analisiaren Oinarriak

More information

The Intuitive and Divinity Criterion:

The Intuitive and Divinity Criterion: The Intuitive and Divinity Criterion: Interpretation and Step-by-Step Examples Ana Espínola-Arredondo School of Economic Sciences Washington State University Pullman, WA 99164 Félix Muñoz-García y School

More information

Managerial delegation in multimarket oligopoly

Managerial delegation in multimarket oligopoly Managerial delegation in multimarket oligopoly Arup Bose Barnali Gupta Statistics and Mathematics Unit Department of Economics Indian Statistical Institute Miami University, Ohio INDIA USA bosearu@gmail.com

More information

EconS Oligopoly - Part 2

EconS Oligopoly - Part 2 EconS 305 - Oligopoly - Part 2 Eric Dunaway Washington State University eric.dunaway@wsu.edu November 29, 2015 Eric Dunaway (WSU) EconS 305 - Lecture 32 November 29, 2015 1 / 28 Introduction Last time,

More information

ECON2285: Mathematical Economics

ECON2285: Mathematical Economics ECON2285: Mathematical Economics Yulei Luo Economics, HKU September 17, 2018 Luo, Y. (Economics, HKU) ME September 17, 2018 1 / 46 Static Optimization and Extreme Values In this topic, we will study goal

More information

Discussion Paper #1541

Discussion Paper #1541 CMS-EMS Center for Mathematical Studies in Economics And Management Science Discussion Paper #1541 Common Agency with Informed Principals: Menus and Signals Simone Galperti Northwestern University June

More information

DEPARTMENT OF ECONOMICS PROFITABILITY OF HORIZONTAL MERGERS IN TRIGGER STRATEGY GAME. Berardino Cesiy University of Rome Tor Vergata

DEPARTMENT OF ECONOMICS PROFITABILITY OF HORIZONTAL MERGERS IN TRIGGER STRATEGY GAME. Berardino Cesiy University of Rome Tor Vergata DEPARTMENT OF ECONOMICS PROFITABILITY OF HORIZONTAL MERGERS IN TRIGGER STRATEGY GAME Berardino Cesiy University of Rome Tor Vergata Working Paper No. 06/4 January 2006 Pro tability of Horizontal Mergers

More information

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry Dynamic and Stochastic Model of Industry Class: Work through Pakes, Ostrovsky, and Berry Reading: Recommend working through Doraszelski and Pakes handbook chapter Recall model from last class Deterministic

More information

No Information Sharing in Oligopoly: The Case of Price Competition with Cost Uncertainty

No Information Sharing in Oligopoly: The Case of Price Competition with Cost Uncertainty No Information Sharing in Oligopoly: The Case of Price Competition with Cost Uncertainty Stephan O. Hornig and Manfred Stadler* Abstract We show that concealing cost information is a dominant strategy

More information

Vickrey-Clarke-Groves Mechanisms

Vickrey-Clarke-Groves Mechanisms Vickrey-Clarke-Groves Mechanisms Jonathan Levin 1 Economics 285 Market Design Winter 2009 1 These slides are based on Paul Milgrom s. onathan Levin VCG Mechanisms Winter 2009 1 / 23 Motivation We consider

More information

Cartel Stability in a Dynamic Oligopoly with Sticky Prices

Cartel Stability in a Dynamic Oligopoly with Sticky Prices Cartel Stability in a Dynamic Oligopoly with Sticky Prices Hassan Benchekroun and Licun Xue y McGill University and CIREQ, Montreal This version: September 2005 Abstract We study the stability of cartels

More information

Lecture #11: Introduction to the New Empirical Industrial Organization (NEIO) -

Lecture #11: Introduction to the New Empirical Industrial Organization (NEIO) - Lecture #11: Introduction to the New Empirical Industrial Organization (NEIO) - What is the old empirical IO? The old empirical IO refers to studies that tried to draw inferences about the relationship

More information

Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance

Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance DORON LEVIT and NADYA MALENKO The Internet Appendix has three sections. Section I contains supplemental materials

More information

Informed Principal in Private-Value Environments

Informed Principal in Private-Value Environments Informed Principal in Private-Value Environments Tymofiy Mylovanov Thomas Tröger University of Bonn June 21, 2008 1/28 Motivation 2/28 Motivation In most applications of mechanism design, the proposer

More information

Oligopoly Theory. This might be revision in parts, but (if so) it is good stu to be reminded of...

Oligopoly Theory. This might be revision in parts, but (if so) it is good stu to be reminded of... This might be revision in parts, but (if so) it is good stu to be reminded of... John Asker Econ 170 Industrial Organization January 23, 2017 1 / 1 We will cover the following topics: with Sequential Moves

More information

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics Ph. D. Comprehensive Examination: Macroeconomics Fall, 202 Answer Key to Section 2 Questions Section. (Suggested Time: 45 Minutes) For 3 of

More information

WORKING PAPER SERIES

WORKING PAPER SERIES DEPARTMENT OF ECONOMICS UNIVERSITY OF MILAN - BICOCCA WORKING PAPER SERIES EQUILIBRIUM PRINCIPAL-AGENT CONTRACTS Competition and R&D Incentives Federico Etro, Michela Cella No. 180 March 2010 Dipartimento

More information

Minimum Wages and Excessive E ort Supply

Minimum Wages and Excessive E ort Supply Minimum Wages and Excessive E ort Supply Matthias Kräkel y Anja Schöttner z Abstract It is well-known that, in static models, minimum wages generate positive worker rents and, consequently, ine ciently

More information

Bertrand Model of Price Competition. Advanced Microeconomic Theory 1

Bertrand Model of Price Competition. Advanced Microeconomic Theory 1 Bertrand Model of Price Competition Advanced Microeconomic Theory 1 ҧ Bertrand Model of Price Competition Consider: An industry with two firms, 1 and 2, selling a homogeneous product Firms face market

More information

Moral Hazard: Hidden Action

Moral Hazard: Hidden Action Moral Hazard: Hidden Action Part of these Notes were taken (almost literally) from Rasmusen, 2007 UIB Course 2013-14 (UIB) MH-Hidden Actions Course 2013-14 1 / 29 A Principal-agent Model. The Production

More information

OPTIMAL TWO-PART TARIFF LICENSING CONTRACTS WITH DIFFERENTIATED GOODS AND ENDOGENOUS R&D* Ramón Faulí-Oller and Joel Sandonís**

OPTIMAL TWO-PART TARIFF LICENSING CONTRACTS WITH DIFFERENTIATED GOODS AND ENDOGENOUS R&D* Ramón Faulí-Oller and Joel Sandonís** OPTIMAL TWO-PART TARIFF LICENSING CONTRACTS WITH DIFFERENTIATED GOODS AND ENDOGENOUS R&D* Ramón Faulí-Oller and Joel Sandonís** WP-AD 2008-12 Corresponding author: R. Fauli-Oller Universidad de Alicante,

More information

Columbia University. Department of Economics Discussion Paper Series. Collusion with Persistent Cost Shocks. Susan Athey Kyle Bagwell

Columbia University. Department of Economics Discussion Paper Series. Collusion with Persistent Cost Shocks. Susan Athey Kyle Bagwell Columbia University Department of Economics Discussion Paper Series Collusion with Persistent Cost Shocks Susan Athey Kyle Bagwell Discussion Paper No.: 0405-07 Department of Economics Columbia University

More information

SELECTION OF MARKOV EQUILIBRIUM IN A DYNAMIC OLIGOPOLY WITH PRODUCTION TO ORDER. Milan Horniaček 1

SELECTION OF MARKOV EQUILIBRIUM IN A DYNAMIC OLIGOPOLY WITH PRODUCTION TO ORDER. Milan Horniaček 1 SELECTION OF MARKOV EQUILIBRIUM IN A DYNAMIC OLIGOPOLY WITH PRODUCTION TO ORDER Milan Horniaček 1 CERGE-EI, Charles University and Academy of Sciences of Czech Republic, Prague We use the requirement of

More information

EconS Nash Equilibrium in Games with Continuous Action Spaces.

EconS Nash Equilibrium in Games with Continuous Action Spaces. EconS 424 - Nash Equilibrium in Games with Continuous Action Spaces. Félix Muñoz-García Washington State University fmunoz@wsu.edu February 7, 2014 Félix Muñoz-García (WSU) EconS 424 - Recitation 3 February

More information

Game Theory and Economics of Contracts Lecture 5 Static Single-agent Moral Hazard Model

Game Theory and Economics of Contracts Lecture 5 Static Single-agent Moral Hazard Model Game Theory and Economics of Contracts Lecture 5 Static Single-agent Moral Hazard Model Yu (Larry) Chen School of Economics, Nanjing University Fall 2015 Principal-Agent Relationship Principal-agent relationship

More information

Basics of Game Theory

Basics of Game Theory Basics of Game Theory Giacomo Bacci and Luca Sanguinetti Department of Information Engineering University of Pisa, Pisa, Italy {giacomo.bacci,luca.sanguinetti}@iet.unipi.it April - May, 2010 G. Bacci and

More information

Bayesian Nash equilibrium

Bayesian Nash equilibrium Bayesian Nash equilibrium Felix Munoz-Garcia EconS 503 - Washington State University So far we assumed that all players knew all the relevant details in a game. Hence, we analyzed complete-information

More information

Deceptive Advertising with Rational Buyers

Deceptive Advertising with Rational Buyers Deceptive Advertising with Rational Buyers September 6, 016 ONLINE APPENDIX In this Appendix we present in full additional results and extensions which are only mentioned in the paper. In the exposition

More information

Cross-Licensing and Competition

Cross-Licensing and Competition Cross-Licensing and Competition Doh-Shin Jeon and Yassine Lefouili y June 7, 2013 Very preliminary and incomplete - Please do not circulate Abstract We study bilateral cross-licensing agreements among

More information

Solutions to Problem Set 4 Macro II (14.452)

Solutions to Problem Set 4 Macro II (14.452) Solutions to Problem Set 4 Macro II (14.452) Francisco A. Gallego 05/11 1 Money as a Factor of Production (Dornbusch and Frenkel, 1973) The shortcut used by Dornbusch and Frenkel to introduce money in

More information

Game Theory. Bargaining Theory. ordi Massó. International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB)

Game Theory. Bargaining Theory. ordi Massó. International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB) Game Theory Bargaining Theory J International Doctorate in Economic Analysis (IDEA) Universitat Autònoma de Barcelona (UAB) (International Game Theory: Doctorate Bargainingin Theory Economic Analysis (IDEA)

More information

1 Games in Normal Form (Strategic Form)

1 Games in Normal Form (Strategic Form) Games in Normal Form (Strategic Form) A Game in Normal (strategic) Form consists of three components. A set of players. For each player, a set of strategies (called actions in textbook). The interpretation

More information

Learning and Information Aggregation in an Exit Game

Learning and Information Aggregation in an Exit Game Learning and Information Aggregation in an Exit Game Pauli Murto y and Juuso Välimäki z This Version: April 2010 Abstract We analyze information aggregation in a stopping game with uncertain payo s that

More information

Asymmetric Information and Bank Runs

Asymmetric Information and Bank Runs Asymmetric Information and Bank uns Chao Gu Cornell University Draft, March, 2006 Abstract This paper extends Peck and Shell s (2003) bank run model to the environment in which the sunspot coordination

More information

Dynamic Merger Review

Dynamic Merger Review Dynamic Merger Review Volker Nocke University of Oxford and CEPR Michael D. Whinston Northwestern University and NBER PRELIMINARY AND INCOMPLETE January 11, 2008 Abstract We analyze the optimal dynamic

More information

Layo Costs and E ciency with Asymmetric Information

Layo Costs and E ciency with Asymmetric Information Layo Costs and E ciency with Asymmetric Information Alain Delacroix (UQAM) and Etienne Wasmer (Sciences-Po) September 4, 2009 Abstract Wage determination under asymmetric information generates ine ciencies

More information

A Centralized or a Decentralized Labor Market?

A Centralized or a Decentralized Labor Market? ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffff Discussion Papers A Centralized or a Decentralized Labor Market? Juha Virrankoski Aalto University and HECER Discussion Paper No. 42 November

More information

Collusion with Persistent Cost Shocks

Collusion with Persistent Cost Shocks Collusion with Persistent Cost Shocks Susan Athey and Kyle Bagwell First Draft: March, 2003; This Draft: July, 2006 Abstract We consider a dynamic Bertrand game, in which prices are publicly observed and

More information

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0.

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0. Chapter 0 Discrete Time Dynamic Programming 0.1 The Finite Horizon Case Time is discrete and indexed by t =0; 1;:::;T,whereT

More information

Common-Value All-Pay Auctions with Asymmetric Information

Common-Value All-Pay Auctions with Asymmetric Information Common-Value All-Pay Auctions with Asymmetric Information Ezra Einy, Ori Haimanko, Ram Orzach, Aner Sela July 14, 014 Abstract We study two-player common-value all-pay auctions in which the players have

More information

Lecture Notes on Game Theory

Lecture Notes on Game Theory Lecture Notes on Game Theory Levent Koçkesen Strategic Form Games In this part we will analyze games in which the players choose their actions simultaneously (or without the knowledge of other players

More information

Notes on the Thomas and Worrall paper Econ 8801

Notes on the Thomas and Worrall paper Econ 8801 Notes on the Thomas and Worrall paper Econ 880 Larry E. Jones Introduction The basic reference for these notes is: Thomas, J. and T. Worrall (990): Income Fluctuation and Asymmetric Information: An Example

More information

AMBIGUITY AND SOCIAL INTERACTION

AMBIGUITY AND SOCIAL INTERACTION AMBIGUITY AND SOCIAL INTERACTION Jürgen Eichberger Alfred Weber Institut, Universität Heidelberg. David Kelsey Department of Economics, University of Exeter. Burkhard C. Schipper Department of Economics,

More information