Observational Learning with Position Uncertainty
Ignacio Monzón and Michael Rapp

December 2009

Abstract

Observational learning is typically examined when agents have precise information about their position in the sequence of play. We present a model in which agents are uncertain about their positions. Agents are allowed to have arbitrary ex-ante beliefs about their positions: they may observe their position perfectly, imperfectly, or not at all. Agents sample the decisions of past individuals and receive a private signal about the state of the world. We show that social learning is robust to position uncertainty. Under any sampling rule satisfying a stationarity assumption, learning is complete if signal strength is unbounded. In cases with bounded signal strength, we show that agents achieve what we define as constrained efficient learning: all individuals do at least as well as the most informed agent would do in isolation.

JEL Classification: C72, D83, D85

We are grateful to Bill Sandholm for his advice, suggestions and encouragement. We also thank Dan Quint, Ricardo Serrano-Padial, Marzena Rostek, Marek Weretka, Ben Cowan, María Eugenia Garibotti, Danqing Hu, Sang Yoon Lee, Fernando Louge and David Rivers for valuable comments. All remaining errors are ours.

Collegio Carlo Alberto, Via Real Collegio 30, 10024 Moncalieri (TO), Italy. ignacio@carloalberto.org. Department of Economics, University of Wisconsin–Madison, 1180 Observatory Drive, Madison, WI 53706, USA. mrapp@wisc.edu.
1. Introduction

In a wide range of economic situations, agents possess private information regarding some shared uncertainty. These include choosing between competing technologies, deciding whether to invest in a new business, deciding whether to eat at a new restaurant in town, or selecting which new novel to read. If actions are observable but private information is not, one agent's behavior provides useful information to others. Consider the example of choosing between two MP3 players recently released to the market. One is inherently better for all individuals, but agents do not know which.1 Individuals form a personal impression about the quality of each device based on information they receive privately. They also observe the choices of others, which partly reveal the private information of those agents. An important characteristic of this environment has been overlooked: when choosing between competing alternatives, an agent may not know how many individuals have faced the same decision before him. Moreover, when an agent observes someone else's decision, he also may not know when that decision was made. For example, even when individuals know when the competing MP3 players were released, they may be unaware of exactly how many individuals have already chosen between the devices. In addition, when an agent observes someone else listening to an MP3 player on the bus, he does not know whether the person using the device bought it that morning or on the day it was released. We study settings with position uncertainty, and ask whether individuals eventually learn and choose the superior technology by observing the behavior of others. The seminal contributions to the social learning literature are Banerjee [1992] and Bikhchandani, Hirshleifer, and Welch [1992]. In these papers, a set of rational agents choose sequentially between two technologies. The payoff from this decision is identical for all, but unknown. Agents know their position in the adoption sequence.
In each period, one agent receives a signal and observes what all agents before him have chosen. Afterwards, this agent makes a once-and-for-all decision between the two technologies. Given that each agent knows that the signal he has received is no better than the signal other individuals have received, agents eventually follow the behavior of others and ignore their own signals. Consequently, Banerjee and Bikhchandani et al.

1 We study situations where network externalities do not play a salient role, including choosing between news sites, early search engines, computer software, MP3 players or computer brands. Our focus is on informational externalities. We are currently working on a setting with network externalities, for which we have partial results.
show that the optimal behavior of rational agents can prevent social learning. Smith and Sørensen [2000] highlight that there is no information aggregation in the models of Banerjee [1992] and Bikhchandani et al. [1992] because these models assume signals are of bounded strength. In Smith and Sørensen [2000], agents also know their positions in the adoption sequence and observe the behavior of all preceding agents. In contrast to Banerjee [1992] and Bikhchandani et al. [1992], agents receive signals of unbounded strength: signals can get arbitrarily close to being perfectly informative. In such a context, the conventional wisdom represented by a long sequence of agents making the same decision can always be overturned by the action of an agent with a strong enough signal. As a result, individuals never fully disregard their own information. In fact, Smith and Sørensen [2000] show that if signals are of unbounded strength, social learning occurs. We take as our starting point the basic setup in the literature on social learning and introduce position uncertainty. In our setting, agents, exogenously ordered in a sequence, choose between two competing technologies. Agents receive a noisy private signal about the quality of each technology. We study cases of both bounded and unbounded signal strength. We depart from the literature by assuming that agents may not know (1) their position in the sequence or (2) the position of those they observe. We refer to both phenomena as position uncertainty. We also depart from Banerjee [1992], Bikhchandani et al. [1992] and Smith and Sørensen [2000] in that agents observe the behavior of only a sample of preceding agents. To understand the importance of position uncertainty, note that if an agent knows his position in the sequence, he can condition his behavior on three pieces of information: his private signal, his sample and his position in the sequence. In fact, he may weigh his sample and signal differently depending on his position.
The typical story of social learning is one of learning over time. Early agents place a relatively larger weight on the signal than on the sample. As time progresses, information is aggregated and the behavior of agents becomes a better indicator of the true state of the world. Later agents place a relatively larger weight on the sample, which, in turn, can lead to precise information aggregation. In contrast, if agents have no information about their positions, such heterogeneity in play is impossible. Instead, agents place the same weight on the sample regardless of when they play. Heterogeneous play based on positions cannot drive social learning in this setting. Position uncertainty leads to an additional difficulty. Agents A and B do not know if A precedes B or vice versa. This adds a strategic component to the game: A's strategy affects B's payoffs inasmuch as B's strategy affects A's payoffs. Thus, the game cannot be solved recursively and, in fact, there are cases with multiple equilibria.
This paper presents a flexible framework for studying observational learning under position uncertainty. Our framework is general in two ways. First, agents are placed in the adoption sequence according to an arbitrary distribution. For example, some agents may, ex-ante, be more likely to have an early position in the sequence, while others are more likely to be towards the end. Second, our setup allows for general specifications of the information agents obtain about their position, once it has been realized. Agents may observe their position, receive no information about it, or have imperfect information about it. For instance, agents may know if they are early or late adopters, even if they do not know their exact positions. We focus on stationary sampling, which allows for a rich class of natural sampling rules. Sampling rules are stationary if the probability that an agent samples a particular set of individuals is only a function of the distance between the agent and those he observes (that is, how far back in time those decisions were made). This assumption implies that no individual plays a decisive role in everyone else's samples. We say social learning occurs if the probability that a randomly selected agent chooses the superior technology approaches one as the number of agents grows large. We find that learning is robust to the introduction of position uncertainty. We specify weak requirements on the information of agents: they observe a private signal and at least an unordered sample of past play. Agents need not have any information on their position in the sequence. Under unbounded signal strength, social learning occurs. This is due to the fact that as the number of agents grows large, individuals rely less on the signal. In cases with bounded signal strength, we show agents achieve what we define as constrained efficient learning: all individuals do at least as well as the most informed agent would do in isolation.
In the present setting, social learning results from the combination of stationary sampling and a modified improvement principle. First introduced by Banerjee and Fudenberg [2004], the improvement principle states that since agents can copy the decisions of others, they must do at least as well, in expectation, as those they observe. In addition, when an agent receives a very strong signal and follows it, he must do better than those he observes. As long as the agents observed choose the inferior technology with positive probability, the improvement is bounded away from zero. In Banerjee and Fudenberg's model, agents know their positions and observe others from the preceding period. If learning did not occur, this would mean that agents choose the inferior technology with positive probability in every period. Thus, there would be an improvement between any two consecutive periods. This would lead to an infinite sequence of improvements,
which cannot occur, since utility is bounded. In that way, Banerjee and Fudenberg show social learning must occur in their setting. Acemoglu, Dahleh, Lobel, and Ozdaglar [2008] and Smith and Sørensen [2008] use similar arguments to show social learning in their models. We develop an ex-ante improvement principle, which allows for position uncertainty, and so places weaker requirements on the information that agents have. However, this ex-ante improvement principle does not guarantee learning by itself. Under position uncertainty, the improvement upon those observed is only true ex-ante (i.e., before the positions are realized). Conditional on his position, an agent may actually do worse, on average, than those he observes.3 Stationary sampling rules have the following implication: as the number of agents grows large, all agents become equally likely to be sampled.4 This leads to a useful accounting identity. Note that the ex-ante (expected) utility of an agent is the average over the utilities he would get in each possible position. As all agents become equally likely to be sampled, the expected utility of an observed agent approaches the average over all possible positions. Thus, for all stationary sampling rules, the difference between the ex-ante utility and the expected utility of those observed must vanish as the number of agents grows large. To summarize, our ex-ante improvement principle states that the difference between the ex-ante utility and the expected utility of an observed agent only goes to zero if, in the limit, agents cannot improve upon those observed. This, combined with stationary sampling rules, implies there is social learning if signal strength is unbounded and constrained efficient learning if signal strength is bounded. The result of constrained efficient learning is novel in the literature and is useful for two reasons. First, it describes a lower bound on information aggregation for all information structures.
For any signal strength, agents do, in expectation, at least as well as if they were provided with the strongest signals available. Second, it highlights the continuity of social learning, or perfect information aggregation. Social learning becomes a limit result from constrained efficient learning: as we relax the bounds on the signal strength, the lower bound on information aggregation approaches perfect information aggregation. Before introducing the formal model, we describe the related literature on observational learning. Most of the literature has focused on cases in which agents know their own positions, and sample from past behavior. The aforementioned Banerjee and Fudenberg [2004], Çelen and Kariv [2004], Acemoglu, Dahleh, Lobel, and Ozdaglar [2008] and Smith and Sørensen [2008] are among them. Acemoglu et al. [2008] present a model of social learning with stochastic sampling. In their model, agents know both their own position and the position of sampled individuals. Under mild conditions on sampling, social learning occurs.5 Smith and Sørensen [2008] is the only paper to allow for some uncertainty about positions. In their model, agents know their own position but do not know the positions of the individuals sampled. Since agents know their own positions, an improvement principle and a mild assumption on sampling ensure that social learning occurs. In the next section we explain the model and discuss the implications of stationary sampling. In Section 3, we present our results on social learning in the case that agents are ex-ante symmetric and play a symmetric equilibrium. Section 4 generalizes these findings to the asymmetric cases. We also extend our results to a non-stationary sampling rule: geometric sampling. Section 5 concludes.

3 To see this, consider the following example. There is a large number of agents in a sequence. Everyone knows their position exactly except for the agents in the second and last positions, who have an equal chance of being in each of these positions. If each agent observes the decision of the agent that preceded him, then, by the standard improvement principle, the penultimate agent in the sequence makes the correct choice with a probability approaching one. The agent who plays last, on the other hand, is not sure that he is playing last. He does not know if he is observing the penultimate agent, who is almost always right, or the first agent, who often makes mistakes. As a result, the agent who plays last relies on his signal too often, causing him to make mistakes more often than the agent he observes.

4 The following example illustrates this point. Let all agents be equally likely to be placed in any position in a finite sequence and let sampling follow a simple stationary rule: each agent observes the behavior of the preceding agent. An agent does not know his own position and wonders about the position of the individual he observes. Since the agent is equally likely to be in any position, the individual observed is also equally likely to be in any position (except for the last position in the sequence).

2. Model

There are T players, indexed by i. Agents are exogenously ordered as follows. Let p : {1, ..., T} → {1, ..., T} be a permutation and let P be the set of all possible permutations. Then, P is a random variable with realizations p ∈ P. The random variable P(i) specifies the period in which player i is asked to play. We assume that every permutation has equal probability. In Section 4.1 we generalize the model to allow for arbitrary distributions over permutations and show that our results also hold in that case.
Agents know the length T of the sequence but may not know their position in it. We are interested in settings where T is large. Therefore, we study the behavior of agents as T approaches infinity.

5 To see why position uncertainty matters in a setup like the one described in Acemoglu et al. [2008], consider the following example. Each agent observes the very first agent and the agent before him. However, they do not know which is which. Since the first agent may choose the inferior technology, his incorrect choice may carry over. This happens because individuals cannot identify who is the first agent in their sample. If agents knew the position of those observed, social learning would occur.
All agents choose one of two technologies: a ∈ A = {0, 1}. There are two states of the world: θ ∈ Θ = {0, 1}. The timing of the game is as follows. First, θ ∈ Θ and p ∈ P are chosen. The true state of the world and the agents' order in the sequence are not directly revealed to agents. Instead, each individual i receives a noisy signal Z_{P(i)} about the true state of the world, a second signal S_{P(i)} that may include information about his position, and a sample ξ_{P(i)} of the decisions of agents before him. With these three sources of information, each agent decides between technologies 0 and 1, collects payoffs, and dies. Thus, the decision of each individual is once-and-for-all.

2.1 Payoffs

We study situations with no payoff externalities. Let u(a, θ) be the payoff from choosing technology a when the state of the world is θ. We assume that the optimal technology depends on the state of the world; that is, u(0, 0) > u(1, 0) and u(1, 1) > u(0, 1). We show in Appendix B.1 that under these assumptions, any model is equivalent to a case with Pr(θ = 1) = Pr(θ = 0) = 1/2 and payoffs as given in Table 1, for some λ > 0.

                        True State (θ)
    Technology (a)     θ = 0     θ = 1
        a = 0            λ         0
        a = 1            0         1

            Table 1: Payoff Function

2.2 Minimum Information Structure

We describe the information available to agents in two steps. First, we present some minimum restrictions on the private signal Z and the sample ξ that guarantee social learning. Next, we describe the additional signal S, which includes other information about the game that agents may have and that does not disrupt learning. In order to achieve social learning, agents must be able both to learn from others through the sample and to add new idiosyncratic information through their private signal. The requirements we impose on the sample guarantee that agents can identify the choice of a randomly selected agent from their sample and therefore do at least as well as those observed.
2.2.1 Private Signals about the State of the World

Agent i receives a private signal Z_{P(i)}, with realizations z ∈ Z. Conditional on the true state of the world, signals are i.i.d. across individuals and distributed according to µ_1 if θ = 1 or µ_0 if θ = 0. We assume µ_0 and µ_1 are mutually absolutely continuous. Then, no perfectly-revealing signals occur with positive probability, and the following likelihood ratio (Radon–Nikodym derivative) exists:

    l(z) ≡ (dµ_1 / dµ_0)(z)

An agent's behavior depends on the signal Z only through the likelihood ratio l. Given ξ and S, individuals that choose technology 1 are those that receive a likelihood ratio greater than some threshold. For this reason, it is of special interest to define a distribution function G_θ for this likelihood ratio: G_θ(l) ≡ Pr(l(Z) ≤ l | θ). We define signal strength as follows.

DEFINITION 1. UNBOUNDED SIGNAL STRENGTH. Signal strength is unbounded if for all likelihood ratios l ∈ (0, ∞), and for θ ∈ {0, 1}, 0 < G_θ(l) < 1.

Let supp(G) be the support of both G_0 and G_1. If Definition 1 does not hold, we assume that the convex hull of supp(G) is given by co(supp(G)) = [l̲, l̄], with both l̲ > 0 and l̄ < ∞, and we call this the case of bounded signal strength. We study this case in Section 3.

2.2.2 The Sample

Let a_t ∈ A be the behavior of the agent playing in period t. The history of past behavior at period t is defined by h_t = (..., a_{-1}, a_0, a_1, a_2, ..., a_{t-1}). Actions a_t for t ≤ 0 are not chosen strategically but are instead specified exogenously by an arbitrary distribution f(h_1 | θ) = Pr(H_1 = h_1 | θ) over a set H_1 of initial histories.6 For example, the sample of the first agent in the sequence is always exogenously specified by this distribution. Conditional on θ, H_1 is independent of P and of S_t for all t. The results we present in this paper do not place any other restrictions on the distribution of the actions of early agents.
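As a concrete illustration of the signal structure above, consider a hypothetical Gaussian family: signals distributed N(θ, 1). The Gaussian assumption is ours, not the paper's; it is simply a standard example in which the likelihood ratio l(z) = (dµ_1/dµ_0)(z) has a closed form and signal strength is unbounded in the sense of Definition 1.

```python
import math

# Hypothetical example: z ~ N(theta, 1) with theta in {0, 1}.
# The likelihood ratio dmu_1/dmu_0 at z is exp(-(z-1)^2/2 + z^2/2) = exp(z - 1/2).
def likelihood_ratio(z: float) -> float:
    """l(z) for unit-variance Gaussian signals with mean theta."""
    return math.exp(z - 0.5)

# As z ranges over the real line, l(z) ranges over all of (0, inf), so for
# every finite threshold l > 0 both G_0(l) and G_1(l) lie strictly inside
# (0, 1): this family has unbounded signal strength. Truncating z to a
# bounded interval [-c, c] would instead confine l(z) to a compact interval
# [l_low, l_high], the bounded-strength case.
assert abs(likelihood_ratio(0.5) - 1.0) < 1e-12   # z = 1/2 is uninformative
assert likelihood_ratio(3.0) > 1.0                # high z favors theta = 1
assert likelihood_ratio(-2.0) < 1.0               # low z favors theta = 0
```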
Let H_t be the

6 Restaurants, hotels, clubs, and other establishments change hands occasionally, but agents are not always aware of this. At the same time, changes in characteristics of the options may imply significant variations in consumers' utility but may not always be observed. Consequently, the arbitrary distribution over initial histories may result from agents t ≤ 0 playing the same game when payoffs were different.
(random) history at time t, with realizations h_t ∈ H_t. Agents observe the actions of a sample of individuals from previous periods. The random set O_t, which takes realizations o_t ∈ O_t, lists the positions of the agents observed by the individual in position t. For example, if the individual in position t = 5 observes the actions of individuals in periods 2 and 3, then o_5 = {2, 3}. We place the following restrictions on O_t. First, it should be nonempty and independent of θ and P. Second, O_t should be independent of its concurrent signal Z_t. Lastly, the set of sampled individuals, O_t, must be independent of the actual decisions of those individuals. To accomplish this, we assume that O_t is independent of H_1, independent of Z_τ and S_τ for all τ < t, and independent of O_τ for all τ ≠ t. We also assume sampling to be stationary; we describe this assumption later. At a minimum, we assume that each agent observes an unordered sample of past play, fully determined by H_t and O_t. Formally, the sample ξ : O_t × H_t → Ξ = N × N is defined by

    ξ(o_t, h_t) ≡ ( |o_t|, Σ_{τ ∈ o_t} a_τ ),

where |o_t| is the number of elements in o_t. Therefore, ξ(o_t, h_t) specifies the sample size and the number of observed agents that chose technology 1.7 Agents know the sample size and how many agents in the sample chose each technology. Choosing technology 1 with a probability equal to the proportion of agents choosing technology 1 in the sample is equivalent to picking an agent at random and following his behavior. Such behavior leads, in expectation, to a utility equal to the expected utility of those observed. In other words, this minimum requirement on the information in the sample guarantees that agents can mimic those observed and obtain at least their average expected utility, in expectation. It is worth noting, however, that agents may have more information about the positions of observed agents. This information is included in the second signal S, as explained in the next subsection.
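The mimicking argument above can be sketched in a few lines; the function name and the toy sample below are illustrative only. Given the unordered sample ξ = (n, k), choosing technology 1 with probability k/n is outcome-equivalent to drawing one sampled agent uniformly at random and copying his action:

```python
import random

def copy_random_sampled_agent(sample: tuple[int, int], rng: random.Random) -> int:
    """Given the unordered sample xi = (n, k) -- n agents observed, k of
    whom chose technology 1 -- choose technology 1 with probability k/n.
    This mimics a uniformly drawn member of the sample and thus attains,
    in expectation, the average utility of those observed."""
    n, k = sample
    return 1 if rng.random() < k / n else 0

rng = random.Random(0)
# The example from the text: two agents observed, one chose technology 1.
draws = [copy_random_sampled_agent((2, 1), rng) for _ in range(100_000)]
freq = sum(draws) / len(draws)
assert abs(freq - 0.5) < 0.02   # empirical frequency close to k/n = 1/2
```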
We impose restrictions on the stochastic set of observed agents O_t. To do so, we first define the expected weight of agent τ in agent t's sample by

    w_{t,τ} ≡ E[ 1{τ ∈ O_t} / |O_t| ].

These weights play a role in the present model because if agents indeed pick a random agent from the sample and follow his strategy, the ex-ante probability that agent t picks agent τ is given by w_{t,τ}. Now, different distributions for O_t induce different weights w_{t,τ}. We restrict the sampling rule by imposing restrictions directly

7 For example, if t = 5, h_5 = (..., a_1 = 1, a_2 = 1, a_3 = 0, a_4 = 0) and o_5 = {2, 3}, then ξ_5(o_5, h_5) = (2, 1); that is, the agent in position 5 knows he observed two agents and that only one of them chose technology 1.
on the weights. We suppose that weights depend only on relative positions.

DEFINITION 2. STATIONARY SAMPLING. A sampling rule is stationary if the weight of agent τ in agent t's sample depends only on the distance between these agents: w_{t,τ} = w(t − τ) for all t and all τ.

The weights w_{t,τ} depend only on the distance between agents, and so no individual plays a prevalent role in the game. A rich family of such rules can be obtained by assuming that for all t, t′ and (t_1, t_2, ..., t_n), the following condition holds:

    Pr( O_t = {t − t_1, t − t_2, ..., t − t_n} ) = Pr( O_{t′} = {t′ − t_1, t′ − t_2, ..., t′ − t_n} ).    (1)

This condition imposes a stronger restriction than w_{t,τ} = w(t − τ), since it requires that relative positions determine the likelihood of entire samples.8 Some examples of stationary sampling rules include:

1. Agent t observes the set of his M predecessors.
2. Agent t observes a uniformly random agent from the set of his M predecessors.
3. Agent t observes a uniformly random subset of size K of the set of his M predecessors.

Section 4.2 studies an example of a non-stationary sampling rule, geometric sampling, which includes the case in which agents draw uniformly from all those who have already played.

2.3 Additional Information: The Signal S

The previous subsections presented the minimum information requirements of our model. Together with stationary sampling, the assumptions on the signal Z and the sample ξ guarantee social learning, as we show later. The random variables Z and ξ are observable, whereas P, O, H and of course θ are not. Our model, however, allows agents to possess more information about the game they play. We add a second observable signal S, with realizations s ∈ S, to allow for more general information frameworks. Signal S may provide information about the unobserved

8 The following example shows a stationary sampling rule that violates equation (1). Every individual in an even position t observes two agents, those in positions t − 1 and t − 2.
Every individual in an odd position t observes one of two agents, either the agent in position t − 1 or the agent in position t − 2, each with probability 1/2. Weights are w_{t,τ} = 1/2 if τ ∈ {t − 1, t − 2} and zero otherwise, so sampling is stationary.
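The rule in footnote 8 lends itself to a quick numerical check that Definition 2 can hold even when condition (1) fails. The Monte Carlo sketch below is ours (the function names and replication counts are arbitrary); it estimates the weights w_{t,τ} = E[1{τ ∈ O_t}/|O_t|] and confirms both that w(1) = w(2) = 1/2 for even and odd positions alike and that the weights in any agent's sample sum to one:

```python
import random
from collections import Counter

def footnote8_rule(t: int, rng: random.Random) -> set:
    """The rule from footnote 8: an even-position agent observes both t-1
    and t-2; an odd-position agent observes one of the two at random."""
    if t % 2 == 0:
        return {t - 1, t - 2}
    return {rng.choice([t - 1, t - 2])}

def estimate_weights(t: int, reps: int = 200_000) -> Counter:
    """Monte Carlo estimate of w_{t,tau} = E[1{tau in O_t}/|O_t|]."""
    rng = random.Random(0)
    w = Counter()
    for _ in range(reps):
        o_t = footnote8_rule(t, rng)
        for tau in o_t:
            w[tau] += 1 / (len(o_t) * reps)
    return w

# Weights depend only on distance -- w(1) = w(2) = 1/2 whether t is even
# or odd -- so the rule is stationary in the sense of Definition 2, even
# though it violates the stronger sample-level condition (1).
for t in (6, 7):
    w = estimate_weights(t)
    assert abs(w[t - 1] - 0.5) < 0.01 and abs(w[t - 2] - 0.5) < 0.01
    assert abs(sum(w.values()) - 1.0) < 1e-9   # weights in a sample sum to 1
```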
random variables. Our only additional assumption is that, conditional on θ, S_t be independent of Z_τ for all τ. For technical reasons, we also assume that S is finite for any length T of the sequence. To illustrate how S can provide additional information, assume that agents know their position. In such a case, S = {1, ..., T} and S_t = t. This setting corresponds to the model of Smith and Sørensen [2008], where agents know their position and observe an unordered sample of past behavior. Our setup can also accommodate cases in which agents have no information about their position, but have some information about the sample, which can be ordered.9 If agents have imperfect information about their position in the sequence, then S specifies a distribution over positions. Moreover, S may provide information on several unobservable variables at the same time. In Section 4 we show that S plays a critical role when allowing for general position beliefs. To summarize, the signal S represents all information agents have that is not captured by Z or ξ. The results we present in this paper do not depend on the distribution of S. In particular, social learning does not depend on the information agents possess about their position.

2.4 Strategies and Equilibrium

All information available to an agent is summarized by I = {z, ξ, s}, which is an element of I = Z × Ξ × S. Agent i's strategy is a function σ_i : I → [0, 1] that specifies a probability σ_i(I) for choosing technology 1 given the information available. Let σ_{−i} be the strategies of all players other than i. Then the profile of play is given by σ = (σ_i, σ_{−i}). The random variables in this model are Ω = (θ, P, {O_t}, {Z_t}, {S_t}, H_1). These random variables, along with the strategy profile, determine the history of past play. Actions a_t for t ≤ 0 are arbitrarily specified by the distribution f(h_1 | θ) = Pr(H_1 = h_1 | θ), as stated earlier. For all t ≥ 1, each h_t leads to one of two possible values of h_{t+1}, depending on the action a_t.
Let h_t and h_{t+1} list the same actions up to period t − 1. We say that h_{t+1} = (h_t, 1) if a_t = 1 and that h_{t+1} = (h_t, 0) if a_t = 0. We define the distribution f(h_t | θ, p, σ) ≡ Pr(H_t = h_t | θ, p, σ) recursively

9 Take the example presented in the previous subsection, with t = 5, h_5 = (..., a_1 = 1, a_2 = 1, a_3 = 0, a_4 = 0) and o_5 = {2, 3}, which leads to ξ_5(o_5, h_5) = (2, 1). If samples are ordered, then s_5 = (1, 0), denoting that the first agent in the sample chose technology 1 and the second one chose technology 0.
by

    f(h_{t+1} | θ, p, σ) =
        Pr(a_t = 1 | θ, p, σ, h_t) · f(h_t | θ, p, σ)   if h_{t+1} = (h_t, 1)
        Pr(a_t = 0 | θ, p, σ, h_t) · f(h_t | θ, p, σ)   if h_{t+1} = (h_t, 0)
        0                                               otherwise.

With the distribution over histories, we can write the utility of agent i, conditional on a particular order of agents p, as follows:

    u_i(σ, p) = Σ_θ (1/2) Σ_{h_{p(i)} ∈ H_{p(i)}} f(h_{p(i)} | θ, p, σ) Pr(a_{p(i)} = θ | θ, p, σ, h_{p(i)}) (λ(1 − θ) + θ).

We focus on properties of the game before both the order of agents and the state of the world are determined. We define the ex-ante utility as the expected utility of a player.

DEFINITION 3. EX-ANTE UTILITY. The ex-ante utility ū_i is given by

    ū_i(σ) = (1/T!) Σ_{p ∈ P} u_i(σ, p).

Profile σ* = (σ*_i, σ*_{−i}) is a Bayes–Nash equilibrium of the game if ū_i(σ*_i, σ*_{−i}) ≥ ū_i(σ_i, σ*_{−i}) for all σ_i and for all i. Agents are ex-ante homogeneous; the information they obtain depends on their position P(i) but not on their identity i. That is why we focus on strategies where the agent in a given position behaves in the same way regardless of his identity. A profile of play is symmetric if σ_i = σ for all i. A symmetric equilibrium is one in which the profile of play is symmetric. Using a fixed point argument, we show that symmetric equilibria exist.

PROPOSITION 1. EXISTENCE. For each T there exists a symmetric equilibrium σ(T).

See Appendix B.3 for the proof. In the present model, multiple equilibria may arise, and some may be asymmetric. Appendix A.1 presents an example of such equilibria. In the remainder of this section, however, we focus on symmetric equilibria. In Section 4.1 we allow for asymmetric ones and show that the results we present in this section hold for all equilibria. Under symmetric profiles of play, an agent's utility is not affected by the order of other agents.
In other words, if σ is symmetric, the utility of an agent facing a game with positions p is given by u_t(σ) = u_i(σ, p) for any p such that p(i) = t. Only the uncertainty over one's own position matters, and so the ex-ante utility of every player can be expressed as follows:

    ū_i(σ) = (1/T) Σ_{t=1}^{T} u_t(σ).

This simplifies the analysis of the game, and so we drop the index i and focus on the position t in the sequence. For social learning, we require that the expected proportion of agents choosing the right technology approaches 1 as the number of players grows large. We can infer this expected proportion through the average utility of the agents.

DEFINITION 4. AVERAGE UTILITY. The average utility ū is given by

    ū(σ) = (1/T) Σ_{i=1}^{T} ū_i(σ).

The expected proportion of agents choosing the right technology approaches 1 if and only if the average utility reaches its maximum possible value, (λ + 1)/2. In principle, there can be multiple equilibria for each length of the game. We say social learning occurs in a particular sequence of equilibria, {σ(T)}_{T=1}^{∞}, when the average utility approaches its maximum value.

DEFINITION 5. SOCIAL LEARNING. Social learning occurs in a sequence {σ(T)}_{T=1}^{∞} of equilibria if

    lim_{T→∞} ū(σ(T)) = (λ + 1)/2.

Our definition of social learning also applies to the asymmetric cases studied in Section 4.1. In the main part of the paper we focus on the symmetric case (all players are ex-ante identical and play symmetric strategies), and so all agents have identical ex-ante utilities. As a result, the average utility is equal to each agent's ex-ante utility, ū_i(σ) = ū(σ). In this symmetric setting, social learning occurs if and only if every individual's ex-ante utility converges to its maximum value.
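The recursive history distribution and the utility bookkeeping above can be made concrete with a toy Python sketch. The constant choice probability q is a hypothetical simplification of ours: in the model, the probability of choosing technology 1 depends on each agent's signal, sample, and strategy, not on a fixed number.

```python
# Toy sketch of the recursion f(h_{t+1}) = Pr(a_t | h_t) * f(h_t), under the
# (hypothetical) simplification that every agent chooses technology 1 with a
# fixed probability q.
def history_distribution(T: int, q: float) -> dict:
    """Split each history's probability mass between its two continuations."""
    f = {(): 1.0}
    for _ in range(T):
        f = {h + (a,): p * (q if a == 1 else 1 - q)
             for h, p in f.items() for a in (0, 1)}
    return f

def average_utility(T: int, q: float, lam: float, theta: int) -> float:
    """Average payoff per agent: lam for a correct choice in state 0,
    1 for a correct choice in state 1, 0 otherwise (Table 1)."""
    f = history_distribution(T, q)
    correct_payoff = lam if theta == 0 else 1.0
    return sum(p * sum(correct_payoff for a in h if a == theta) / T
               for h, p in f.items())

f3 = history_distribution(3, 0.7)
assert len(f3) == 8                           # 2^3 possible histories
assert abs(sum(f3.values()) - 1.0) < 1e-12    # probability mass sums to one
# Conditional on theta = 1, the average utility is simply q here; averaging
# over both states with q -> 1 yields the maximum value (lam + 1)/2.
assert abs(average_utility(4, 0.9, 2.0, theta=1) - 0.9) < 1e-9
```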
2.5 Average Utility of Those Observed

We present an ex-ante improvement principle in Section 3.1: as long as those observed choose the inferior technology with positive probability, an agent's utility can always be strictly higher than the average utility of those observed. To accomplish this, we need to define the average utility ũ_i of those that agent i observes. Let ξ̃_t denote the action of a randomly chosen agent from those sampled by agent i when in position t. Then, we define ũ_i as follows.

DEFINITION 6. AVERAGE UTILITY OF THOSE OBSERVED. The average utility of those observed, denoted ũ_i(σ_{−i}), is given by

    ũ_i(σ_{−i}) = Σ_θ (1/2) (1/T) Σ_{t=1}^{T} Pr( ξ̃_t = θ | θ, σ_{−i} ) (λ(1 − θ) + θ).

The average utility ũ_i of those that agent i observes is the expected utility of a randomly chosen agent from agent i's sample. When i is in position t, the weight w_{t,τ} represents the probability that agent τ is selected at random from agent i's sample. Thus, ũ_i can be re-expressed as follows.

PROPOSITION 2. The average utility of those observed can be rewritten as10

    ũ_i(σ_{−i}) = (1/T) Σ_{t=1}^{T} Σ_{τ:τ<t} w_{t,τ} u_τ(σ_{−i}).

See Appendix B.4 for the proof. From now on, we utilize the expression for ũ_i given by Proposition 2. Finally, we also use ũ(σ) to denote the average of ũ_i(σ_{−i}) taken across individuals. Notice that since players are ex-ante symmetric, when the strategy profile σ is symmetric, ũ(σ) = ũ_i(σ_{−i}).

2.6 Consequences of Stationary Sampling

Stationary sampling has the following useful implication: the weights that one agent places on previous individuals can be translated into weights that subsequent individuals place on him. The graph on the left side of Figure 1 represents the weights w_{t,τ} that agents t place on agents τ. For example, w_{4,2} represents the weight agent 4 places on agent 2 and w_{2,0} represents the weight agent 2 places on agent 0. These weights are highlighted in the left-hand graph of Figure 1. Since the

10 We allow agents to observe individuals from the initial history, so τ can be less than 1.
distance between agents 2 and 4 is equal to the distance between agents 2 and 0, these weights must be equal: w_{4,2} = w_{2,0} = w(2). The graph on the right side of Figure 1 shows the sampling weights only as a function of the distance between agents.

[Figure 1: Stationary Sampling. Left panel: the weights w_{t,τ} that each agent t places on each earlier agent τ. Right panel: the same weights written as w(t−τ), a function only of the distance between agents.]

The horizontal ellipse in the right-hand side of Figure 1 includes the weights subsequent agents place on agent 2. The vertical dashed ellipse shows the weights agent 2 places on previous agents. Since weights are determined only by the distance between agents, the sum of the horizontal weights must equal the sum of the vertical weights.^11 By definition,^12 the vertical sum of weights is given by Σ_{τ:τ<t} w_{t,τ} = 1. Thus, for any agent far enough from the end of the sequence, the sum of horizontal weights those subsequent agents place on him is arbitrarily close to 1. Intuitively, this means that all individuals in the sequence are equally important in the samples. Since the weights subsequent agents place on any one agent add up to 1, the average observed utility ultimately weights all agents equally, and the proportion of the total weight placed on any fixed group of agents vanishes. Then, as we show next, the homogeneous role of agents

^11 Algebraically, for any J > 0, Σ_{τ=t−J}^{t−1} w_{t,τ} = Σ_{τ=t−J}^{t−1} w(t−τ) = Σ_{j=1}^{J} w(j) = Σ_{τ=t+1}^{t+J} w_{τ,t}.
^12 Σ_{τ:τ<t} w_{t,τ} = Σ_{τ:τ<t} E[1{τ ∈ O_t}/|O_t|] = E[Σ_{τ:τ<t} 1{τ ∈ O_t}/|O_t|] = E[|O_t|/|O_t|] = 1.
under stationary sampling imposes an accounting identity: average utilities and average observed utilities get closer as the number of agents grows large. This is not an equilibrium result but a direct consequence of stationary sampling, as the following proposition shows.

PROPOSITION 3. Let sampling be stationary and, for each T, let {y_t}_{t=1}^{T} be an arbitrary sequence with 0 ≤ y_t ≤ (λ+1)/2. Let ȳ(T) = (1/T) Σ_{t=1}^{T} y_t and ỹ(T) = (1/T) Σ_{t=1}^{T} Σ_{τ:τ<t} w_{t,τ} y_τ. Then,

lim_{T→∞} sup_{{y_t}_{t=1}^{T}} |ȳ(T) − ỹ(T)| = 0.

See Appendix B.5 for the proof. We use arbitrarily specified sequences to highlight that Proposition 3 holds for all possible payoff sequences. Turning to the case of interest, let {σ(T)}_{T=1}^{∞} be a sequence of symmetric strategy profiles that induces average utility ū(T) = ū(σ(T)) and average utility of those observed ũ(T) = ũ(σ(T)). Define the ex-ante improvement v(T) ≡ ū(T) − ũ(T) to be the difference between the average utility and the average utility of those observed. Proposition 3 has the following immediate corollary.

COROLLARY 1. If sampling is stationary and the strategy profile is symmetric, then lim_{T→∞} v(T) = 0.

The ex-ante improvement v(T) represents how much an agent in an unknown position expects to improve upon an agent he selects at random from his sample. Corollary 1 shows that under stationary sampling rules, the ex-ante improvement v(T) vanishes as the number of agents in the sequence grows large.

3. Social Learning

In this section, we present an improvement principle for settings in which agents do not know their position in the sequence. The improvement principle states that agents can do better, on average, than those they observe. Moreover, if the average utility of those observed is bounded away from the maximum possible utility, then the improvement upon their utility must be bounded away from zero. Banerjee and Fudenberg [2004] are the first to use an improvement principle to study social learning. Smith and Sørensen [2008] and Acemoglu et al. [2008] adopt a similar approach.
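The accounting identity of Proposition 3 and Corollary 1 is easy to check numerically. The sketch below uses a hypothetical stationary rule that samples one of the last three predecessors with fixed weights w(1), w(2), w(3), draws an arbitrary payoff sequence in [0, (λ+1)/2], and reports |ȳ(T) − ỹ(T)|; payoffs of the initial history are set to zero, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
w = [0.5, 0.3, 0.2]  # stationary weights w(1), w(2), w(3) on predecessors

def gap(T):
    """|ybar(T) - ytilde(T)| for a random payoff sequence in [0, (lam+1)/2]."""
    y = rng.uniform(0.0, (lam + 1) / 2, size=T)
    ybar = y.mean()
    ytilde = 0.0
    for t in range(T):
        for j, wj in enumerate(w, start=1):
            if t - j >= 0:  # agents in the initial history contribute 0 here
                ytilde += wj * y[t - j]
    return abs(ybar - ytilde / T)

for T in (10, 100, 1000, 10000):
    print(T, gap(T))  # the gap vanishes as T grows
```

Only boundary terms (the first and last few positions) separate ȳ(T) from ỹ(T); their share of the average vanishes at rate 1/T, which is the content of Corollary 1.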
In those papers, agents do better than those they observe, and so, over time, agents' utilities increase. Our argument is similar in the sense that agents do better than those they observe. It is different,
though, in that our improvement principle is stated ex ante: agents do better than those they observe, unconditional on their position. In what follows, we first explain why an ex-ante improvement principle holds in the present setting. Then, we show how this ex-ante improvement principle can be used to study social learning in this setting.

3.1 The Ex-ante Improvement Principle

We restrict agents' information in a particular way and show that agents can improve upon those they observe. Consequently, with less restricted information they can still improve upon those observed. To see this, pick an agent at random from those agent i observes and disregard the rest of the information from his sample. Let ξ denote the action of the selected agent. Also, disregard all information contained in S. After restricting the information in this way, what remains is Ĩ = {z, ξ}, an element of Z × {0, 1}.

The average utility ũ_i of those observed by agent i depends on how likely they are to choose the superior technology in each state of the world. Define π_0 and π_1 to be the likelihoods that a randomly picked agent from the sample chooses the superior technology in states θ = 0 and θ = 1, respectively. That is, define π_0 ≡ Pr(ξ = 0 | θ = 0) and π_1 ≡ Pr(ξ = 1 | θ = 1). Then, the average utility of those observed by agent i can be expressed as ũ_i = (1/2)(π_0 λ + π_1). A simple strategy, copy a random agent from the sample, guarantees the average utility ũ_i of those observed. To show that agents do strictly better than those they observe, we show that there is always an alternative that does better than this simple strategy. The likelihood ratio L : {0, 1} → [0, ∞) implied by the information from the randomly chosen agent is given by

L(ξ) ≡ Pr(θ = 1 | ξ, σ_{−i}) / Pr(θ = 0 | ξ, σ_{−i}).

Then, the information provided by ξ is captured by L(1) = π_1/(1 − π_0) and L(0) = (1 − π_1)/π_0.

Now, if the information provided by the signal is more powerful than the information reflected in these likelihood ratios, agents are better off following the signal than following the sample. We use this to define a smarter strategy σ̃_i that utilizes not only information from the sample, but also information from the signal. If signal strength is unbounded and observed agents do not always choose the superior technology, the smarter strategy does strictly better than simply copying a
random agent from the sample. In fact, fix a level U for the utility of observed agents. The thick line in Figure 2 corresponds to combinations of π_0 and π_1 such that ũ_i(σ_{−i}) = U, and the shaded area represents combinations such that ũ_i(σ_{−i}) ≤ U. In the shaded area, the improvement upon those observed is bounded below by a positive-valued function C(U).

[Figure 2: Ex-ante Improvement Principle with Unbounded Signals. Axes: π_0 (horizontal) and π_1 (vertical); the thick curve ũ_i(σ_{−i}) = U bounds the shaded region where ũ_i(σ_{−i}) ≤ U.]

PROPOSITION 4. EX-ANTE IMPROVEMENT PRINCIPLE. If signal strength is unbounded, then there exists a strategy σ̃_i and a positive-valued function C such that for all σ_{−i},

ū_i(σ̃_i, σ_{−i}) − ũ_i(σ_{−i}) ≥ C(U) > 0 if ũ_i(σ_{−i}) ≤ U < (λ+1)/2.

See Appendix B.6 for the proof.

Proposition 4 presents a strategy that allows the agent to improve over those observed. Then, in any equilibrium σ(T), he must improve at least that much. In addition, since we focus on symmetric equilibria, ū_i(σ(T)) = ū(σ(T)) and ũ_i(σ_{−i}(T)) = ũ(σ(T)). With these two facts, we present the following corollary.

COROLLARY 2. If signal strength is unbounded, then in any symmetric equilibrium σ(T),

ū(σ(T)) − ũ(σ(T)) ≥ C(U) > 0 if ũ(σ(T)) ≤ U < (λ+1)/2.

To study social learning, we focus on the behavior of the average utility ū(σ(T)) as the number of players approaches infinity. As Corollary 1 shows, stationary sampling implies that v(T) = ū(σ(T)) − ũ(σ(T)) must vanish as the number of players grows large. If v(T) vanishes, Corollary 2 implies that ū(σ(T)) approaches the maximum possible utility; that is, social learning occurs.

PROPOSITION 5. SOCIAL LEARNING WITH UNBOUNDED SIGNALS. If signal strength is unbounded and sampling is stationary, then social learning occurs in any sequence of symmetric equilibria.

See Appendix B.7 for the proof.

3.2 Bounded Signals: Constrained Efficient Learning

So far, we have studied social learning when signal strength is unbounded. However, one can imagine settings in which this assumption may be too strong. If signal strength is bounded, social learning cannot be guaranteed. We can define, however, a constrained efficient learning concept that is satisfied in this setting. To study constrained efficient learning, imagine there is an agent in isolation who only receives the strongest signals: those yielding the lowest and highest likelihood ratios, l and l̄. Under such a signal structure, the likelihood ratios determine how often each signal occurs in each state. Let u_cl denote the maximum utility in isolation, that is, the expected utility this agent would obtain by following his signal. We show that in the limit, agents' utility cannot be lower than u_cl.^13

In order to show that constrained efficient learning occurs, we present an ex-ante improvement principle when signal strength is bounded. In this setting, as long as the information from private signals is sometimes more precise than that obtained from the sample, agents can still improve upon those observed. The signal can always be used if either of the following inequalities is satisfied: L(1) l < λ or L(0) l̄ > λ. The dotted lines in Figure 3 correspond to combinations of π_0 and π_1 such that the previous expressions hold with equality. Consequently, outside of the shaded area in Figure 3, signals can be used to improve upon those observed, as Proposition 6 shows.

PROPOSITION 6. EX-ANTE IMPROVEMENT PRINCIPLE UNDER BOUNDED SIGNALS.
If signal strength is bounded, then there exists a strategy σ̃_i and a positive-valued function C such that for all σ_{−i},

ū_i(σ̃_i, σ_{−i}) − ũ_i(σ_{−i}) ≥ C(U) > 0 if ũ_i(σ_{−i}) ≤ U < u_cl.

See Appendix B.8 for the proof.

We say that a model of observational learning satisfies constrained efficient learning if agents

^13 A policy implication may be that if a regulator is no more informed than the most informed agents in the sequence, she would not be able to outperform the result of a social learning algorithm.
receive at least u_cl in the limit.

[Figure 3: Ex-ante Improvement Principle with Bounded Signals. Axes: π_0 (horizontal) and π_1 (vertical); the dotted lines L(0) l̄ = λ and L(1) l = λ, together with the curve ũ = u_cl, bound the shaded area.]

PROPOSITION 7. CONSTRAINED EFFICIENT LEARNING. If signal strength is bounded and sampling is stationary, then constrained efficient learning occurs in any sequence of symmetric equilibria.

See Appendix B.9 for the proof.

4. General Setup and Extensions

This section is still preliminary.

4.1 General Position Beliefs and Asymmetric Equilibria

In this section we show that our results (social learning with signals of unbounded strength, and constrained efficient learning with signals of bounded strength) hold for arbitrary distributions over permutations and for all equilibria, including asymmetric ones. When agents are not placed in the sequence according to a uniform distribution, or when the equilibrium played is not symmetric, agents are not ex-ante identical. As a result, the ex-ante utility of agents depends on their identity: ū_i need not be constant across agents anymore. To show our results for games with agents that are not ex-ante identical, we construct an analogous game in which agents exhibit the same behavior but are ex-ante identical. Since we have established the result for the symmetric case, the intuition behind the general model is simple. For any game with an arbitrary distribution over permutations we can construct an analogous one in
which players are ex-ante symmetric but have interim beliefs corresponding to the arbitrary distribution over permutations. This is done by adding information through the additional signal. Since we have established that the additional signal does not prevent learning, we are done.

Likewise, when agents are allowed to have extra information, the assumption of a symmetric equilibrium is without loss of generality. To see this, suppose that some players do not react in the same way to the same information. Then, we can simply say that they are different types. Since their type is part of the information they receive, they now do react the same way to the same information. That is, before agents are assigned their type, they act symmetrically.

The game played under arbitrary beliefs, which we call Γ_1, only requires an alternate specification of the random permutation and the additional signal. Everything else remains as stated in the main model. If beliefs satisfy the common prior assumption, there is a commonly understood prior and signaling structure. This is modeled by allowing Q to be a random permutation with an arbitrary distribution. Also, let S_i be an additional signal that agent i receives from a finite set, and let S denote the signal profile, which has an arbitrary distribution. We assume that, conditional on θ, S is independent of Z. We also assume that S_{Q⁻¹(τ)} is independent of O_t for all τ < t. The pair (Q, S) can induce any beliefs over positions which satisfy the common prior assumption. Finally, fix a (possibly asymmetric) Nash equilibrium, and let σ_i denote the strategy for player i.

PROPOSITION 8. If signal strength is unbounded, then social learning occurs in any sequence of equilibria of Γ_1.

Next, we present the equivalent corollary to Proposition 7 for bounded signals.

PROPOSITION 9. If signal strength is bounded, then constrained efficient learning occurs in any sequence of equilibria of Γ_1.

We proceed by defining an auxiliary game, Γ_2.
On the one hand, players in Γ_1 have arbitrary beliefs about positions. On the other hand, the game Γ_2 satisfies all the assumptions of the model in the main section. We then show that for each equilibrium of Γ_1 there is a symmetric equilibrium of Γ_2 in which the distribution over histories is the same. Our results do not immediately apply to arbitrary permutations because agents in Γ_1 are allowed to have non-identical ex-ante beliefs about positions. To remedy this, we create the game Γ_2, where agents are first uniformly shuffled to become the players in Γ_1. As a result, the ex-ante belief of each agent is that he has an equal chance of playing in any time period. After being shuffled, players in Γ_2 are informed about which player they are in Γ_1 through the extra signal S.
Thus, the game Γ_1 is informationally equivalent to the auxiliary game Γ_2 after agents know which player in Γ_1 they are. Moreover, Γ_2 satisfies all the requirements of our model.

The auxiliary game Γ_2 is constructed as follows. Let P be a uniformly distributed random permutation. Agents are assigned to time periods by the composed permutation P ∘ Q, which is itself a uniformly distributed random permutation. The additional signal the agents receive is defined by S̃_t ≡ (Q⁻¹(t), S_{Q⁻¹(t)}).

One can think of this game as follows. The permutation P is a mapping from agents playing in Γ_2 to positions in Γ_1. Next, we tell each agent which position of Γ_1 he occupies, and pass along the information that the agent in Γ_1 receives in that position. Since Q is an injective function, the inverse Q⁻¹(t) indicates which player of Γ_1 is asked to play in period t. And S_{Q⁻¹(t)} is the extra signal received by player Q⁻¹(t). So the agent of Γ_2 that plays in period t knows which player in Γ_1 plays in period t, as well as all the extra information of that player. So the Γ_2 player has all of the information of the Γ_1 player. Additionally, since P is a uniformly random permutation, the fact that the player in Γ_2 knows his identity provides him no useful information. In both games, the players playing in period t have the same information.

The game Γ_2 satisfies the assumptions given in Section 2. As we have already mentioned, the mapping from agents to time periods, P ∘ Q, is uniformly random. Given our assumptions on S, the signal S̃_τ is independent of O_t for all τ < t. Last, we can restrict attention to symmetric equilibria of Γ_2, since the strategy profile with σ̃_i(s̃_t, z_t, ξ_t) = σ_{q⁻¹(t)}(s_{q⁻¹(t)}, z_t, ξ_t) is a symmetric strategy profile and, as we see next, an equilibrium of Γ_2.

In order to show that these two games induce the same distribution over histories, it is useful to realize the two games on the same probability space.
To do so, we need players in each position to use the same random variable to employ a mixed strategy. An agent playing in period t uses B_t, a random variable distributed uniformly on [0, 1] and independent of all other random variables. In this case, the histories of Γ_1, written H_t^1(σ), and the histories of Γ_2, written H_t^2(σ̃), are the same in every period. We can show this by induction on t. By definition, H_1^1 = H_1^2. If the histories are equivalent up to period t, H_t^1(σ) = H_t^2(σ̃) = h_t, then the agents in both games playing in period t receive the same sample, ξ(O_t, h_t). In Γ_1, the agent chooses technology one when B_t ≤ σ_{Q⁻¹(t)}(S_{Q⁻¹(t)}, Z_t, ξ(O_t, h_t)). Similarly, in Γ_2, the agent chooses technology one when B_t ≤ σ̃_{P⁻¹(Q⁻¹(t))}(S̃_t, Z_t, ξ(O_t, h_t)). These are the same, by the definition of σ̃_i(s̃_t, z_t, ξ_t). Since the agents have the same information in each game, σ̃ is a symmetric equilibrium of Γ_2. Proposition 8 is now a straightforward corollary of Proposition 5.
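The construction above rests on the fact that composing a uniformly distributed random permutation P with any fixed permutation Q yields a permutation P ∘ Q that is again uniformly distributed. A quick empirical check of this fact (the permutation q, the number of agents, and the draw count are arbitrary illustrative choices):

```python
import itertools
import random
from collections import Counter

random.seed(1)
n = 4
q = [2, 0, 3, 1]  # an arbitrary fixed permutation Q (agent -> position)
all_perms = list(itertools.permutations(range(n)))

counts = Counter()
draws = 120_000
for _ in range(draws):
    p = list(range(n))
    random.shuffle(p)  # P is drawn uniformly over permutations
    counts[tuple(p[q[i]] for i in range(n))] += 1  # realization of P o Q

expected = draws / len(all_perms)  # equal frequency per permutation if uniform
deviation = max(abs(c - expected) for c in counts.values()) / expected
print(len(counts), round(deviation, 3))  # all n! permutations occur
```

This is why the ex-ante belief of every player in Γ_2 is uniform over positions, whatever the distribution of Q: the uniform shuffle P washes out any asymmetry.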
More informationINFORMATIONAL ROBUSTNESS AND SOLUTION CONCEPTS. Dirk Bergemann and Stephen Morris. December 2014 COWLES FOUNDATION DISCUSSION PAPER NO.
INFORMATIONAL ROBUSTNESS AND SOLUTION CONCEPTS By Dirk Bergemann and Stephen Morris December 2014 COWLES FOUNDATION DISCUSSION PAPER NO. 1973 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS YALE UNIVERSITY
More informationLEM. New Results on Betting Strategies, Market Selection, and the Role of Luck. Giulio Bottazzi Daniele Giachini
LEM WORKING PAPER SERIES New Results on Betting Strategies, Market Selection, and the Role of Luck Giulio Bottazzi Daniele Giachini Institute of Economics, Scuola Sant'Anna, Pisa, Italy 2018/08 February
More informationFast and Slow Learning From Reviews
Fast and Slow Learning From Reviews Daron Acemoglu Ali Makhdoumi Azarakhsh Malekian Asu Ozdaglar Abstract This paper develops a model of Bayesian learning from online reviews, and investigates the conditions
More informationNASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA
NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA MARIA GOLTSMAN Abstract. This note shows that, in separable environments, any monotonic social choice function
More information6.207/14.15: Networks Lecture 16: Cooperation and Trust in Networks
6.207/14.15: Networks Lecture 16: Cooperation and Trust in Networks Daron Acemoglu and Asu Ozdaglar MIT November 4, 2009 1 Introduction Outline The role of networks in cooperation A model of social norms
More informationEconomics 201B Economic Theory (Spring 2017) Bargaining. Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7).
Economics 201B Economic Theory (Spring 2017) Bargaining Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7). The axiomatic approach (OR 15) Nash s (1950) work is the starting point
More informationEntry under an Information-Gathering Monopoly Alex Barrachina* June Abstract
Entry under an Information-Gathering onopoly Alex Barrachina* June 2016 Abstract The effects of information-gathering activities on a basic entry model with asymmetric information are analyzed. In the
More information6.207/14.15: Networks Lectures 22-23: Social Learning in Networks
6.207/14.15: Networks Lectures 22-23: Social Learning in Networks Daron Acemoglu and Asu Ozdaglar MIT December 2 end 7, 2009 1 Introduction Outline Recap on Bayesian social learning Non-Bayesian (myopic)
More informationThe Impact of Organizer Market Structure on Participant Entry Behavior in a Multi-Tournament Environment
The Impact of Organizer Market Structure on Participant Entry Behavior in a Multi-Tournament Environment Timothy Mathews and Soiliou Daw Namoro Abstract. A model of two tournaments, each with a field of
More informationBELIEFS & EVOLUTIONARY GAME THEORY
1 / 32 BELIEFS & EVOLUTIONARY GAME THEORY Heinrich H. Nax hnax@ethz.ch & Bary S. R. Pradelski bpradelski@ethz.ch May 15, 217: Lecture 1 2 / 32 Plan Normal form games Equilibrium invariance Equilibrium
More information1. Introduction. 2. A Simple Model
. Introduction In the last years, evolutionary-game theory has provided robust solutions to the problem of selection among multiple Nash equilibria in symmetric coordination games (Samuelson, 997). More
More informationCorrelated Equilibria of Classical Strategic Games with Quantum Signals
Correlated Equilibria of Classical Strategic Games with Quantum Signals Pierfrancesco La Mura Leipzig Graduate School of Management plamura@hhl.de comments welcome September 4, 2003 Abstract Correlated
More informationObservational Learning Under Imperfect Information
Observational Learning Under Imperfect Information Boğaçhan Çelen New York University Shachar Kariv New York University January 23, 2003 Abstract The analysis explores Bayes-rational sequential decision
More informationSocial Choice Theory. Felix Munoz-Garcia School of Economic Sciences Washington State University. EconS Advanced Microeconomics II
Social Choice Theory Felix Munoz-Garcia School of Economic Sciences Washington State University EconS 503 - Advanced Microeconomics II Social choice theory MWG, Chapter 21. JR, Chapter 6.2-6.5. Additional
More informationRepeated Downsian Electoral Competition
Repeated Downsian Electoral Competition John Duggan Department of Political Science and Department of Economics University of Rochester Mark Fey Department of Political Science University of Rochester
More informationSF2972 Game Theory Exam with Solutions March 15, 2013
SF2972 Game Theory Exam with s March 5, 203 Part A Classical Game Theory Jörgen Weibull and Mark Voorneveld. (a) What are N, S and u in the definition of a finite normal-form (or, equivalently, strategic-form)
More informationStrategic Abuse and Accuser Credibility
1 / 33 Strategic Abuse and Accuser Credibility HARRY PEI and BRUNO STRULOVICI Department of Economics, Northwestern University November 16th, 2018 Montréal, Canada 2 / 33 Motivation Many crimes/abuses
More informationBAYES CORRELATED EQUILIBRIUM AND THE COMPARISON OF INFORMATION STRUCTURES IN GAMES. Dirk Bergemann and Stephen Morris
BAYES CORRELATED EQUILIBRIUM AND THE COMPARISON OF INFORMATION STRUCTURES IN GAMES By Dirk Bergemann and Stephen Morris September 203 Revised October 204 COWLES FOUNDATION DISCUSSION PAPER NO. 909RR COWLES
More informationPathological Outcomes of Observational Learning
Pathological Outcomes of Observational Learning by Lones Smith and Peter Sorensen Econometrica, March 2000 In the classic herding models of Banerjee (1992) and Bikhchandani, Hirshleifer, and Welch (1992),
More informationLecture Slides - Part 1
Lecture Slides - Part 1 Bengt Holmstrom MIT February 2, 2016. Bengt Holmstrom (MIT) Lecture Slides - Part 1 February 2, 2016. 1 / 36 Going to raise the level a little because 14.281 is now taught by Juuso
More informationA Game-Theoretic Analysis of Games with a Purpose
A Game-Theoretic Analysis of Games with a Purpose The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Published Version
More informationEcon 618: Correlated Equilibrium
Econ 618: Correlated Equilibrium Sunanda Roy 1 Basic Concept of a Correlated Equilibrium MSNE assumes players use a random device privately and independently, that tells them which strategy to choose for
More informationOther Equilibrium Notions
Other Equilibrium Notions Ichiro Obara UCLA January 21, 2012 Obara (UCLA) Other Equilibrium Notions January 21, 2012 1 / 28 Trembling Hand Perfect Equilibrium Trembling Hand Perfect Equilibrium We may
More informationThe Value of Congestion
Laurens G. Debo Tepper School of Business Carnegie Mellon University Pittsburgh, PA 15213 The Value of Congestion Uday Rajan Ross School of Business University of Michigan Ann Arbor, MI 48109. February
More informationSocial Learning Equilibria
Social Learning Equilibria Elchanan Mossel, Manuel Mueller-Frank, Allan Sly and Omer Tamuz March 21, 2018 Abstract We consider social learning settings in which a group of agents face uncertainty regarding
More informationEconomics 209A Theory and Application of Non-Cooperative Games (Fall 2013) Extensive games with perfect information OR6and7,FT3,4and11
Economics 209A Theory and Application of Non-Cooperative Games (Fall 2013) Extensive games with perfect information OR6and7,FT3,4and11 Perfect information A finite extensive game with perfect information
More informationComplexity and Mixed Strategy Equilibria
Complexity and Mixed Strategy Equilibria ai-wei Hu Northwestern University First Version: Nov 2008 his version: March 2010 Abstract Unpredictable behavior is central for optimal play in many strategic
More informationOn the errors introduced by the naive Bayes independence assumption
On the errors introduced by the naive Bayes independence assumption Author Matthijs de Wachter 3671100 Utrecht University Master Thesis Artificial Intelligence Supervisor Dr. Silja Renooij Department of
More informationMechanism Design: Implementation. Game Theory Course: Jackson, Leyton-Brown & Shoham
Game Theory Course: Jackson, Leyton-Brown & Shoham Bayesian Game Setting Extend the social choice setting to a new setting where agents can t be relied upon to disclose their preferences honestly Start
More informationStatic Information Design
Static Information Design Dirk Bergemann and Stephen Morris Frontiers of Economic Theory & Computer Science, Becker-Friedman Institute, August 2016 Mechanism Design and Information Design Basic Mechanism
More information6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2
6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2 Daron Acemoglu and Asu Ozdaglar MIT October 14, 2009 1 Introduction Outline Mixed Strategies Existence of Mixed Strategy Nash Equilibrium
More informationThe Value of Information Under Unawareness
The Under Unawareness Spyros Galanis University of Southampton January 30, 2014 The Under Unawareness Environment Agents invest in the stock market Their payoff is determined by their investments and the
More informationA Rothschild-Stiglitz approach to Bayesian persuasion
A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago September 2015 Abstract Rothschild and Stiglitz (1970) introduce a
More informationA remark on discontinuous games with asymmetric information and ambiguity
Econ Theory Bull DOI 10.1007/s40505-016-0100-5 RESEARCH ARTICLE A remark on discontinuous games with asymmetric information and ambiguity Wei He 1 Nicholas C. Yannelis 1 Received: 7 February 2016 / Accepted:
More informationRobust Predictions in Games with Incomplete Information
Robust Predictions in Games with Incomplete Information joint with Stephen Morris (Princeton University) November 2010 Payoff Environment in games with incomplete information, the agents are uncertain
More informationEquilibrium Refinements
Equilibrium Refinements Mihai Manea MIT Sequential Equilibrium In many games information is imperfect and the only subgame is the original game... subgame perfect equilibrium = Nash equilibrium Play starting
More informationExistence of Nash Networks in One-Way Flow Models
Existence of Nash Networks in One-Way Flow Models pascal billand a, christophe bravard a, sudipta sarangi b a CREUSET, Jean Monnet University, Saint-Etienne, France. email: pascal.billand@univ-st-etienne.fr
More information6 Evolution of Networks
last revised: March 2008 WARNING for Soc 376 students: This draft adopts the demography convention for transition matrices (i.e., transitions from column to row). 6 Evolution of Networks 6. Strategic network
More informationGrowing competition in electricity industry and the power source structure
Growing competition in electricity industry and the power source structure Hiroaki Ino Institute of Intellectual Property and Toshihiro Matsumura Institute of Social Science, University of Tokyo [Preliminary
More information6.207/14.15: Networks Lecture 11: Introduction to Game Theory 3
6.207/14.15: Networks Lecture 11: Introduction to Game Theory 3 Daron Acemoglu and Asu Ozdaglar MIT October 19, 2009 1 Introduction Outline Existence of Nash Equilibrium in Infinite Games Extensive Form
More informationColumbia University. Department of Economics Discussion Paper Series
Columbia University Department of Economics Discussion Paper Series Equivalence of Public Mixed-Strategies and Private Behavior Strategies in Games with Public Monitoring Massimiliano Amarante Discussion
More information