Costly word-of-mouth learning in networks


Costly word-of-mouth learning in networks

By Daniel C. Opolot and Théophile T. Azomahou

This paper develops a framework for an information externality game in which individuals interacting through a network choose an optimal time to act. Individuals exchange information through word-of-mouth communication, and waiting to become more informed is beneficial but costly. We characterize equilibrium exit times and asymptotic learning. The main result is that asymptotic learning with rational agents obtains in networks that exhibit exponentially growing neighbourhood sizes and/or bounded diameter. Asymptotic learning by naïve agents, on the other hand, obtains in networks that are asymptotically balanced with the second largest eigenvalue bounded away from one.

JEL: C72, D82, D83, D85
Keywords: Information externalities, word-of-mouth communication, social learning, networks, exit times.

Opolot: Maastricht University School of Business and Economics, Tongersestraat 53, 6211 LM Maastricht, the Netherlands, d.opolot@maastrichtuniversity.nl. Azomahou: Maastricht University UNU-MERIT, Maastricht, the Netherlands, azomahou@merit.unu.edu. We thank Dimitri Vayanos for constructive comments on an earlier version of this paper that circulated under the title "Belief dynamics in communication networks". This research was supported by UNU-MERIT. The usual disclaimer applies.

I. Introduction

This paper studies the process of social learning in networks in which: (a) the process of information exchange involves word-of-mouth communication, whereby individuals communicate the summary statistic (posterior belief) of their private information; and (b) gathering information is costly, and individuals face a choice between waiting to become more informed, which increases confidence in their opinion, and taking an action early enough to avoid the costs associated with waiting.

Such costs can result either from impatience by agents due to the urgency of the decision at hand, or from material costs associated with the loss of potential gains. Many social decisions, ranging from product and investment choices to political and occupational decisions, fit into this framework, as demonstrated by the following motivational examples.

The first example concerns public programs such as public health. Consider a society of individuals to whom a new vaccination program has been introduced. The willingness to vaccinate depends on individual perceptions or beliefs about vaccine safety and its consequences for disease control. Each individual starts with a prior belief about the benefits of the vaccination program. If the level of confidence in their belief is low, they will wait to receive more information from those in their social network, which includes media sources. Once their confidence reaches a sufficient level, they then decide on whether to participate in the vaccination program. There exists empirical evidence in support of social learning in such settings. Henrich and Holmes (2011) conduct an empirical study of online discussions regarding the vaccination program for the 2009 H1N1 pandemic in Canada and find that they reflected actual public opinion and hence the decision-making process about vaccination. 1

A second example of social decisions that fit into our framework is the choice of products, technologies and services by consumers. Consider a group of individuals contemplating the adoption of one among several new insurance services. Each individual will possess a prior belief about the benefits of the available options.

1 Miguel et al. (2003) examine social learning using data from a program that promoted use of deworming medicine in Kenyan schools. They find positive effects of information exchange through social networks, which in turn influenced individual beliefs and decisions.

Only those with an infinite level of confidence in their belief will make a choice initially; otherwise, each individual would gather information regarding the available options until their confidence is sufficiently high. Cai et al. (2015) study the influence of social networks on weather insurance adoption and the mechanisms through which social networks operate in rural China. They find positive evidence of a social network effect, which is not driven by the diffusion of information on purchase decisions, but instead by the diffusion of knowledge about insurance. Their result thus also highlights evidence of word-of-mouth communication rather than observation of others' actions. 2

The third example is political behaviour in elections. Consider an electoral process with two or more candidates. Each voter initially possesses prior beliefs regarding each candidate. As the campaign period progresses, voters gather/receive more information via both traditional and internet media sources as well as direct word-of-mouth communication with friends and colleagues. Some voters who start with high levels of confidence in their opinion about particular candidates make a final choice even before the end of the campaign period; these would typically be voters who pledge allegiance to a particular political party or ideology. The rest of the voters would continue gathering information until their confidence is sufficiently high. In both cases, however, voters' final choices depend on their beliefs resulting from the process of information exchange. 3

The framework we propose to study the above-mentioned social decisions can be summarized as follows.

2 Duflo and Saez (2002, 2003) provide evidence of social network (that is, information received from social connections) effects in retirement decisions by employees in a university. A similar study and result is established by Sorensen (2006). There is also strong evidence of word-of-mouth communication and social network effects in the adoption of new technologies (Bandiera and Rasul, 2006; Conley and Udry, 2010) and investment decisions (Hong et al., 2004).

3 Tumasjan et al. (2010) investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment in the case of the German federal election. They find that the mere number of messages mentioning a party reflects the election result and that the content of Twitter messages plausibly reflects the offline political landscape.

Agents' interactions are governed by a network whose edges depict the directions of information exchange. Here, the network nodes do not have to be homogeneous; they can be a mix of individuals making decisions and news channels that are additional sources of information. Each agent is endowed with prior beliefs about the subject at hand. We distinguish between the case in which prior beliefs of neighbours are observable and that in which they are not. In addition to possessing a prior belief, each agent also observes a private signal that is informative about the state of nature. Prior beliefs together with private signals form agents' private beliefs. Agents then share information through word-of-mouth communication by sharing their posterior beliefs. When incorporating new information into their beliefs, agents follow a Bayesian updating rule. We distinguish Bayesian learning by rational agents from learning by naïve agents. Rational agents are capable of distinguishing between new and old information in their neighbours' messages, while naïve agents are incapable of such a complex process and hence simply take a weighted average of their neighbours' messages. We make this distinction since our focus is not on which of the two learning mechanisms best describes reality. Moreover, the empirical evidence in this regard is not conclusive. A field experiment by Möbius et al. (2010) points to evidence of Bayesian rational learning in social networks, while a lab experiment by Chandrasekhar et al. (2012) indicates that individuals tend to be naïve in their belief updating.

Waiting to collect information is costly but necessary to increase one's confidence in one's belief. Each agent is then faced with two choices at each period: wait and gather more information, or stop gathering information and take an irreversible action.

The above framework can be viewed as an information externality game whose equilibrium behaviour consists of agents choosing an optimal exit time, that is, the time to stop gathering information and take an irreversible action.

We first characterize the nature of exit times given the network topology and model parameters: prior belief and signal precisions, and the discount rate. We then focus on asymptotic learning, that is, whether a large population of agents makes correct choices at the end of the learning process. Asymptotic learning, both with rational and naïve agents, depends on the network topology. We provide a direct relationship between exit times, which are a function of model parameters, and parameters associated with the network topology, such as the network diameter and eigenvalue spectrum.

The main findings of this paper can be summarized as follows. Asymptotic learning with rational agents occurs in networks with either exponentially growing neighbourhood sizes and/or bounded network diameter. An example of the former is the family of tree networks, which generally exhibit a characteristic of hierarchies, while an example of the latter is the Erdös-Rényi family of random networks in which the average degree is unbounded. This result is true both for the case in which prior beliefs of neighbours are observable and when they are not. Asymptotic learning with naïve agents occurs under three conditions. (a) Prior beliefs must be informative of the true state of nature. (b) The convergence time to a consensus must be asymptotically bounded. This implies that the second largest eigenvalue of the associated normalized adjacency matrix of the network must be asymptotically bounded away from one. (c) The network must be asymptotically balanced. A balanced network is one in which the total influence each agent exerts on her first-order neighbours is equal to the total influence her neighbours exert on her. Asymptotic balancedness then implies that a network becomes balanced for a large population. Among the network topologies satisfying both conditions (b) and (c) is the Erdös-Rényi family of random networks. Compared to the conditions for asymptotic learning by rational agents, fewer network topologies support asymptotic learning by naïve agents. This finding underscores the superiority of rational agents in aggregating decentralized information when compared to naïve agents.

In addition to the empirical literature discussed above, this paper is related to other strands of existing literature in the following ways. First, it is related to the literature on Bayesian learning with rational agents in social networks (e.g. Gale and Kariv (2003), Rosenberg et al. (2009) and Mueller-Frank (2013)). Just as we do in this paper, these papers also consider simultaneous communication among agents. 4 They, however, consider social learning whereby agents observe the actions of their neighbours, and the primary focus of analysis is on the uniformity and local indifference in the actions chosen by agents at equilibrium. In contrast, our interest is in costly word-of-mouth communication and we focus on asymptotic learning.

Secondly, this paper is closely related to models of information percolation in which rational agents exchange private signals. The notable contributions are Duffie et al. (2009) and Acemoglu et al. (2014). In the model of Duffie et al. (2009), a continuum of agents are randomly matched according to search intensities determined by individual-specific effort to gather information. At each period, the set of agents that happen to meet share their signals. They characterize equilibrium search intensities as a function of information gathered. The model of Acemoglu et al. (2014) is similar to ours in that interactions are governed by a social network and communication is costly. But as in the case of Duffie et al. (2009), agents exchange signals rather than communicate their beliefs. They obtain the interesting result that learning in large societies occurs in the presence of information hubs, which receive and distribute a large amount of information.

4 There also exists a literature on sequential Bayesian learning in which agents make a decision once in a lifetime in an exogenously predefined order. When it is an agent's turn to act, he observes the history of actions of all agents that acted before him. The primary concern of this literature is establishing conditions under which informational cascades and herd behaviour occur. The main contributions are Banerjee (1992), Bikhchandani et al. (1992), Smith and Sorensen (2000) and Acemoglu et al. (2011).

In addition to the fundamental difference that we consider word-of-mouth communication, which enables us to study the effect of prior belief uncertainty on asymptotic learning, our analysis is explicit in characterizing the relationship between equilibrium exit times and general network properties. In doing so, we provide concrete analytical tools for characterizing a range of parameter values within which asymptotic learning occurs. There are two main contributions in this regard. (a) The analytical tools provided pave the way for empirical analysis in such social decision environments. We demonstrate this possibility with three examples of empirical social networks from the Stanford Large Network Dataset Collection (Leskovec and Krevl, 2014). (b) Our analytical tools make it feasible to establish conditions for asymptotic learning within any given network topology. For example, we find that in addition to information hubs being essential, asymptotic learning also occurs in other forms of networks that assume random and/or regular structures, such as the Erdös-Rényi family of random networks and regular-tree networks respectively. Moreover, we show that the desirable properties for asymptotic learning with rational agents are that either the neighbourhood sizes grow exponentially and/or the network diameter is bounded.

Thirdly, this paper is related to the literature on word-of-mouth learning by naïve agents. Notable contributions include Ellison and Fudenberg (1995) and Banerjee and Fudenberg (2004), who consider the case in which agents observe and take a weighted average of their neighbours' payoffs, and DeGroot (1974), Demarzo et al. (2003) and Golub and Jackson (2010), in which agents observe and take a weighted average of their neighbours' opinions. The main outcome in both cases is that learning converges to a consensus in choices made (for the former models) and in opinions (for the latter models). This paper is more closely related to the latter set of models, and in particular to Golub and Jackson (2010), who also study learning in large societies but in which communication is costless.

They find the interesting result that asymptotic learning in such a framework occurs in networks where the total influence of the most influential agent decays with population size. In comparison to this paper, the first fundamental difference is that we consider costly word-of-mouth communication, that is, the case in which the discount rate r > 0. The framework in Golub and Jackson (2010) thus corresponds to the case in which r = 0. In addition, we develop alternative analytical methods for characterizing asymptotic learning that directly relate to the parameters of first-order influence among agents. The measure we develop, the balancedness of a network, is computationally less cumbersome and analytically tractable, making it suitable for empirical analysis. Secondly, and perhaps most importantly, our framework takes into account convergence rates in examining asymptotic learning, as reflected in exit times. We provide examples of cases where asymptotic learning obtains when r = 0 but the convergence time of learning is asymptotically infinite, which makes it an important factor to take into account in such analysis.

Finally, our framework is also related to the literature on exit games, in which players learn from their own private experiences and by observing the actions of other players. Players then choose an optimal time to exit play. This framework has been applied to model investment decisions by firms, and notable contributions are Caplin and Leahy (1994), Bulow and Klemperer (1994), Chamley and Gale (1994), and Murto and Välimäki (2011). Although we have focused on word-of-mouth communication as opposed to observation of others' actions, our framework has the potential to contribute to the aforementioned literature. The predictions of these models are that equilibrium behaviour exhibits two extremes in which either all agents participate in the game or massively exit. Such predictions suitably capture the phenomena observable in market crashes. In our model, however, the likelihood of exiting the game depends on the level of confidence in one's belief.

Starting from an initial state where agents possess heterogeneous information precisions, it is feasible to obtain intermediate (interior) equilibria in which some agents exit while others continue participating in the game. This provides a richer understanding of collective behaviour in such decision environments.

The remainder of the paper is organized as follows. Section II outlines a framework of costly word-of-mouth learning and provides the structure of equilibrium exit times. Section III presents a characterization of asymptotic learning by rational agents when prior beliefs of opponents are observable. Section IV characterizes asymptotic learning by naïve agents. Section V characterizes asymptotic learning by rational agents when prior beliefs of opponents are not observable. A conclusion is offered in Section VI and lengthy proofs are relegated to the Appendix.

II. The model

This section outlines the specifications of a game of informational externalities, in which agents exchange information through word-of-mouth communication.

A. Actions, payoffs, informational and communication structures

We consider an information externality game played by a set N of agents, each of whom chooses an irreversible action $x \in \mathbb{R}$. The payoff (more specifically the loss function) $U(x, \theta)$ to choice x is a function of x and a state of nature θ; that is, $U(x, \theta) = (x - \theta)^2$. We consider this simple structure of $U(x, \theta)$ for the sake of concreteness. As will become clear in the following sections, what matters is that the optimal value of $U(x, \theta)$ must in one way or another reflect individual confidence in their opinion; so $U(x, \theta)$ could take any form provided this property is preserved. The true state of nature θ is unknown to all agents and is drawn from a normal distribution with mean $\bar\theta$ and variance $\sigma^2_\theta$.

Each agent observes a noisy signal that is informative about θ, of the form $s_i = \theta + \varepsilon_i$, where across all i, θ and $\varepsilon_1, \dots, \varepsilon_n$ are independently distributed, and it is common knowledge that $\varepsilon_i \sim N(0, \sigma^2_\varepsilon)$.

Agents exchange information through word-of-mouth communication and their interactions are governed by a network $G^n = (N^n, E^n)$, where for the population of size n, $N^n$ is the set of agents and $E^n$ is the set of links connecting them. The network of interactions imposes constraints on information exchange in that agents can only communicate with those in their neighbourhood. We write $G^n$, with elements $g_{ij}$, for the adjacency matrix of $G^n$. If $g_{ij} > 0$, then j communicates or sends a message to i, and $g_{ij} = 0$ implies otherwise. Communication is simultaneous and deterministic.

The following definitions and notations regarding networks will be used throughout the paper. The t-order neighbourhood of i will be denoted by $N_{i,t}$, such that $N_{i,1}$ is the set of agents that directly communicate to i. We write $b^n_{i,t}$ and $k^n_{i,t}$ for the size of the t-order and t-th neighbourhood respectively. That is, $k^n_{i,t}$ is the number of agents at radius t from i, and $b^n_{i,t}$ is the number of all agents within, and including those at, radius t from i. A network is said to be strongly connected if for every pair of agents i and j, there exists a directed path from i to j and vice versa. The distance, or equivalently the geodesic, $d_{ij}(G^n)$ is the shortest path from i to j. The diameter $d(G^n)$ is the longest geodesic of $G^n$.

B. Dynamics and learning mechanisms

In addition to the constraints imposed by the communication network on information exchange, there is a cost associated with delays in taking an irreversible action. At every period, each agent i chooses between taking an irreversible action and exiting the game, or waiting to collect more information. Waiting is costly in that payoffs are discounted over time with a discount rate of r.

That is, at time t, i's loss function to taking action $x_i$ is $e^{rt}(x_i - \theta)^2$. Under such a set-up, agents endogenously choose the optimal time to exit the game. Our analysis can, however, also directly extend to situations where the time to exit the game is exogenously given. An example of such a situation occurs in electoral processes, where the campaign period is the time interval within which information exchange occurs.

Let $I^n_{i,t}$ denote the information set of i at time t, where $I^n_{i,0} = \{s_i\}$. The information set at any t > 0 depends on the network structure and the learning mechanism. If agents are rational in the sense that they follow the Bayesian updating rule, and in addition the network structure is common knowledge and prior beliefs are observable (we relax the assumption of observable prior beliefs in Section V), then $I^n_{i,t}$ will consist of all signals from agents within the radius of t. That is, $I^n_{i,t} = \{s_i, \{s_j\}_{j=1}^{b^n_{i,t}}\}$.

The above argument follows directly from Bayes' rule. Let $\mu^n_{i,t}$ and $\rho^n_{i,t}$ denote the mean and precision (that is, the reciprocal of the variance $\mathrm{var}^n_{i,t}$) of i's posterior belief at t. From Bayes' rule, j's posterior belief after observing a signal $s_j$ has mean and precision of 5
$$\mu_{j,1} = \frac{\sigma^2_\varepsilon}{\sigma^2_\theta + \sigma^2_\varepsilon}\,\mu_{j,0} + \frac{\sigma^2_\theta}{\sigma^2_\theta + \sigma^2_\varepsilon}\,s_j \quad\text{and}\quad \rho^n_{j,1} = \rho_\theta + \rho_\varepsilon.$$
If j is a first-order neighbour of i and prior beliefs of neighbours are observable, then i can correctly deduce the signal of j after observing j's posterior belief.

5 The relation follows from the Bayesian rule that given $\theta \sim N(\mu, \sigma^2_\theta)$ and $\varepsilon \sim N(0, \sigma^2_\varepsilon)$, if $s = \theta + \varepsilon$ then $E[\theta \mid s] = \frac{\sigma^2_\varepsilon}{\sigma^2_\theta + \sigma^2_\varepsilon}\,\mu + \frac{\sigma^2_\theta}{\sigma^2_\theta + \sigma^2_\varepsilon}\,s$, with a conditional variance of $\mathrm{var}[\theta \mid s] = \frac{\sigma^2_\varepsilon\,\sigma^2_\theta}{\sigma^2_\theta + \sigma^2_\varepsilon}$.

Consequently, at the end of period t the posterior belief of i has mean and precision of
$$\text{(1)}\qquad \mu^n_{i,t} = \frac{\sigma^2_\varepsilon}{b^n_{i,t}\,\sigma^2_\theta + \sigma^2_\varepsilon}\,\mu_{i,0} + \frac{\sigma^2_\theta}{b^n_{i,t}\,\sigma^2_\theta + \sigma^2_\varepsilon}\sum_{j=1}^{b^n_{i,t}} s_j \quad\text{and}\quad \rho^n_{i,t} = \rho_\theta + b^n_{i,t}\,\rho_\varepsilon.$$

If, on the other hand, agents are naïve or boundedly rational, then their information sets at t consist of individual signals and the summary statistic of other agents' private information within the t-th order neighbourhood. More specifically, we assume that agents are not able to disentangle old from new information in the messages of their first-order neighbours. After receiving private signals, each agent updates their prior belief in accordance with Bayes' rule. The resulting private beliefs are then communicated to the first-order neighbours. From the second period onwards, agents incorporate information from their first-order neighbours by taking the weighted average of their posterior beliefs. That is, at period t, i's posterior mean is given by
$$\text{(2)}\qquad \mu^n_{i,t+1} = \sum_{j=1}^{n} g_{ij}(t)\,\mu^n_{j,t}, \qquad i = 1, \dots, n.$$
If $G^n(t)$ is the associated matrix of interactions at time t, that is, the normalized adjacency matrix of $G^n$, then (2) can be written as
$$\text{(3)}\qquad \mu^n_{t+1} = G^n(t)\,\mu^n_t,$$
where $\mu^n_t$ is the vector of posterior means in the t-th period. We assume that individual belief precisions evolve in a similar manner as in (1). The learning dynamics in (3) is often referred to as the DeGroot model, after DeGroot (1974). The main properties of the outcomes of this model have been extensively studied in the literature (e.g. Demarzo et al. (2003), Golub and Jackson (2010) and Jadbabaie et al. (2012)).
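To make the two updating rules concrete, the following minimal Python sketch implements the rational posterior in (1) and the naïve averaging step in (3); the 4-agent line network, the prior mean of zero and the precision values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rational_posterior(prior_mean, signals, rho_theta, rho_eps):
    """Posterior mean and precision as in (1): the agent pools her prior
    with b conditionally i.i.d. signals of precision rho_eps."""
    b = len(signals)
    precision = rho_theta + b * rho_eps
    mean = (rho_theta * prior_mean + rho_eps * np.sum(signals)) / precision
    return mean, precision

def degroot_step(G, mu):
    """One naive update as in (3): beliefs are replaced by the weighted
    average of neighbours' current beliefs (G row-stochastic)."""
    return G @ mu

# Assumed example: 4 agents on a line, row-normalised adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
G = A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
theta, rho_theta, rho_eps = 1.0, 1.0, 1.0
signals = theta + rng.normal(0, 1 / np.sqrt(rho_eps), size=4)

# Rational agent 0 after one communication round observes her own signal
# plus that of her first-order neighbour (agent 1).
print(rational_posterior(0.0, signals[[0, 1]], rho_theta, rho_eps))

# Naive agents iterate (3) from their private posterior means.
mu = (rho_theta * 0.0 + rho_eps * signals) / (rho_theta + rho_eps)
for _ in range(50):
    mu = degroot_step(G, mu)
print(mu)  # drifts towards a consensus weighted by the influence vector
```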

Contrary to the analysis in the literature, we are interested in learning outcomes when waiting to collect information is costly (i.e. when the discount rate r > 0), and in establishing conditions for correct learning in large societies.

Given information set $I^n_{i,t}$, i's optimal action if he decides to act is given by
$$x^n_{i,t} = \arg\min_{x \in \mathbb{R}} E\left[(x - \theta)^2 \mid I^n_{i,t}\right] = E\left[\theta \mid I^n_{i,t}\right] = \mu^n_{i,t},$$
and the payoff corresponding to the optimal action is
$$U_{i,t}(I^n_{i,t}) = E\left[e^{rt}(x^n_{i,t} - \theta)^2 \mid I^n_{i,t}\right] = e^{rt}\,\mathrm{var}^n_{i,t} = e^{rt}\,\frac{1}{\rho_\theta + b^n_{i,t}\,\rho_\varepsilon},$$
where $\mathrm{var}^n_{i,t}$ is the variance of i's posterior belief at t. If, on the other hand, i decides to wait for one more period before taking an irreversible action, then the respective expected loss function is $U_{i,t+1}(I^n_{i,t+1}) = E\left[e^{r}(x - \theta)^2 \mid I^n_{i,t+1}\right]$.

C. Equilibrium of information externality game

The dynamic process described above entails an information externality game. Individual actions depend on those of other agents through information sharing. For each i and t, let $\mathcal{I}^n_{i,t}$ denote the set of all possible information sets. Agent i's action at t is then a mapping $a^n_{i,t} : \mathcal{I}^n_{i,t} \to \mathbb{R} \cup \{\text{wait}\}$, from an information set to an action set. We write $a^n_t$ for the action profile at time t, and $a^n_{-i,t}$ for an action profile that excludes i's strategy. Then the respective value function for i at t is
$$V^n_{i,t}(I^n_{i,t}) = \begin{cases} e^{rt}\,\mathrm{var}^n_{i,t} & \text{when } a^n_{i,t} = x, \\ e^{r}\,E\left[U_{i,t+1}(I^n_{i,t+1}) \mid I^n_{i,t}\right] & \text{when } a^n_{i,t} = \text{wait}. \end{cases}$$

Given the value function above, an equilibrium of the information externality game is then defined as follows.

Definition 1. An action profile $a^{n,*}$ is a pure-strategy perfect Bayesian equilibrium of the information externality game if for every $i \in N$, every t and every action profile $a^{n,*}_{-i}$, action $a^{n,*}_{i,t}$ yields to i an expected payoff equal to i's value function at t, $V^n_{i,t}(I^n_{i,t})$. We denote the set of equilibria of the game by $A^*(G^n)$.

An equilibrium strategy profile induces an equilibrium timing, or equivalently exit-time, profile $t^{n,a}$. Each $t^{n,a}_i$ is the time at which i takes an irreversible action and exits the game. After taking an irreversible action, i stops acquiring new information and can only transmit information acquired until $t = t^{n,a}_i$. Since an agent's t-order neighbourhood is influenced by others' actions, we then write $b^{n,a}_{i,t}$ for the t-order neighbourhood size of i when the strategy profile is $a^n$.

The exit times are a function of the discount rate and the precision of agents' beliefs, which is itself a function of the network topology. Lemma 1 below shows the relationship between these components.

Lemma 1: Suppose that the precision of agents' beliefs evolves as in (1). The exit time for each $i \in N$ is given by the solution to the equation
$$\text{(4)}\qquad r\,b^{n,a}_{i,t} - \frac{d}{dt}\left(b^{n,a}_{i,t}\right) + \frac{r\,\rho_\theta}{\rho_\varepsilon} = 0,$$
where $\frac{d}{dt}b$ is the derivative of b with respect to t.

PROOF: See Appendix VII.A

The size of an agent's t-order neighbourhood $b^{n,a}_{i,t}$ is a non-decreasing function of time. For each i, the variation of $b^{n,a}_{i,t}$ with time depends on the network topology and i's position in the network. After a relation between $b^{n,a}_{i,t}$ and t is established, $t^{n,a}_i$ can be obtained by solving (4).
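Since (4) rarely admits a closed form for an arbitrary neighbourhood-growth function, a numerical root-finder is often the most direct route. The sketch below is one such illustration, using scipy, with the linear growth function $b(t) = 4t - 1$ of the grid example discussed next and assumed parameter values.

```python
from scipy.optimize import brentq

def exit_time(b, b_prime, r, rho_theta, rho_eps, t_max=1e3):
    """Solve equation (4): r*b(t) - b'(t) + r*rho_theta/rho_eps = 0."""
    g = lambda t: r * b(t) - b_prime(t) + r * rho_theta / rho_eps
    return brentq(g, 1e-9, t_max)

# Neighbourhood growth of the grid in Figure 1 (discussed next): b(t) = 4t - 1.
b = lambda t: 4 * t - 1
b_prime = lambda t: 4.0

# Assumed parameter values matching the worked example below.
print(exit_time(b, b_prime, r=0.7, rho_theta=1.0, rho_eps=1.0))
# about 1.43, i.e. the closed form 1/r - (rho_theta/rho_eps - 1)/4 in (5)
```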

Consider the two infinite regular networks in Figures 1 and 2. For the network in Figure 1, it is easy to see that for each i, $b^{n,a}_{i,t} = 4t - 1$. That is, at t = 1 each i receives three signals from the first-order neighbours, and from $t \geq 2$ onward receives four signals periodically. Substituting into (4) and solving for $t^{n,a}_i$ yields
$$\text{(5)}\qquad t^{n,a}_i = \frac{1}{r} - \frac{1}{4}\left(\frac{\rho_\theta}{\rho_\varepsilon} - 1\right).$$
The precisions $\rho_\theta$, $\rho_\varepsilon$ and the discount rate r are all exogenous parameters. The exit times are thus readily computable once the network topology is known. The exit time decreases with the discount rate, in that the larger the cost of waiting, the earlier agents take an irreversible action and exit the game. It is a decreasing function of agents' prior belief precision, in that the less confident they are in their prior belief, the longer they wait, or the more signals it takes, to be convinced in their opinion regarding the correct choice.

Figure 1: A 3-regular infinite grid network.

Figure 2: A 3-regular infinite tree network.

For the network in Figure 2, $b^{n,a}_{i,t} = 3\,(2^t - 1)$. In fact, for a family of tree networks such as that in Figure 2, if the first-order degree is k then $b^{n,a}_{i,t} = k\left((k-1)^t - 1\right)$.

Substituting into (4) and solving for $t^{n,a}_i$ yields
$$\text{(6)}\qquad t^{n,a}_i = \frac{\ln\left(1 - \frac{\rho_\theta}{3\rho_\varepsilon}\right) - \ln\left(1 - \frac{\ln(2)}{r}\right)}{\ln(2)}.$$
Clearly, what matters in determining the exit time is not just the first-order connectivity or degree of an agent, but rather the growth of their t-order neighbourhood. For the two networks in Figures 1 and 2, although the first-order degree of each agent is three in both networks, the exit times are not equivalent. For example, when r = 0.7 and $\rho_\theta = \rho_\varepsilon = 1$, $t^{n,a}_i = 1.4$ and 4.2 for Figures 1 and 2 respectively. The neighbourhood of the network in Figure 2 grows exponentially, as opposed to that of Figure 1, which is linear. Agents in Figure 2 benefit from waiting for a few more periods as the number of signals grows exponentially.

Through their influence on the exit times, the discount rate, the precisions and the network topology all influence information aggregation. The exit times are interdependent in that, as some agents exit earlier than others, this affects the speed and how much information the remaining agents receive. Consequently, these externalities determine whether or not correct learning (as defined in the next section) occurs.

D. Learning in large societies

Given the learning mechanism, equilibrium strategies and network structure, we are interested in determining whether correct learning occurs at equilibrium; equivalently, the conditions under which equilibrium behaviour leads to correct aggregation of decentralized information. We particularly focus on the network topologies that lead to correct learning. We define correct learning, or more accurately asymptotic learning, as the convergence in probability to the correct equilibrium action as prescribed by the true state of nature; that is, whether equilibrium choices coincide with those prescribed by the true state of nature when the population size is infinitely large.

Definition 2. Given a sequence of networks $\{G^n\}_{n \geq 2}$, asymptotic learning is said to occur if
$$\lim_{n \to \infty} P\left(\left|x^{n,a}_i - \theta\right| > \epsilon\right) = 0$$
for all $i \in N$ and every $\epsilon > 0$, where $x^{n,a}_i$ is the choice made by i after exiting play.

Note that the correct action for the model considered here is equivalent to the true state of nature θ. Our next objective is then to characterize, for each learning mechanism, the conditions for asymptotic learning given a sequence of networks.

III. Asymptotic learning: rational agents

This section establishes conditions for asymptotic learning by rational agents. We focus on the relationship between belief confidence, the discount rate and network characteristics in influencing correct learning. As highlighted in Section II.C, the interdependence between the exit times of agents (induced by perfect Bayesian equilibrium) through informational externalities influences information aggregation and hence asymptotic learning. The key to asymptotic learning is that no agents are forced to exit the game too early. By too early we mean before fully aggregating information from other agents. The analysis of asymptotic learning thus anchors on comparing, for each agent, their exit time to the time it would take to aggregate information if waiting were not costly. The time it takes an agent to fully aggregate information when waiting is not costly is related to the maximum geodesic of that agent. The following theorem establishes this relationship.

Theorem 1: Given a sequence $\{G^n\}_{n \geq 2}$ of strongly connected communication networks, let $\{d_i(G^n)\}_{i \in N, n \geq 2}$ be the corresponding sequence of agents' maximum geodesics. Then asymptotic learning with rational agents and observable priors obtains if

$$\text{(7)}\qquad \frac{1}{n}\sum_{i=1}^{n}\left(t^{n,a}_i - d_i(G^n)\right) \to 0 \quad \text{as } n \to \infty, \text{ in probability.}$$

PROOF: See Appendix VII.B

Theorem 1 establishes a condition under which asymptotic learning occurs. It can equivalently be stated as $\lim_{n \to \infty} \frac{t^{n,a}_i}{d_i(G^n)} = 1$ for all i. The strength of Theorem 1 lies in the fact that for every sequence of networks one can derive a range of parameter values under which asymptotic learning occurs. For a given network structure, the exit times $t^{n,a}_i$ can be derived from (4) and the geodesics $d_i(G^n)$ directly from the network. Once expressions for these two measures are derived, the parameter values of r, $\rho_\varepsilon$ and $\rho_\theta$ within which asymptotic learning obtains are those that satisfy condition (7). In the next sections, we demonstrate the application of Theorem 1 to various families of networks, starting with deterministic, then random/empirical networks.

A. Deterministic networks

In this section, we show how to determine whether or not asymptotic learning occurs in a given network where r > 0. Consider the network of Figure 1: the exit time for all agents is given by (5), and the diameter when the network is finite and of size n is $d(G) = \frac{n}{2}$. An agent i with the shortest maximum geodesic thus has $d_i(G) = \frac{n}{4}$. From Theorem 1, asymptotic learning occurs if
$$\text{(8)}\qquad \lim_{n \to \infty} \frac{t^{n,a}_i}{d_i(G^n)} = \lim_{n \to \infty} \frac{t^{n,a}_e}{d(G^n)} = 1 \quad \text{for all } i \in N,$$
where $t^{n,a}_e = \max_{i \in N} t^{n,a}_i$ is the exit time of agents at opposite ends of the network diameter. Condition (8) implies that asymptotic learning occurs if and only if r = 0, and fails in this family of networks whenever r > 0.
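To see the failure concretely, a short sketch evaluates the ratio in (8) for the finite version of the network in Figure 1, using the closed form (5) for the exit time and $d(G^n) = n/2$ for the diameter; the parameter values are assumptions for illustration.

```python
rho_theta = rho_eps = 1.0
for r in (0.1, 0.7):
    t_exit = 1 / r - 0.25 * (rho_theta / rho_eps - 1)   # equation (5)
    for n in (100, 10_000, 1_000_000):
        ratio = t_exit / (n / 2)                         # t / d(G^n), d = n/2
        print(r, n, ratio)
# For any fixed r > 0 the exit time is constant in n while the diameter
# grows linearly, so the ratio vanishes and condition (8) fails.
```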

For the network in Figure 2, the exit times are given by (6). The diameter when the network is finite and of size n is given by $\frac{2\ln\left(\frac{n+2}{3}\right)}{\ln 2}$. Clearly, in this network, asymptotic learning occurs 6 whenever $r = \ln 2$. The following corollary generalizes this result to infinite k-regular tree networks. 7

Corollary 1: Asymptotic learning with rational agents and observable prior beliefs occurs in k-regular infinite tree networks whenever $r = \ln(k-1)$.

Corollary 1 shows that asymptotic learning occurs in tree networks, and this result generally applies to all networks that exhibit the property of exponentially growing neighbourhood sizes, just as tree networks do. The underlying reason is that when $r = \ln(k-1)$, the exponential growth of the neighbourhood size balances out the exponential decay of the payoff loss associated with the cost of waiting. This makes agents indifferent between waiting to collect more information and taking an irreversible action to exit the game. Examples of tree networks include hierarchical networks, which are typically the organizational structures found in both public and corporate institutions. Acemoglu et al. (2014) show that under a similar set-up, asymptotic learning occurs in a special form of random hierarchical networks that are bounded and exhibit homophily; that is, agents are connected both to those in the upper hierarchy and to those at the same hierarchical level.

6 This follows from Theorem 1, that asymptotic learning occurs when
$$\lim_{n \to \infty} \frac{t^{n,a}_e}{d(G^n)} = \lim_{n \to \infty} \frac{\ln\left(1 - \frac{\rho_\theta}{3\rho_\varepsilon}\right) - \ln\left(1 - \frac{\ln(2)}{r}\right)}{2\ln\left(\frac{n+2}{3}\right)} = 1,$$
a condition that is satisfied whenever $r = \ln 2$.

7 The proof follows from the fact that for a k-regular infinite tree network, $b^{n,a}_{i,t} = k\left((k-1)^t - 1\right)$ for each $i \in N$. Substituting into (4) yields
$$t^{n,a}_i = \frac{\ln\left(1 - \frac{\rho_\theta}{k\rho_\varepsilon}\right) - \ln\left(1 - \frac{\ln(k-1)}{r}\right)}{\ln(k-1)}$$
for each i. The diameter when the network is finite and of size n is given by $\frac{k-1}{\ln(k-1)}\ln\left(\frac{n + (k-1)}{k}\right)$. It follows from condition (8) that asymptotic learning occurs if and only if $r = \ln(k-1)$.

Here, we have shown that homophily is not a necessary condition for asymptotic learning in hierarchical networks.

B. Random and empirical networks

This section demonstrates how to derive parameter values of r within which asymptotic learning occurs in random networks such as the Erdös-Rényi and scale-free families of networks. The Erdös-Rényi family of random networks $G^n = (N, p)$ is constructed by connecting every pair of nodes randomly and independently with probability p. Chung and Lu (2001) show that if the average degree np grows with n, that is $np \to \infty$, then the diameter is approximately equal to $\ln n / \ln(np)$. Moreover, if $np/\ln n > 8$, then the diameter is concentrated around two values. This implies that the diameter for this family of networks is bounded. A parameter space thus exists for r > 0 within which asymptotic learning is feasible in the Erdös-Rényi family of random networks, provided $np \to \infty$ and the network is strongly connected. Chung and Lu (2001) also show that when $np > 1$, the diameter of the Erdös-Rényi family of random networks is approximately $\ln n$. Similarly, for scale-free networks (the topology that is representative of many real-world networks), Newman et al. (2001) show the diameter to be approximately $\ln n$. A scale-free network $G^n = (N, \gamma)$ has a degree distribution that follows a power law and results from dynamic processes that exhibit preferential attachment. That is, $p(k) = k^{-\gamma}$, where p(k) is the number of agents with degree k.

The derivation of the neighbourhood size growth function, and hence the exit times, can be done empirically. For the Erdös-Rényi family of random networks, we demonstrate this process with network parameter values n = 5000 and p = . Through numerical simulations, we find that for these parameter values the network diameter is on average 7.
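A minimal sketch of this kind of simulation, using networkx with assumed values of n and p (smaller than those in the paper, purely for speed), computes the diameter and the neighbourhood growth $b_{i,t}$ of a randomly chosen agent.

```python
import networkx as nx
import numpy as np

# Assumed illustrative parameters (the paper uses n = 5000).
n, p = 2000, 0.005
G = nx.fast_gnp_random_graph(n, p, seed=1)
# Restrict to the largest connected component before measuring distances.
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

print("average degree ~", np.mean([d for _, d in G.degree()]))
print("diameter      =", nx.diameter(G))   # bounded when np grows with n

# Growth of the t-order neighbourhood b_{i,t} for one agent i:
i = list(G.nodes())[0]
lengths = nx.single_source_shortest_path_length(G, i)
for t in range(1, nx.eccentricity(G, i) + 1):
    b_it = sum(1 for d in lengths.values() if d <= t)
    print(t, b_it)
```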

The growth of neighbourhood sizes follows a log-normal distribution. That is, $\ln\left(b^{n,a}_{i,t}\right) = \int_1^t f(\tau)\,d\tau$, where
$$f(t) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(t - \mu)^2}{2\sigma^2}\right),$$
with σ and µ the standard deviation and mean respectively. Figure 3 shows the quantile-quantile plot of the log values of $b^{n,a}_{i,t}$ for a randomly chosen agent. The distribution of points around the 45-degree line implies that the neighbourhood growth indeed follows a log-normal distribution. The following lemma shows the structure of the exit time for the Erdös-Rényi family of random networks, and any other networks whose neighbourhood size growth follows a log-normal distribution.

Figure 3: A quantile-quantile plot of the log values of $b^{n,a}_{i,t}$ for a randomly chosen agent in an Erdös-Rényi random network.

Lemma 2: For networks in which $b^{n,a}_{i,t}$ is log-normally distributed, the exit times are of the form
$$t^{n,a}_i = A + Br + C\ln r,$$
where A, B and C are constants that depend on the parameter values of $\rho_\theta$, $\rho_\varepsilon$, σ and µ.

PROOF: See Appendix VII.C

Corollary 2: For a sequence $\{G^n\}_{n \geq 2}$ of strongly connected networks whose neighbourhood sizes grow log-normally, let the network diameter be asymptotically bounded. Under rational learning with observable priors, there exists a real number $\bar r > 0$ such that asymptotic learning occurs whenever $r < \bar r$.

PROOF: Given that for such networks $t^{n,a}_i = A + Br + C\ln r$, it follows from Theorem 1 that asymptotic learning occurs whenever
$$\lim_{n \to \infty} \frac{t^{n,a}_e}{d(G^n)} = \lim_{n \to \infty} \frac{A + Br + C\ln r}{d(G^n)} = 1.$$
If the network diameter is asymptotically bounded, that is, $\lim_{n \to \infty} d(G^n) = c$, where c is some constant, then there exists a value of $r = \bar r > 0$ below which asymptotic learning occurs.

Corollary 2 shows that what matters for asymptotic learning in networks with log-normally growing neighbourhood sizes is for the network diameter to be asymptotically bounded. An example of such networks is the Erdös-Rényi family of random networks in which $np \to \infty$, as discussed above. It is easy to show that this result also holds for networks in which the neighbourhood size growth follows a normal distribution.

In the case of scale-free networks, the neighbourhood size growth follows a type II Pareto distribution. That is, $b^{n,a}_{i,t} = 1 - f(t)$, where
$$f(t) = \left(1 + \frac{t - 1}{t_m}\right)^{-\alpha}.$$
A parameter of the distribution, α > 0, determines its shape, and $t_m > 0$ is the minimum possible value of t. The following corollary provides the structure of exit times for networks whose neighbourhood size growth follows a type II Pareto distribution, and shows that asymptotic learning fails in scale-free networks.

Corollary 3: For networks in which $b^{n,a}_{i,t}$ follows a type II Pareto distribution, exit times are of the form
$$t^{n,a}_i = 1 - t_m\left(1 - \left(\frac{r}{1+r}\right)^{1/\alpha}\right)\frac{\rho_\varepsilon + \rho_\theta}{\rho_\varepsilon}.$$
This implies that asymptotic learning fails in scale-free networks whenever r > 0.

PROOF: See Appendix VII.D

Unlike the case of the Erdös-Rényi family of random networks, where there exists a range of values of the parameter p within which the network diameter is asymptotically bounded, scale-free networks have an asymptotically infinite network diameter. This makes complete aggregation of decentralized information infeasible in such networks. In the supplementary appendix VII.I, we explore the properties of neighbourhood size growth for three empirical networks from the Stanford Large Network Dataset Collection (Leskovec and Krevl, 2014).

We conclude this section by highlighting the novelty of the above analysis in relation to the existing literature. As already mentioned above, a closely related study is Acemoglu et al. (2014). In addition to the fundamental differences discussed in Section I, we have been explicit in laying out the relationship between exit times and network variables. We have developed tools for characterizing conditions on the network topology and parameter values under which asymptotic learning occurs that can be applied to any family of networks. Acemoglu et al. (2014) find that asymptotic learning obtains in the presence of information hubs which receive and distribute a large amount of information. Here, we have been able to show that the key properties for asymptotic learning are that either the neighbourhood sizes grow exponentially or the diameter of the network is asymptotically bounded. Hence, asymptotic learning can occur even in random networks provided either of these conditions is satisfied.

IV. Asymptotic learning: Naïve agents

This section characterizes conditions for asymptotic learning by naïve agents. We start by briefly reviewing properties of the equilibrium outcomes of the learning dynamics given by (3). A well-known result for such a dynamic process is that a consensus in beliefs emerges provided the network is not disconnected; the network need not be strongly connected. For the case of dynamic networks, Azomahou and Opolot (2014) show that a consensus obtains if for every interval $[t, t+\tau)$, the sequence of networks $\{G^n_j\}_{j=t}^{t+\tau}$ is jointly connected. For a given time interval $[t, t+\tau)$, a corresponding sequence of networks $G^n_t, G^n_{t+1}, \dots, G^n_{t+\tau}$ is said to be jointly connected if the network resulting from the union $\bigcup_{j=t}^{t+\tau} G^n_j$ is connected. The following lemma summarizes these results.

Lemma 3: For some integer $\tau \geq 0$, if for every time interval $[t, t+\tau)$ the network $\bigcup_{j=t}^{t+\tau} G^n_j$ is connected, then
$$\text{(9)}\qquad \mu^n_{i,\infty} = \left[\lim_{T \to \infty}\prod_{t=0}^{T} G^n(t)\,\mu^n_1\right]_i = (v^n)^{\top}\mu^n_1 \quad \text{for every } i \in N,$$
where $(v^n)^{\top}$ is the transpose of the vector $v^n$.

The implication of Lemma 3 is that for dynamic networks, the network $G_t$ at any given time t need not be connected provided it is jointly connected with either the preceding or succeeding sequence of networks. The vector $v^n$ is the influence vector and reflects the level of influence each agent exerts on the resulting beliefs. The overall influence of each agent, $v^n_i$, depends not only on the level of influence that i exerts on her first-order neighbours, but also on her second-order neighbours, third-order neighbours, and so forth. 8

8 See Demarzo et al. (2003) and Golub and Jackson (2010) for a detailed study of the properties of the influence vector.

Figure 4: (a) A connected network, $G_1$. (b) A network with a prominent family, $G_2$.

Consider the example of the two graphs in Figures 4a and 4b; call them $G_1$ and $G_2$ respectively. The corresponding influence vectors are, respectively, $v_1 = (0.363, 0.204, 0.191, 0.073, 0.121, 0.048)$ and $v_2 = (0, 0, 0, 0.359, 0.381, 0.260)$. From $v_1$ and $v_2$, the overall influence of each agent clearly depends on their first-order connectivity, the connectivity of their second-order neighbours, and so forth. Take for example the network $G_1$, in which agents d and f both observe the posterior beliefs of only one other agent and both communicate to one other agent. Although the first-order neighbour of agent f attaches more weight to f's messages than does the first-order neighbour of agent d, agent d is more influential than f overall. This is precisely the effect of being connected to other agents who are themselves highly connected. In the case of network $G_2$, there are two subgroups (that is, $\{a, b, c\}$ and $\{d, e, f\}$), each of which forms a complete subgroup. Inter-subgroup communication, on the other hand, is unidirectional: members of subgroup $\{a, b, c\}$ receive the messages of those in subgroup $\{d, e, f\}$ and not vice versa. As a consequence, a consensus emerges in the long run in which members of subgroup $\{a, b, c\}$ adopt the private beliefs of subgroup $\{d, e, f\}$.
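Influence vectors of this kind can be computed as the left unit eigenvector of the row-stochastic interaction matrix. Because the exact edge weights of Figure 4 are not reproduced here, the sketch below uses an assumed weight matrix with the same qualitative structure as $G_2$ (the subgroup $\{d, e, f\}$ sends but does not receive messages); it is an illustration, not the authors' computation.

```python
import numpy as np

def influence_vector(G, tol=1e-12, max_iter=10_000):
    """Left eigenvector v of a row-stochastic G with v >= 0 and sum(v) = 1,
    i.e. v' G = v'; when a consensus forms, lim_t G^t mu equals (v' mu) * 1."""
    v = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(max_iter):
        v_new = v @ G
        if np.linalg.norm(v_new - v, 1) < tol:
            break
        v = v_new
    return v_new / v_new.sum()

# Assumed weights mimicking G_2: {a,b,c} listen to {d,e,f}, not vice versa.
G2 = np.array([
    [0.3, 0.2, 0.2, 0.1, 0.1, 0.1],   # a
    [0.2, 0.3, 0.2, 0.1, 0.1, 0.1],   # b
    [0.2, 0.2, 0.3, 0.1, 0.1, 0.1],   # c
    [0.0, 0.0, 0.0, 0.4, 0.3, 0.3],   # d
    [0.0, 0.0, 0.0, 0.3, 0.4, 0.3],   # e
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4],   # f
])
print(influence_vector(G2))  # first three entries vanish, as in v_2
```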

This example highlights the effect of the presence of prominent families. For the learning dynamics in this paper, however, agents can exit the game before fully aggregating information, such that the influence vectors can differ for each i depending on the perfect Bayesian equilibrium. To distinguish this characteristic of the learning process, we write $v^{n,a}_i$ for the vector of influence of all agents on i's beliefs at the end of the learning process when the equilibrium choice profile is a. Where no confusion arises, we write $v^{n,a}$ to denote an influence vector with a consensus, that is, whenever $v^{n,a}_i$ is identical for all i.

The above two examples also serve to illustrate how the network topology may affect asymptotic learning. Consider the case in which no agent exits the game before fully aggregating information. Under such a setting, the irreversible action taken by agent i at the end of the learning process is given by $x^{n,a}_i = \lim_{t \to \infty} x^{n,a}_{i,t} = E\left[\theta \mid I^{n,a}_{i,\infty}\right] = \mu^{n,a}_{i,\infty} = (v^{n,a}_i)^{\top}\mu^n_1$, where $I^{n,a}_{i,\infty}$ and $\mu^{n,a}_{i,\infty}$ are the information set and posterior mean of i at the end of the learning process respectively. Recall also that $\mu_{j,1} = \frac{\sigma^2_\varepsilon}{\sigma^2_\theta + \sigma^2_\varepsilon}\,\mu_{j,0} + \frac{\sigma^2_\theta}{\sigma^2_\theta + \sigma^2_\varepsilon}\,s_j$. It then follows that
$$x^{n,a}_i = \frac{1}{\sigma^2_\theta + \sigma^2_\varepsilon}\left[\sigma^2_\varepsilon\sum_{j=1}^{n} v^{n,a}_{ij}\,\mu_{j,0} + \sigma^2_\theta\sum_{j=1}^{n} v^{n,a}_{ij}\,s_j\right].$$
Clearly, whether or not asymptotic learning occurs, that is, whether $x^{n,a}_i = \theta$ (the true state of nature), depends on the values of the influence vector. Therefore, in addition to the restrictions on network topology imposed by exit times, there are also restrictions imposed by the influence vector. Before stating the main result in this regard, the following definitions are used in the subsections that follow.

Definition 3. A matrix $G^n$ is said to be doubly stochastic if $\sum_{j=1}^{n} g^n_{ij} = \sum_{i=1}^{n} g^n_{ij} = 1$ for all $(i, j) \in N$.

Definition 4. (a) A communication network $G^n$ is said to be perfectly balanced if the corresponding transition matrix $G^n$ is doubly stochastic.

(b) A sequence of networks $\{G^n\}_{n \geq 2}$ is said to be asymptotically balanced if $\lim_{n \to \infty} G^n = S$, where S is an infinite doubly stochastic matrix.

The following theorem provides conditions on the network topology and the prior belief distribution for asymptotic learning with naïve agents to occur.

Theorem 2: Let $\{G^n\}_{n \geq 2}$ be a sequence of networks that are jointly connected; then asymptotic learning with naïve agents occurs if and only if the following conditions hold:

(i) $\mu_0 \sim N(\boldsymbol{\theta}, \sigma^2_p I)$, where $\boldsymbol{\theta}$ is an n-dimensional vector of θ, $\mu_0$ is the vector of prior beliefs and I is the identity matrix.

(ii) $\lim_{n \to \infty} t^{n,a}_i = t_s = \lim_{n \to \infty} \frac{\ln\left(2\epsilon\, v^n_{\min}\right)}{\ln \lambda_2(G^n)}$ for all i, where $\lambda_2(G^n)$ is the second largest eigenvalue of the transition matrix associated with the network, $v^n_{\min} = \min_{i \in N} v^{n,a}_i$, and $\epsilon < \frac{1}{2}$ is an arbitrary real number that defines how close the vector $\mu^{n,a}_t$ of posterior beliefs at t is to the influence vector $v^{n,a}$.

(iii) $\lim_{n \to \infty} G^n = S$.

PROOF: See Appendix VII.E.

Theorem 2 highlights three main conditions for asymptotic learning with naïve agents. The first relates to the structure of prior beliefs. Unlike in the case of rational learning, where the distribution of prior beliefs can be independent of the true state of nature, asymptotic learning with naïve agents occurs only if prior beliefs are informative about the true state of nature. That is, the prior belief of each agent must be normally distributed with mean equal to the true state of nature. This condition generally highlights the superiority of rational over naïve agents in signal interpretation and hence in aggregating decentralized information.
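Condition (ii) can be evaluated directly for a given finite network. The sketch below computes $\lambda_2$ and the consensus time $t_s = \ln(2\epsilon v^n_{\min})/\ln\lambda_2(G^n)$ for an assumed example, a ring in which each agent keeps weight 1/2 on herself and 1/4 on each neighbour; both the network and the value of ε are illustrative choices.

```python
import numpy as np

def consensus_time(G, eps=0.25):
    """lambda_2 and t_s = ln(2*eps*v_min)/ln(lambda_2) for a row-stochastic G,
    as in condition (ii) of Theorem 2."""
    eigvals = np.sort(np.real(np.linalg.eigvals(G)))[::-1]
    lam2 = eigvals[1]                      # second largest eigenvalue
    v = np.ones(G.shape[0]) / G.shape[0]   # influence vector by power iteration
    for _ in range(20_000):
        v = v @ G
    v /= v.sum()
    t_s = np.log(2 * eps * v.min()) / np.log(lam2)
    return lam2, v.min(), t_s

# Assumed example: a ring where each agent keeps weight 1/2 on herself and
# 1/4 on each of her two neighbours (a doubly stochastic matrix).
n = 50
G = 0.5 * np.eye(n)
for i in range(n):
    G[i, (i - 1) % n] = G[i, (i + 1) % n] = 0.25

print(consensus_time(G))  # lambda_2 close to one, so t_s grows with n
```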

The second condition concerns the relationship between exit times and network properties, and in particular the second largest eigenvalue of the transition matrix associated with the network. It states that the exit times of all agents must be asymptotically equal to the time $t_s$ at which a consensus obtains, or equivalently, the time learning stops when the discount rate is zero. The second largest eigenvalue of graphs is a well-studied concept in the graph theory literature. Generally, the more sparsely connected a network is, the larger is $\lambda_2$. For example, a complete network (that in which each agent is connected to every other agent) of size n has $\lambda_2 = \frac{1}{n-1}$, while a cyclic network (in which agents are arranged around a circle, each connected to two neighbours on the left and right) of size n has $\lambda_2 = 1$ for odd values of n. Since for a complete network $v^n_i = v^n_{\min} = \frac{1}{n}$ for all i, it follows that $t_s = 1$. This value is fairly accurate since in a complete network agents update their beliefs only once, and in the second period a consensus obtains.

The third condition relates to the general properties of the network structure. That is, asymptotic learning occurs if and only if the network is asymptotically balanced. To fully characterize the implications of condition (iii) of Theorem 2, we develop a measure of balancedness for an arbitrary network in the next subsection.

A. Asymptotically balanced networks

To begin with, balanced networks are also asymptotically balanced, but not vice versa. To characterize the class of networks that are asymptotically balanced, we need to construct a measure of balancedness of a network/matrix, which we denote by $\phi^n$.

Definition 5. Let $S^n$ be the closest doubly stochastic matrix, in terms of the Frobenius norm, to the matrix $G^n$. Then $G^n$ is said to be $\phi^n$-balanced if, given $S^n$, $\|S^n - G^n\|_1 = \phi^n$, where $\|\cdot\|_1$ is the $L_1$ norm.
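Definition 5 can be operationalized numerically. The sketch below approximates the Frobenius-closest doubly stochastic matrix $S^n$ by alternating projections onto the row-sum, column-sum and nonnegativity constraints, a standard heuristic that is assumed here rather than taken from the paper, and then reports $\phi^n = \|S^n - G^n\|_1$ read as an entrywise norm.

```python
import numpy as np

def nearest_doubly_stochastic(G, n_iter=2000):
    """Heuristic projection of G onto the set of doubly stochastic matrices
    via alternating projections; approximates the S^n of Definition 5."""
    n = G.shape[0]
    S = G.astype(float).copy()
    for _ in range(n_iter):
        S = S - (S.sum(axis=1, keepdims=True) - 1) / n   # rows sum to one
        S = S - (S.sum(axis=0, keepdims=True) - 1) / n   # columns sum to one
        S = np.clip(S, 0.0, None)                        # keep entries >= 0
    return S

def balancedness(G):
    """phi^n = ||S^n - G^n||_1, with the L1 norm read entrywise (assumption)."""
    return np.abs(nearest_doubly_stochastic(G) - G).sum()

# Row-stochastic example: a star in which the hub averages the spokes but
# each spoke puts all weight on the hub (far from balanced).
n = 6
G = np.zeros((n, n))
G[0, 1:] = 1.0 / (n - 1)
G[1:, 0] = 1.0
print(balancedness(G))                    # large phi^n: the star is unbalanced
print(balancedness(np.ones((n, n)) / n))  # already doubly stochastic: phi^n ~ 0
```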

Bayesian Learning in Social Networks

Bayesian Learning in Social Networks Bayesian Learning in Social Networks Asu Ozdaglar Joint work with Daron Acemoglu, Munther Dahleh, Ilan Lobel Department of Electrical Engineering and Computer Science, Department of Economics, Operations

More information

Social Learning with Endogenous Network Formation

Social Learning with Endogenous Network Formation Social Learning with Endogenous Network Formation Yangbo Song November 10, 2014 Abstract I study the problem of social learning in a model where agents move sequentially. Each agent receives a private

More information

Preliminary Results on Social Learning with Partial Observations

Preliminary Results on Social Learning with Partial Observations Preliminary Results on Social Learning with Partial Observations Ilan Lobel, Daron Acemoglu, Munther Dahleh and Asuman Ozdaglar ABSTRACT We study a model of social learning with partial observations from

More information

NBER WORKING PAPER SERIES BAYESIAN LEARNING IN SOCIAL NETWORKS. Daron Acemoglu Munther A. Dahleh Ilan Lobel Asuman Ozdaglar

NBER WORKING PAPER SERIES BAYESIAN LEARNING IN SOCIAL NETWORKS. Daron Acemoglu Munther A. Dahleh Ilan Lobel Asuman Ozdaglar NBER WORKING PAPER SERIES BAYESIAN LEARNING IN SOCIAL NETWORKS Daron Acemoglu Munther A. Dahleh Ilan Lobel Asuman Ozdaglar Working Paper 14040 http://www.nber.org/papers/w14040 NATIONAL BUREAU OF ECONOMIC

More information

Rate of Convergence of Learning in Social Networks

Rate of Convergence of Learning in Social Networks Rate of Convergence of Learning in Social Networks Ilan Lobel, Daron Acemoglu, Munther Dahleh and Asuman Ozdaglar Massachusetts Institute of Technology Cambridge, MA Abstract We study the rate of convergence

More information

Economics of Networks Social Learning

Economics of Networks Social Learning Economics of Networks Social Learning Evan Sadler Massachusetts Institute of Technology Evan Sadler Social Learning 1/38 Agenda Recap of rational herding Observational learning in a network DeGroot learning

More information

6.207/14.15: Networks Lectures 22-23: Social Learning in Networks

6.207/14.15: Networks Lectures 22-23: Social Learning in Networks 6.207/14.15: Networks Lectures 22-23: Social Learning in Networks Daron Acemoglu and Asu Ozdaglar MIT December 2 end 7, 2009 1 Introduction Outline Recap on Bayesian social learning Non-Bayesian (myopic)

More information

Costly Social Learning and Rational Inattention

Costly Social Learning and Rational Inattention Costly Social Learning and Rational Inattention Srijita Ghosh Dept. of Economics, NYU September 19, 2016 Abstract We consider a rationally inattentive agent with Shannon s relative entropy cost function.

More information

Appendix of Homophily in Peer Groups The Costly Information Case

Appendix of Homophily in Peer Groups The Costly Information Case Appendix of Homophily in Peer Groups The Costly Information Case Mariagiovanna Baccara Leeat Yariv August 19, 2012 1 Introduction In this Appendix we study the information sharing application analyzed

More information

Area I: Contract Theory Question (Econ 206)

Area I: Contract Theory Question (Econ 206) Theory Field Exam Summer 2011 Instructions You must complete two of the four areas (the areas being (I) contract theory, (II) game theory A, (III) game theory B, and (IV) psychology & economics). Be sure

More information

Neuro Observational Learning

Neuro Observational Learning Neuro Observational Learning Kfir Eliaz Brown University and Tel Aviv University Ariel Rubinstein Tel Aviv University and New York University Abstract. We propose a simple model in which an agent observes

More information

Convergence of Rule-of-Thumb Learning Rules in Social Networks

Convergence of Rule-of-Thumb Learning Rules in Social Networks Convergence of Rule-of-Thumb Learning Rules in Social Networks Daron Acemoglu Department of Economics Massachusetts Institute of Technology Cambridge, MA 02142 Email: daron@mit.edu Angelia Nedić Department

More information

Information Cascades in Social Networks via Dynamic System Analyses

Information Cascades in Social Networks via Dynamic System Analyses Information Cascades in Social Networks via Dynamic System Analyses Shao-Lun Huang and Kwang-Cheng Chen Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan Email:{huangntu,

More information

LEARNING IN SOCIAL NETWORKS

LEARNING IN SOCIAL NETWORKS LEARNING IN SOCIAL NETWORKS Ben Golub and Evan Sadler... 1. Introduction Social ties convey information through observations of others decisions as well as through conversations and the sharing of opinions.

More information

Deceptive Advertising with Rational Buyers

Deceptive Advertising with Rational Buyers Deceptive Advertising with Rational Buyers September 6, 016 ONLINE APPENDIX In this Appendix we present in full additional results and extensions which are only mentioned in the paper. In the exposition

More information

Experimentation and Observational Learning in a Market with Exit

Experimentation and Observational Learning in a Market with Exit ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffffff Discussion Papers Experimentation and Observational Learning in a Market with Exit Pauli Murto Helsinki School of Economics and HECER

More information

Learning from Others Outcomes

Learning from Others Outcomes Learning from Others Outcomes Alexander Wolitzky MIT BFI, July 21, 2017 Wolitzky (MIT) Learning from Others Outcomes BFI, July 21, 2017 1 / 53 Introduction Scarcity of Cost-Saving Innovation Development

More information

6.207/14.15: Networks Lecture 12: Generalized Random Graphs

6.207/14.15: Networks Lecture 12: Generalized Random Graphs 6.207/14.15: Networks Lecture 12: Generalized Random Graphs 1 Outline Small-world model Growing random networks Power-law degree distributions: Rich-Get-Richer effects Models: Uniform attachment model

More information

6 Evolution of Networks

6 Evolution of Networks last revised: March 2008 WARNING for Soc 376 students: This draft adopts the demography convention for transition matrices (i.e., transitions from column to row). 6 Evolution of Networks 6. Strategic network

More information

Persuading Skeptics and Reaffirming Believers

Persuading Skeptics and Reaffirming Believers Persuading Skeptics and Reaffirming Believers May, 31 st, 2014 Becker-Friedman Institute Ricardo Alonso and Odilon Camara Marshall School of Business - USC Introduction Sender wants to influence decisions

More information

Players as Serial or Parallel Random Access Machines. Timothy Van Zandt. INSEAD (France)

Players as Serial or Parallel Random Access Machines. Timothy Van Zandt. INSEAD (France) Timothy Van Zandt Players as Serial or Parallel Random Access Machines DIMACS 31 January 2005 1 Players as Serial or Parallel Random Access Machines (EXPLORATORY REMARKS) Timothy Van Zandt tvz@insead.edu

More information

Equilibrium Refinements

Equilibrium Refinements Equilibrium Refinements Mihai Manea MIT Sequential Equilibrium In many games information is imperfect and the only subgame is the original game... subgame perfect equilibrium = Nash equilibrium Play starting

More information

Information Aggregation in Complex Dynamic Networks

Information Aggregation in Complex Dynamic Networks The Combined 48 th IEEE Conference on Decision and Control and 28 th Chinese Control Conference Information Aggregation in Complex Dynamic Networks Ali Jadbabaie Skirkanich Associate Professor of Innovation

More information

A GENERAL MODEL OF BOUNDEDLY RATIONAL OBSERVATIONAL LEARNING: THEORY AND EXPERIMENT

A GENERAL MODEL OF BOUNDEDLY RATIONAL OBSERVATIONAL LEARNING: THEORY AND EXPERIMENT Working Paper WP-1120-E February, 2015 A GENERAL MODEL OF BOUNDEDLY RATIONAL OBSERVATIONAL LEARNING: THEORY AND EXPERIMENT Manuel Mueller-Frank Claudia Neriy IESE Business School University of Navarra

More information

Opting Out in a War of Attrition. Abstract

Opting Out in a War of Attrition. Abstract Opting Out in a War of Attrition Mercedes Adamuz Department of Business, Instituto Tecnológico Autónomo de México and Department of Economics, Universitat Autònoma de Barcelona Abstract This paper analyzes

More information

Social Learning Equilibria

Social Learning Equilibria Social Learning Equilibria Elchanan Mossel, Manuel Mueller-Frank, Allan Sly and Omer Tamuz March 21, 2018 Abstract We consider social learning settings in which a group of agents face uncertainty regarding

More information

Learning faster or more precisely? Strategic experimentation in networks

Learning faster or more precisely? Strategic experimentation in networks Learning faster or more precisely? Strategic experimentation in networks Mirjam Wuggenig July 31, 2015 Abstract The paper analyzes a dynamic model of rational strategic learning in a network. It complements

More information

Learning and Information Aggregation in an Exit Game

Learning and Information Aggregation in an Exit Game Learning and Information Aggregation in an Exit Game Pauli Murto y and Juuso Välimäki z This Version: April 2010 Abstract We analyze information aggregation in a stopping game with uncertain payo s that

More information

Social Learning and the Shadow of the Past

Social Learning and the Shadow of the Past MPRA Munich Personal RePEc Archive Social Learning and the Shadow of the Past Yuval Heller and Erik Mohlin Bar Ilan University, Lund University 28 April 2017 Online at https://mpraubuni-muenchende/79930/

More information

Strongly rational expectations equilibria with endogenous acquisition of information

Strongly rational expectations equilibria with endogenous acquisition of information Strongly rational expectations equilibria with endogenous acquisition of information Gabriel Desgranges Maik Heinemann 9 February 004 This paper analyzes conditions for existence of a strongly rational

More information

Herding and Congestion 1

Herding and Congestion 1 Herding and Congestion 1 Laurens G. Debo 2 Tepper School of Business Carnegie Mellon University Pittsburgh, PA 15213 Christine A. Parlour 3 Tepper School of Business Carnegie Mellon University Pittsburgh,

More information

Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games

Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games Gabriel Y. Weintraub, Lanier Benkard, and Benjamin Van Roy Stanford University {gweintra,lanierb,bvr}@stanford.edu Abstract

More information

Existence of Nash Networks in One-Way Flow Models

Existence of Nash Networks in One-Way Flow Models Existence of Nash Networks in One-Way Flow Models pascal billand a, christophe bravard a, sudipta sarangi b a CREUSET, Jean Monnet University, Saint-Etienne, France. email: pascal.billand@univ-st-etienne.fr

More information

Probability Models of Information Exchange on Networks Lecture 5

Probability Models of Information Exchange on Networks Lecture 5 Probability Models of Information Exchange on Networks Lecture 5 Elchanan Mossel (UC Berkeley) July 25, 2013 1 / 22 Informational Framework Each agent receives a private signal X i which depends on S.

More information

Confronting Theory with Experimental Data and vice versa. Lecture VII Social learning. The Norwegian School of Economics Nov 7-11, 2011

Confronting Theory with Experimental Data and vice versa. Lecture VII Social learning. The Norwegian School of Economics Nov 7-11, 2011 Confronting Theory with Experimental Data and vice versa Lecture VII Social learning The Norwegian School of Economics Nov 7-11, 2011 Quantal response equilibrium (QRE) Players do not choose best response

More information

Learning distributions and hypothesis testing via social learning

Learning distributions and hypothesis testing via social learning UMich EECS 2015 1 / 48 Learning distributions and hypothesis testing via social learning Anand D. Department of Electrical and Computer Engineering, The State University of New Jersey September 29, 2015

More information

Supplementary appendix to the paper Hierarchical cheap talk Not for publication

Supplementary appendix to the paper Hierarchical cheap talk Not for publication Supplementary appendix to the paper Hierarchical cheap talk Not for publication Attila Ambrus, Eduardo M. Azevedo, and Yuichiro Kamada December 3, 011 1 Monotonicity of the set of pure-strategy equilibria

More information

Tractable Bayesian Social Learning on Trees

Tractable Bayesian Social Learning on Trees Tractable Bayesian Social Learning on Trees Yashodhan Kanoria and Omer Tamuz April 16, 2012 Abstract We study a model of Bayesian agents in social networks who learn from the actions of their neighbors.

More information

Confronting Theory with Experimental Data and vice versa Lecture II: Social Learning. Academia Sinica Mar 3, 2009

Confronting Theory with Experimental Data and vice versa Lecture II: Social Learning. Academia Sinica Mar 3, 2009 Confronting Theory with Experimental Data and vice versa Lecture II: Social Learning Academia Sinica Mar 3, 2009 Background Social learning describes any situation in which individuals learn by observing

More information

Asymptotic Learning on Bayesian Social Networks

Asymptotic Learning on Bayesian Social Networks Asymptotic Learning on Bayesian Social Networks Elchanan Mossel Allan Sly Omer Tamuz January 29, 2014 Abstract Understanding information exchange and aggregation on networks is a central problem in theoretical

More information

Political Cycles and Stock Returns. Pietro Veronesi

Political Cycles and Stock Returns. Pietro Veronesi Political Cycles and Stock Returns Ľuboš Pástor and Pietro Veronesi University of Chicago, National Bank of Slovakia, NBER, CEPR University of Chicago, NBER, CEPR Average Excess Stock Market Returns 30

More information

Ex Post Cheap Talk : Value of Information and Value of Signals

Ex Post Cheap Talk : Value of Information and Value of Signals Ex Post Cheap Talk : Value of Information and Value of Signals Liping Tang Carnegie Mellon University, Pittsburgh PA 15213, USA Abstract. Crawford and Sobel s Cheap Talk model [1] describes an information

More information

Coordination and Continuous Choice

Coordination and Continuous Choice Coordination and Continuous Choice Stephen Morris and Ming Yang Princeton University and Duke University December 2016 Abstract We study a coordination game where players choose what information to acquire

More information

6.207/14.15: Networks Lecture 16: Cooperation and Trust in Networks

6.207/14.15: Networks Lecture 16: Cooperation and Trust in Networks 6.207/14.15: Networks Lecture 16: Cooperation and Trust in Networks Daron Acemoglu and Asu Ozdaglar MIT November 4, 2009 1 Introduction Outline The role of networks in cooperation A model of social norms

More information

Ergodicity and Non-Ergodicity in Economics

Ergodicity and Non-Ergodicity in Economics Abstract An stochastic system is called ergodic if it tends in probability to a limiting form that is independent of the initial conditions. Breakdown of ergodicity gives rise to path dependence. We illustrate

More information

Bayesian Persuasion Online Appendix

Bayesian Persuasion Online Appendix Bayesian Persuasion Online Appendix Emir Kamenica and Matthew Gentzkow University of Chicago June 2010 1 Persuasion mechanisms In this paper we study a particular game where Sender chooses a signal π whose

More information

Government 2005: Formal Political Theory I

Government 2005: Formal Political Theory I Government 2005: Formal Political Theory I Lecture 11 Instructor: Tommaso Nannicini Teaching Fellow: Jeremy Bowles Harvard University November 9, 2017 Overview * Today s lecture Dynamic games of incomplete

More information

Wars of Attrition with Budget Constraints

Wars of Attrition with Budget Constraints Wars of Attrition with Budget Constraints Gagan Ghosh Bingchao Huangfu Heng Liu October 19, 2017 (PRELIMINARY AND INCOMPLETE: COMMENTS WELCOME) Abstract We study wars of attrition between two bidders who

More information

CS224W: Analysis of Networks Jure Leskovec, Stanford University

CS224W: Analysis of Networks Jure Leskovec, Stanford University CS224W: Analysis of Networks Jure Leskovec, Stanford University http://cs224w.stanford.edu 10/30/17 Jure Leskovec, Stanford CS224W: Social and Information Network Analysis, http://cs224w.stanford.edu 2

More information

Entry under an Information-Gathering Monopoly Alex Barrachina* June Abstract

Entry under an Information-Gathering Monopoly Alex Barrachina* June Abstract Entry under an Information-Gathering onopoly Alex Barrachina* June 2016 Abstract The effects of information-gathering activities on a basic entry model with asymmetric information are analyzed. In the

More information

A New Random Graph Model with Self-Optimizing Nodes: Connectivity and Diameter

A New Random Graph Model with Self-Optimizing Nodes: Connectivity and Diameter A New Random Graph Model with Self-Optimizing Nodes: Connectivity and Diameter Richard J. La and Maya Kabkab Abstract We introduce a new random graph model. In our model, n, n 2, vertices choose a subset

More information

Distributional stability and equilibrium selection Evolutionary dynamics under payoff heterogeneity (II) Dai Zusai

Distributional stability and equilibrium selection Evolutionary dynamics under payoff heterogeneity (II) Dai Zusai Distributional stability Dai Zusai 1 Distributional stability and equilibrium selection Evolutionary dynamics under payoff heterogeneity (II) Dai Zusai Tohoku University (Economics) August 2017 Outline

More information

The Value of Congestion

The Value of Congestion Laurens G. Debo Tepper School of Business Carnegie Mellon University Pittsburgh, PA 15213 The Value of Congestion Uday Rajan Ross School of Business University of Michigan Ann Arbor, MI 48109. February

More information

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden 1 Selecting Efficient Correlated Equilibria Through Distributed Learning Jason R. Marden Abstract A learning rule is completely uncoupled if each player s behavior is conditioned only on his own realized

More information

Bayesian Social Learning with Random Decision Making in Sequential Systems

Bayesian Social Learning with Random Decision Making in Sequential Systems Bayesian Social Learning with Random Decision Making in Sequential Systems Yunlong Wang supervised by Petar M. Djurić Department of Electrical and Computer Engineering Stony Brook University Stony Brook,

More information

1. Introduction. 2. A Simple Model

1. Introduction. 2. A Simple Model . Introduction In the last years, evolutionary-game theory has provided robust solutions to the problem of selection among multiple Nash equilibria in symmetric coordination games (Samuelson, 997). More

More information

Game Theory. Wolfgang Frimmel. Perfect Bayesian Equilibrium

Game Theory. Wolfgang Frimmel. Perfect Bayesian Equilibrium Game Theory Wolfgang Frimmel Perfect Bayesian Equilibrium / 22 Bayesian Nash equilibrium and dynamic games L M R 3 2 L R L R 2 2 L R L 2,, M,2, R,3,3 2 NE and 2 SPNE (only subgame!) 2 / 22 Non-credible

More information

Learning and Information Aggregation in an Exit Game

Learning and Information Aggregation in an Exit Game Learning and Information Aggregation in an Exit Game Pauli Murto y and Juuso Välimäki z First Version: December 2005 This Version: May 2008 Abstract We analyze information aggregation in a stopping game

More information

Pathological Outcomes of Observational Learning

Pathological Outcomes of Observational Learning Pathological Outcomes of Observational Learning by Lones Smith and Peter Sorensen Econometrica, March 2000 In the classic herding models of Banerjee (1992) and Bikhchandani, Hirshleifer, and Welch (1992),

More information

Costly Expertise. Dino Gerardi and Leeat Yariv yz. Current Version: December, 2007

Costly Expertise. Dino Gerardi and Leeat Yariv yz. Current Version: December, 2007 Costly Expertise Dino Gerardi and Leeat Yariv yz Current Version: December, 007 In many environments expertise is costly. Costs can manifest themselves in numerous ways, ranging from the time that is required

More information

INFORMATION AND INTERACTION. Dirk Bergemann, Tibor Heumann, and Stephen Morris. May 2017 COWLES FOUNDATION DISCUSSION PAPER NO.

INFORMATION AND INTERACTION. Dirk Bergemann, Tibor Heumann, and Stephen Morris. May 2017 COWLES FOUNDATION DISCUSSION PAPER NO. INFORMATION AND INTERACTION By Dirk Bergemann, Tibor Heumann, and Stephen Morris May 2017 COWLES FOUNDATION DISCUSSION PAPER NO 2088 COWLES FOUNDATION FOR RESEARCH IN ECONOMICS YALE UNIVERSITY Box 208281

More information

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations?

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Selçuk Özyurt Sabancı University Very early draft. Please do not circulate or cite. Abstract Tactics that bargainers

More information

Social network analysis: social learning

Social network analysis: social learning Social network analysis: social learning Donglei Du (ddu@unb.edu) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 October 20, 2016 Donglei Du (UNB) AlgoTrading

More information

Modeling Strategic Information Sharing in Indian Villages

Modeling Strategic Information Sharing in Indian Villages Modeling Strategic Information Sharing in Indian Villages Jeff Jacobs jjacobs3@stanford.edu Arun Chandrasekhar arungc@stanford.edu Emily Breza Columbia University ebreza@columbia.edu December 3, 203 Matthew

More information

Bargaining, Contracts, and Theories of the Firm. Dr. Margaret Meyer Nuffield College

Bargaining, Contracts, and Theories of the Firm. Dr. Margaret Meyer Nuffield College Bargaining, Contracts, and Theories of the Firm Dr. Margaret Meyer Nuffield College 2015 Course Overview 1. Bargaining 2. Hidden information and self-selection Optimal contracting with hidden information

More information

NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA

NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA MARIA GOLTSMAN Abstract. This note shows that, in separable environments, any monotonic social choice function

More information

When to Ask for an Update: Timing in Strategic Communication

When to Ask for an Update: Timing in Strategic Communication When to Ask for an Update: Timing in Strategic Communication Work in Progress Ying Chen Johns Hopkins University Atara Oliver Rice University March 19, 2018 Main idea In many communication situations,

More information

A Rothschild-Stiglitz approach to Bayesian persuasion

A Rothschild-Stiglitz approach to Bayesian persuasion A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago December 2015 Abstract Rothschild and Stiglitz (1970) represent random

More information

When to Ask for an Update: Timing in Strategic Communication. National University of Singapore June 5, 2018

When to Ask for an Update: Timing in Strategic Communication. National University of Singapore June 5, 2018 When to Ask for an Update: Timing in Strategic Communication Ying Chen Johns Hopkins University Atara Oliver Rice University National University of Singapore June 5, 2018 Main idea In many communication

More information

Waiting for my neighbors

Waiting for my neighbors Waiting for my neighbors Sidartha Gordon, Emeric Henry and Pauli Murto January 14, 2016 Abstract We study a waiting game on a network where the payoff of taking an action increases each time a neighbor

More information

Building socio-economic Networks: How many conferences should you attend?

Building socio-economic Networks: How many conferences should you attend? Prepared with SEVI S LIDES Building socio-economic Networks: How many conferences should you attend? Antonio Cabrales, Antoni Calvó-Armengol, Yves Zenou January 06 Summary Introduction The game Equilibrium

More information

Introduction Benchmark model Belief-based model Empirical analysis Summary. Riot Networks. Lachlan Deer Michael D. König Fernando Vega-Redondo

Introduction Benchmark model Belief-based model Empirical analysis Summary. Riot Networks. Lachlan Deer Michael D. König Fernando Vega-Redondo Riot Networks Lachlan Deer Michael D. König Fernando Vega-Redondo University of Zurich University of Zurich Bocconi University June 7, 2018 Deer & König &Vega-Redondo Riot Networks June 7, 2018 1 / 23

More information

Observational Learning with Position Uncertainty

Observational Learning with Position Uncertainty Observational Learning with Position Uncertainty Ignacio Monzón and Michael Rapp December 009 Abstract Observational learning is typically examined when agents have precise information about their position

More information

A hierarchical network formation model

A hierarchical network formation model Available online at www.sciencedirect.com Electronic Notes in Discrete Mathematics 50 (2015) 379 384 www.elsevier.com/locate/endm A hierarchical network formation model Omid Atabati a,1 Babak Farzad b,2

More information

Inferring Quality from a Queue

Inferring Quality from a Queue Inferring Quality from a Queue Laurens G. Debo Tepper School of Business Carnegie Mellon University Pittsburgh, PA 15213 Uday Rajan Ross School of Business University of Michigan Ann Arbor, MI 48109. January

More information

Definitions and Proofs

Definitions and Proofs Giving Advice vs. Making Decisions: Transparency, Information, and Delegation Online Appendix A Definitions and Proofs A. The Informational Environment The set of states of nature is denoted by = [, ],

More information

Diffusion Centrality: Foundations and Extensions

Diffusion Centrality: Foundations and Extensions Diffusion Centrality: Foundations and Extensions Yann Bramoullé and Garance Genicot* October 2018 Abstract: We first clarify the precise theoretical foundations behind the notion of diffusion centrality.

More information

Online Appendix for Slow Information Diffusion and the Inertial Behavior of Durable Consumption

Online Appendix for Slow Information Diffusion and the Inertial Behavior of Durable Consumption Online Appendix for Slow Information Diffusion and the Inertial Behavior of Durable Consumption Yulei Luo The University of Hong Kong Jun Nie Federal Reserve Bank of Kansas City Eric R. Young University

More information

Information Choice in Macroeconomics and Finance.

Information Choice in Macroeconomics and Finance. Information Choice in Macroeconomics and Finance. Laura Veldkamp New York University, Stern School of Business, CEPR and NBER Spring 2009 1 Veldkamp What information consumes is rather obvious: It consumes

More information

Experimentation, Patents, and Innovation

Experimentation, Patents, and Innovation Experimentation, Patents, and Innovation Daron Acemoglu y Kostas Bimpikis z Asuman Ozdaglar x October 2008. Abstract This paper studies a simple model of experimentation and innovation. Our analysis suggests

More information

Information diffusion in networks through social learning

Information diffusion in networks through social learning Theoretical Economics 10 2015), 807 851 1555-7561/20150807 Information diffusion in networks through social learning Ilan Lobel IOMS Department, Stern School of Business, New York University Evan Sadler

More information

Delay and Information Aggregation in Stopping Games with Private Information

Delay and Information Aggregation in Stopping Games with Private Information ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffffff Discussion Papers Delay and Information Aggregation in Stopping Games with Private Information Pauli Murto Helsinki School of Economics

More information

Spread of (Mis)Information in Social Networks

Spread of (Mis)Information in Social Networks Spread of (Mis)Information in Social Networks Daron Acemoglu, Asuman Ozdaglar, and Ali ParandehGheibi May 9, 009 Abstract We provide a model to investigate the tension between information aggregation and

More information

Correlated Equilibrium in Games with Incomplete Information

Correlated Equilibrium in Games with Incomplete Information Correlated Equilibrium in Games with Incomplete Information Dirk Bergemann and Stephen Morris Econometric Society Summer Meeting June 2012 Robust Predictions Agenda game theoretic predictions are very

More information

Word of Mouth Advertising and Strategic Learning in Networks 1

Word of Mouth Advertising and Strategic Learning in Networks 1 Word of Mouth Advertising and Strategic Learning in Networks 1 Kalyan Chatterjee Department of Economics, The Pennsylvania State University, University Park, Pa. 16802, USA. Bhaskar Dutta Department of

More information

SF2972 Game Theory Exam with Solutions March 15, 2013

SF2972 Game Theory Exam with Solutions March 15, 2013 SF2972 Game Theory Exam with s March 5, 203 Part A Classical Game Theory Jörgen Weibull and Mark Voorneveld. (a) What are N, S and u in the definition of a finite normal-form (or, equivalently, strategic-form)

More information

Does Majority Rule Produce Hasty Decisions?

Does Majority Rule Produce Hasty Decisions? Does Majority Rule Produce Hasty Decisions? Jimmy Chan 1 Wing Suen 2 1 Fudan University 2 University of Hong Kong October 18, 2013 Ohio State University Chan/Suen (Fudan/HKU) Majority Rule and Hasty Decisions

More information

Online Appendix for Sourcing from Suppliers with Financial Constraints and Performance Risk

Online Appendix for Sourcing from Suppliers with Financial Constraints and Performance Risk Online Appendix for Sourcing from Suppliers with Financial Constraints and Performance Ris Christopher S. Tang S. Alex Yang Jing Wu Appendix A: Proofs Proof of Lemma 1. In a centralized chain, the system

More information

Economic Growth: Lecture 9, Neoclassical Endogenous Growth

Economic Growth: Lecture 9, Neoclassical Endogenous Growth 14.452 Economic Growth: Lecture 9, Neoclassical Endogenous Growth Daron Acemoglu MIT November 28, 2017. Daron Acemoglu (MIT) Economic Growth Lecture 9 November 28, 2017. 1 / 41 First-Generation Models

More information

h Edition Money in Search Equilibrium

h Edition Money in Search Equilibrium In the Name of God Sharif University of Technology Graduate School of Management and Economics Money in Search Equilibrium Diamond (1984) Navid Raeesi Spring 2014 Page 1 Introduction: Markets with Search

More information

Knowing What Others Know: Coordination Motives in Information Acquisition

Knowing What Others Know: Coordination Motives in Information Acquisition Knowing What Others Know: Coordination Motives in Information Acquisition Christian Hellwig and Laura Veldkamp UCLA and NYU Stern May 2006 1 Hellwig and Veldkamp Two types of information acquisition Passive

More information

Robustness of Equilibria in Anonymous Local Games

Robustness of Equilibria in Anonymous Local Games Robustness of Equilibria in Anonymous Local Games Willemien Kets October 12, 2010 Abstract This paper studies the robustness of symmetric equilibria in anonymous local games to perturbations of prior beliefs.

More information

Implementability, Walrasian Equilibria, and Efficient Matchings

Implementability, Walrasian Equilibria, and Efficient Matchings Implementability, Walrasian Equilibria, and Efficient Matchings Piotr Dworczak and Anthony Lee Zhang Abstract In general screening problems, implementable allocation rules correspond exactly to Walrasian

More information

UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer Social learning and bargaining (axiomatic approach)

UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer Social learning and bargaining (axiomatic approach) UC Berkeley Haas School of Business Game Theory (EMBA 296 & EWMBA 211) Summer 2015 Social learning and bargaining (axiomatic approach) Block 4 Jul 31 and Aug 1, 2015 Auction results Herd behavior and

More information

Graph Detection and Estimation Theory

Graph Detection and Estimation Theory Introduction Detection Estimation Graph Detection and Estimation Theory (and algorithms, and applications) Patrick J. Wolfe Statistics and Information Sciences Laboratory (SISL) School of Engineering and

More information

Observational Learning with Position Uncertainty

Observational Learning with Position Uncertainty Observational Learning with Position Uncertainty Ignacio Monzón and Michael Rapp September 15, 2014 Abstract Observational learning is typically examined when agents have precise information about their

More information

Optimal Pricing in Networks with Externalities

Optimal Pricing in Networks with Externalities Optimal Pricing in Networks with Externalities Ozan Candogan Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology, MA, Cambridge, MA 0139, candogan@mit.edu Kostas

More information

SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES. Victor Aguirregabiria (Boston University) and. Pedro Mira (CEMFI) Applied Micro Workshop at Minnesota

SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES. Victor Aguirregabiria (Boston University) and. Pedro Mira (CEMFI) Applied Micro Workshop at Minnesota SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES Victor Aguirregabiria (Boston University) and Pedro Mira (CEMFI) Applied Micro Workshop at Minnesota February 16, 2006 CONTEXT AND MOTIVATION Many interesting

More information

Political Economy of Institutions and Development: Problem Set 1. Due Date: Thursday, February 23, in class.

Political Economy of Institutions and Development: Problem Set 1. Due Date: Thursday, February 23, in class. Political Economy of Institutions and Development: 14.773 Problem Set 1 Due Date: Thursday, February 23, in class. Answer Questions 1-3. handed in. The other two questions are for practice and are not

More information

Network Infusion to Infer Information Sources in Networks Soheil Feizi, Ken Duffy, Manolis Kellis, and Muriel Medard

Network Infusion to Infer Information Sources in Networks Soheil Feizi, Ken Duffy, Manolis Kellis, and Muriel Medard Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-214-28 December 2, 214 Network Infusion to Infer Information Sources in Networks Soheil Feizi, Ken Duffy, Manolis Kellis,

More information