Dynamics at the Boundary of Game Theory and Distributed Computing


Aaron D. Jaggard^1, Neil Lutz^2, Michael Schapira^3, and Rebecca N. Wright^4

^1 U.S. Naval Research Laboratory, Washington, DC 20375, USA. aaron.jaggard@nrl.navy.mil
^2 Rutgers University, Piscataway, NJ 08854, USA. njlutz@cs.rutgers.edu
^3 Hebrew University of Jerusalem, Jerusalem 91904, Israel. schapiram@huji.ac.il
^4 Rutgers University, Piscataway, NJ 08854, USA. rebecca.wright@rutgers.edu

Abstract. We use ideas from distributed computing and game theory to study dynamic and decentralized environments in which computational nodes, or decision makers, interact strategically and with limited information. In such environments, which arise in many real-world settings, the participants act as both economic and computational entities. We exhibit a general non-convergence result for a broad class of dynamics in asynchronous settings. We consider implications of our result across a wide variety of interesting and timely applications: circuit design, social networks, Internet routing, and congestion control. We also study the computational and communication complexity of testing the convergence of asynchronous dynamics, as well as the effects of limited asynchrony. For uncoupled game dynamics, in which preferences are private inputs, we give new bounds on the recall necessary for self stabilization to an equilibrium. Our work opens a new avenue for research at the intersection of distributed computing and game theory.

1 Introduction

Dynamic environments where decision makers repeatedly interact arise in a variety of settings, such as Internet protocols, large-scale markets, social networks, and multi-processor computer architectures. Study of these environments lies at the boundary of game theory and distributed computing.
The decision makers are simultaneously strategic entities with individual economic preferences and computational entities with limited resources, working in a decentralized and uncertain environment. To understand the global behaviors that result from these interactions (the dynamics of these systems), we draw on ideas from both disciplines. The notion of self stabilization to a legitimate state in a distributed system parallels that of convergence to an equilibrium in a game. The foci, however, are very different. In game theory, there is extensive research on dynamics that result from what is perceived as natural strategic decision making (e.g., best- or better-response dynamics, fictitious play, or regret minimization). Even simple heuristics that require little information or computational resources can yield sophisticated behavior, such as the convergence of best-response dynamics to equilibrium points (see [Hart 2005] and references therein). These positive results for simple game dynamics are, with few exceptions (see Sec. 2), based on the sometimes implicit and often unrealistic premise of a controlled environment in which actions are synchronously coordinated. Distributed computing research emphasizes the environmental uncertainty that results from decentralization, but has no notion of natural rules of behavior. It has long been known that environmental uncertainty in the form of both asynchrony [Fischer et al. 1985; Lynch 1989] and arbitrary initialization [Dolev 2000] introduces

* Preliminary versions of some of this work appeared in Innovations in Computer Science (ICS 2011) [Jaggard et al. 2011] and the Symposium on Algorithmic Game Theory (SAGT 2014) [Jaggard et al. 2014].

substantial difficulties for protocol termination in distributed systems. Our work bridges the gap between these two approaches by initiating the study of game dynamics in distributed computing settings. We take the first steps of this research agenda, focusing primarily on systems in which the decision makers, or computational nodes, are deterministic and have bounded recall, meaning that their behavior is based only on the recent history of system interaction. Our main contribution is a general impossibility result (Thm. 4.1) showing that a large and natural class of bounded-recall dynamics can fail to converge in asynchronous environments whenever there are at least two equilibrium points of the dynamics. We apply this result to several domains: convergence of game dynamics to pure Nash equilibria, stabilization of asynchronous circuits, convergence of the Border Gateway Protocol (BGP) to a stable routing tree, and more.

We begin in Sec. 2 with an overview of related work. Before giving our general model of asynchronous interaction, we describe in Sec. 3 a restricted model in which nodes' behaviors are historyless, depending only on the current state of the system. We prove the main non-convergence result separately and directly for this special case, and this version of the result is sufficient for many of our applications. In Sec. 4 we present our full model and give the general version of our main theorem, which we show is essentially tight. The implications of this result for game dynamics are discussed in Sec. 5; we show that the historyless version of the main theorem is equivalent to the non-convergence of asynchronous best-response dynamics in the presence of multiple equilibria. In Sec. 6, we describe applications of our main theorem to asynchronous circuit design, social networks, interdomain routing protocols such as BGP, and congestion control in networks.
We explore the impact of asynchrony that is either randomized (rather than adversarial) or bounded on convergence guarantees in Sec. 7. In Sec. 8, we present both communication and computational complexity hardness results for checking whether an asynchronous system will always converge: we show that this task is PSPACE-hard and requires exponential communication, even in the historyless case. We consider the case where the decision makers are given private inputs that determine their behavior (i.e., the dynamics are uncoupled [Hart and Mas-Colell 2003]) in Sec. 9. We give both positive and negative results for self stabilization of game dynamics in this environment. In Sec. 10, we prove our main result by using a valency argument (a now-standard technique in distributed computing theory [Lynch 1989; Fich and Ruppert 2003]) to show that decision protocols cannot satisfy a condition we call independence of decisions. We discuss the connections of this work to consensus protocols in Sec. 11, showing that the famous result of Fischer et al. [1985] on the impossibility of resilient consensus similarly follows from the fact that such protocols must, yet cannot, satisfy independence of decisions. Finally, we outline interesting directions for future research in Sec. 12.

2 Related Work

Our work relates to many ideas in game theory and in distributed computing. Here, we discuss game-theoretic work on the dynamics of simple strategies and on asynchrony, distributed computing work on fault tolerance and self stabilization, and other connections between game theory and computer science (for more, see [Halpern 2003]). We also highlight the application areas we consider.

Algorithmic game theory. Since our work draws on both game theory and computer science, it may be considered part of the broader research program of algorithmic game theory (AGT), which merges concepts and ideas from those two fields.
The three main areas of study in AGT have been algorithmic mechanism design, which applies concepts from computer science to economic mechanism

design; algorithmic and complexity research on the computation of equilibria; and studying the price of anarchy, which describes the efficiency of equilibria and draws on approximability research [Nisan et al. 2007]. Our work does not fall into any of these categories but creates a new link between game theory and computer science by exploring game dynamics in distributed computing settings.

Adaptive heuristics. Much work in game theory and economics deals with adaptive heuristics (see [Hart 2005] and references therein). Generally speaking, this long line of research explores the convergence of simple and myopic rules of behavior (e.g., best-response/fictitious-play/no-regret dynamics) to an equilibrium. However, with few exceptions (see below), such analysis has so far primarily concentrated on synchronous environments in which steps take place simultaneously or in some other predetermined order. In this work, we explore dynamics of this type in asynchronous environments, which are more realistic for many applications.

Game-theoretic work on asynchronous environments. Some game-theoretic work on repeated games considers asynchronous moves^5 (see [Lagunoff and Matsui 1997; Yoon 2004], among others, and references therein). Such work does not explore the behavior of dynamics, but has other research goals (e.g., characterizing equilibria, establishing folk theorems). To the best of our knowledge, we are the first to study the effects of asynchrony (in the broad distributed computing sense) on the convergence of game dynamics to equilibria.

Fault-tolerant computation. We use ideas and techniques from work in distributed computing on protocol termination in asynchronous computational environments where nodes and communication channels are possibly faulty. Protocol termination in such environments, initially motivated by multiprocessor computer architectures, has been extensively studied in the past three decades [Fischer et al. 1985; Ben-Or 1986; Dolev et al.
1987; Borowsky and Gafni 1993; Herlihy and Shavit 1999; Saks and Zaharoglou 2000], as nicely surveyed in [Lynch 1989; Fich and Ruppert 2003]. Fischer, Lynch, and Paterson [1985] showed, in a landmark paper, that a broad class of failure-resilient consensus protocols cannot provably terminate. Intuitively, the risk of protocol non-termination in that work stems from the possibility of failures; a computational node cannot tell whether another node is silent due to a failure or is simply taking a long time to react. Our non-convergence result, by contrast, applies to failure-free environments. In game-theoretic work that incorporated fault-tolerance concepts, Abraham et al. [2006] studied equilibria that are robust to defection and collusion.

Self stabilization. The concept of self stabilization is fundamental to distributed computing and dates back to Dijkstra [1974] (see [Dolev 2000] and references therein). Convergence of dynamics to an equilibrium in our model can be viewed as the self stabilization of such dynamics (where the equilibrium points are the legitimate configurations). Our formulation draws ideas from work in distributed computing (e.g., Burns' distributed daemon model [1987]) and in networking research [Griffin et al. 2002] on self stabilization.

Uncoupled dynamics. Hart and Mas-Colell [2003] introduced the concept of uncoupled game dynamics and gave a non-convergence result in a continuous-time setting. In a later paper [2006], they showed that this result held in a discrete-time setting, even in the presence of randomness, and addressed convergence to mixed Nash equilibria by bounded-recall uncoupled dynamics. Babichenko [2010; 2012] investigated the situation when the uncoupled nodes are finite-state automata, as well as completely uncoupled dynamics, in which each node can see only the history of its own actions and payoffs.
^5 Often, the term asynchrony merely indicates that players are not all activated at each time step, and thus is used to describe environments where only one player is activated at a time ("alternating moves"), or where there is a probability distribution that determines who is activated when.

Young [2009] and Pradelski and Young [2012] gave completely uncoupled

dynamics that achieve an equilibrium in a high proportion of steps but do not necessarily converge. Hart and Mansour [2010] analyzed the time to convergence for uncoupled dynamics.

Applications. We discuss the implications of our non-convergence result across a wide variety of applications that have previously been studied: convergence of game dynamics (see, e.g., [Hart and Mas-Colell 2003, 2006]); asynchronous circuits (see, e.g., [Davis and Nowick 1997]); diffusion of innovations, behaviors, etc., in social networks (see [Morris 2000] and [Immorlica et al. 2007]); interdomain routing [Griffin et al. 2002; Sami et al. 2009]; and congestion control [Godfrey et al. 2010].

3 Historyless Dynamic Interaction

We begin our exploration of dynamics with historyless dynamics, in which computational nodes react only to the current state of the system. While seemingly very restricted, historyless dynamics capture the prominent and extensively studied best- and better-response dynamics from game theory (as we discuss in Sec. 5). We show that historyless dynamics also encompass a host of other applications of interest, ranging from Internet protocols to the adoption of technologies in social networks (as we explain in Sec. 6). In this section we formally define our model of historyless dynamic interaction, and we prove our main theorem in this special case.

We present historyless dynamics separately from our general model of asynchronous interaction as an accessible and intuitive introduction to the structures we consider in this work. When we prove the general version of our main theorem (Thm. 4.1) in Sec. 10, we do so in a way that illuminates its kinship to consensus results, but this requires introducing more notation and an additional layer of abstraction. The direct proof of the historyless case given in this section contains many of the core ideas of that proof despite being more straightforward and less burdened by notation.
3.1 Dynamics of Historyless Systems

Intuitively, a historyless interaction system consists of a collection of computational nodes, each capable of selecting actions that are visible to the other nodes. The state of the system at any time consists of each node's current action. Each node has a deterministic reaction function that maps states to actions.

Definition 3.1. A historyless interaction system (or just historyless system) is characterized by a tuple (n, A, f): The system has n ∈ Z+ computational nodes, labeled 1, ..., n. A = A_1 × · · · × A_n, where each A_i is a set called the action space of node i. A is called the state space of the system, and a state is an n-tuple a = (a_1, ..., a_n) ∈ A. f : A → A is the function given by f(a) = (f_1(a), ..., f_n(a)), where f_i : A → A_i is called node i's reaction function.

We now describe the dynamics of our model: the ways that a system's state can evolve due to interactions between nodes. Informally, there is some initial state, and, in each discrete time step 1, 2, 3, ..., a schedule allows some of the nodes to react to the current state.

Definition 3.2. A schedule is a function σ : Z+ → 2^[n] that maps each t to a (possibly empty) subset of the computational nodes.^6 If i ∈ σ(t), then we say that node i is activated at time t.

^6 [n] denotes {1, ..., n}, and for any set S, 2^S is the set of all subsets of S.

The nodes that are activated in a given timestep react simultaneously; each applies its reaction function to the current state to choose a new action. This updated action is immediately observable to all other nodes.^7 Since the reaction functions are deterministic, an initial state and a schedule completely determine the resulting infinite state sequence; we call these state sequences trajectories.

Definition 3.3. Let a^0 ∈ A be a state, and let σ be a schedule. The (a^0, σ)-trajectory of the system is the infinite state sequence a^0, a^1, a^2, ... such that for every time t ∈ Z+ and node i ∈ [n],

    a_i^t = f_i(a^{t-1})  if i ∈ σ(t),
    a_i^t = a_i^{t-1}     otherwise.          (1)

We visualize a historyless interaction system as a directed multigraph whose vertices are the states. Each state a has outgoing edges for every possible activation set; the terminus of the edge labeled with a set S ⊆ [n] is the state that would be reached by starting at a and activating the nodes in S. An example is given in Fig. 1. A trajectory corresponds to a path in this graph.

    a    f_1(a)  f_2(a)
    αα   α       α
    αβ   β       α
    βα   α       β
    ββ   β       β

Fig. 1. Example of a historyless interaction system with two computational nodes, 1 and 2. Each node has the action space {α, β}, and the reaction functions are given in the table above. The stable states are αα and ββ.

Our main theorem is an impossibility result, in which we show that an adversarially chosen initial state and schedule can prevent desirable system behavior. But an arbitrary schedule might never let some or all nodes react, or might stop activating them after some time. We limit this adversarial power (thereby strengthening our impossibility result) by restricting our attention to fair schedules, which never permanently stop activating any node.

Definition 3.4. A schedule σ is fair if each node i is activated infinitely many times in σ, i.e., {t ∈ Z+ : i ∈ σ(t)} is an infinite set.
A trajectory is fair if it is the (a, σ)-trajectory for some state a and fair schedule σ.

^7 This model has perfect monitoring. While this is clearly unrealistic in some important real-life contexts (e.g., some of the environments considered in Sec. 6), this restriction only strengthens our main results, which are impossibility results.
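The dynamics of Defs. 3.2–3.4 are easy to simulate for finite systems. The sketch below (the function and variable names are ours, not the paper's) implements Eq. (1) and runs the two-node system of Fig. 1, in which each node copies the other's action.

```python
def trajectory(f, a0, schedule, steps):
    """Eq. (1): at each time t, the nodes in schedule(t) apply their
    reaction functions to the current state; all other nodes keep
    their current action."""
    states = [tuple(a0)]
    a = list(a0)
    for t in range(1, steps + 1):
        cur = tuple(a)
        for i in schedule(t):
            a[i] = f[i](cur)  # activated nodes react simultaneously
        states.append(tuple(a))
    return states

# Fig. 1 reaction functions (0-based node indices): node 1 copies
# node 2's action and vice versa. Stable states: ("α","α"), ("β","β").
fig1 = [lambda a: a[1], lambda a: a[0]]

# A fair round-robin schedule {2}, {1}, {2}, ... from αβ converges to αα.
round_robin = trajectory(fig1, ("α", "β"), lambda t: [t % 2], 6)
```

A trajectory here is a finite prefix of the infinite sequence of Def. 3.3; since the reaction functions and schedule are deterministic, the prefix determines the rest.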

We are especially interested in whether a system's fair trajectories always converge, eventually remaining at a single stable state forever.

Definition 3.5. A trajectory a^0, a^1, a^2, ... converges to a state b if there exists some T ∈ Z+ such that, for all t > T, a^t = b.

Notice that a historyless system can only converge to a state that is a fixed point of the function f. We call such a state a stable state of the system; if the system reaches a stable state, then it converges to that state. In fact, convergence to a particular stable state may also be guaranteed by reaching some other states. We say that such states are committed to the stable state.

Definition 3.6. A state a is committed to a state b if every fair trajectory from a converges to b, that is, if for every fair schedule σ, the (a, σ)-trajectory converges to b. An uncommitted state is one that is not committed to any state.

3.2 Non-convergence Result for Historyless Systems

Our main theorem concerns systems in which the reaction functions are self-independent, meaning that each node ignores its own current action when reacting to the system's state. In discussing self-independent reaction functions, it is convenient to use the notation A_{-i} = A_1 × · · · × A_{i-1} × A_{i+1} × · · · × A_n, the state space of the system when i is ignored. Similarly, for a state a, a_{-i} ∈ A_{-i} denotes (a_1, ..., a_{i-1}, a_{i+1}, ..., a_n), i.e., a with i's action omitted. Using this notation, we formally define self independence in historyless systems.

Definition 3.7. A reaction function f_i in a historyless interaction system is self-independent if there exists a function g_i : A_{-i} → A_i such that f_i(a) = g_i(a_{-i}) for every state a ∈ A.

In proving the historyless case of the main theorem, we will make repeated use of the following consequence of self independence.

Lemma 3.1.
In a historyless interaction system with self-independent reaction functions, if two committed states differ in only one node's action, then they are committed to the same stable state.

Proof. To see this, let a and a′ be committed states that differ only in their ith coordinate, for some i ∈ [n], as in Fig. 2. Let σ be any fair schedule such that σ(1) = {i}, and consider the (a, σ)- and (a′, σ)-trajectories. When node i is activated in the first timestep, it will choose the same action regardless of whether the initial state was a or a′, by self independence. This means that both these trajectories have the same second state and are therefore identical after the first timestep. Thus, since a and a′ are both committed, they must be committed to the same stable state.

Theorem 3.1 (Special case of main theorem: historyless case). No historyless interaction system with self-independent reaction functions and more than one stable state is convergent.
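For finite systems, the hypotheses of Thm. 3.1 can be checked mechanically. The sketch below (our names; reaction functions assumed to be given as callables on states) enumerates the stable states of Def. 3.5's discussion and tests the self independence of Def. 3.7 by brute force.

```python
from itertools import product

def stable_states(action_spaces, f):
    """The fixed points of f: the only states to which a historyless
    system can converge (see the discussion after Def. 3.5)."""
    return [a for a in product(*action_spaces)
            if tuple(fi(a) for fi in f) == a]

def is_self_independent(i, action_spaces, f_i):
    """Def. 3.7: f_i is self-independent iff its output never changes
    when only node i's own action varies."""
    others = [s for j, s in enumerate(action_spaces) if j != i]
    for rest in product(*others):
        outs = {f_i(rest[:i] + (ai,) + rest[i:]) for ai in action_spaces[i]}
        if len(outs) > 1:
            return False
    return True
```

On the Fig. 1 system (each node copies the other), both reaction functions are self-independent and there are two stable states, so Thm. 3.1 implies that the system is not convergent.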

a = (a_1, ..., a_i, ..., a_n)    a′ = (a_1, ..., a′_i, ..., a_n)

Fig. 2. Activating {i} from two states that differ only in the ith coordinate will have the same outcome.

To prove this theorem, we show that if such a system were convergent, then there would be a fair trajectory that never reaches a committed state, contradicting the assumption of convergence. The first step (Lem. 3.2) is to show that an uncommitted state exists. We then show (Lem. 3.3) that from any uncommitted state, it is always possible to activate any given node without visiting a committed state. This means that committed states can be avoided forever on a trajectory that activates every node infinitely many times, i.e., a fair trajectory.

Lemma 3.2. Every convergent historyless system with self-independent reaction functions and more than one stable state has at least one uncommitted state.

Proof. Suppose instead that every state in the system is committed. Then, by inductively applying Lem. 3.1, it would follow that all states must be committed to the same stable state. This is impossible, though, since each stable state is committed to itself. Hence there must be an uncommitted state.

Lemma 3.3. In every convergent historyless system with self-independent reaction functions and more than one stable state, for any uncommitted state a and any node i, there is a path from a that activates i and visits no committed state.

Proof. Assume for contradiction that there is no such path, i.e., that every i-activating path from a reaches a committed state. Let σ be any schedule, and consider the subgraph shown in Fig. 3. The (a, σ)-trajectory is a, u^1, u^2, .... For each time t, v^t is the state reached by following the schedule σ for t − 1 steps, then activating σ(t) ∪ {i}. By self independence, activating {i} from u^t or v^t leads to the same state, which we call w^t. For every t ∈ Z+, the state v^t is reached from a by a path that activates i, so by our assumption v^t is a committed state.
Let b be the state to which v^1 is committed. We now show that every v^t is committed to b. Fix t ≥ 2, and suppose that v^{t-1} is committed to b. Since w^{t-1} is reachable from v^{t-1}, it is also committed to b. Consider all states that result from activating a set containing i at u^t. By assumption, all these states must be committed. As before, Lem. 3.1 can be applied inductively to show that v^t must be committed to the same stable state as w^{t-1}, namely b. If σ is a fair schedule, then there is some time t for which i ∈ σ(t). For this t, we have u^t = v^t, so u^t is committed to b. Thus for any fair schedule σ, the (a, σ)-trajectory converges to b, contradicting the assumption that a is uncommitted. We conclude that there is an i-activating path from a to another uncommitted state.

It follows immediately from these two lemmas that, in a convergent system satisfying the hypotheses of Thm. 3.1, it is possible to activate each node infinitely many times without ever visiting a committed state. This means that there is a fair trajectory that never reaches a committed state. Since stable states are committed, this contradicts the assumption of convergence, proving the theorem.
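For the Fig. 1 system, the fair trajectory whose existence the proof establishes can be exhibited directly: the schedule that activates both nodes at every step is fair, yet from αβ the system oscillates between αβ and βα forever and never reaches a stable (hence committed) state. A minimal check of this witness:

```python
# Fig. 1 reaction functions (0-based): each node copies the other's
# action. Activating {1, 2} at every timestep is a fair schedule.
f = [lambda a: a[1], lambda a: a[0]]

a = ("α", "β")
visited = []
for t in range(6):
    visited.append(a)
    a = (f[0](a), f[1](a))  # both nodes react to the same current state

# The trajectory alternates αβ, βα, αβ, ... and avoids both stable states.
assert ("α", "α") not in visited and ("β", "β") not in visited
```

This is exactly the adversarial behavior that Thm. 3.1 guarantees whenever self-independent historyless dynamics have more than one stable state.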

Fig. 3. The states in the bottom row are all committed to the same stable state. [Diagram: the top row is the (a, σ)-trajectory a, u^1, u^2, u^3, ...; activating σ(t) ∪ {i} instead of σ(t) leads to v^t, and activating {i} from either u^t or v^t leads to w^t.]

4 Interaction Systems with History

In this section, we present our full model of asynchronous interaction, in which nodes' reaction functions may take the system's past states into account in addition to the current state. We also state our main theorem, a general impossibility result for convergence of nodes' actions under bounded-recall dynamics in asynchronous, distributed computational environments. This theorem is proved in Sec. 10.

4.1 Computational Nodes Interacting

Our model for asynchronous interaction with history generalizes the historyless interaction systems of Sec. 3: at every discrete timestep, each node activated by a schedule simultaneously applies its deterministic reaction function to select a new action, which is immediately visible to all other nodes. But whereas the domain of reaction functions in a historyless system is the state space, we now consider reaction functions whose domain is the set of all histories.

Definition 4.1. Given a state space A, a history is a nonempty finite sequence of states, H ∈ A^l for some l ∈ Z+. The set of all histories is A^+ = ∪_{l ∈ Z+} A^l.

We can now formally define general interaction systems. This definition differs from that of a historyless interaction system only in the domain of the reaction functions.

Definition 4.2. An interaction system is characterized by a tuple (n, A, f): The system has n ∈ Z+ computational nodes, labeled 1, ..., n. A = A_1 × · · · × A_n, where each A_i is a set called the action space of node i. A is called the state space of the system, and a state is an n-tuple a = (a_1, ..., a_n) ∈ A. f : A^+ → A is the function given by f(H) = (f_1(H), ..., f_n(H)), where f_i : A^+ → A_i is called node i's reaction function.

Our definition of a trajectory needs to be adjusted for these systems.
In this context, a trajectory begins with an initial history of arbitrary length rather than at a single initial state. Notice that the initial history might be erroneous; we do not require that it could have emerged from system interaction.

Definition 4.3. Let H = (a^0, ..., a^{l-1}) be a history, for some l ∈ Z+, and let σ be a schedule. Then the (H, σ)-trajectory of the system is the infinite sequence a^0, a^1, a^2, ... extending H such that for every t ≥ l and i ∈ [n],

    a_i^t = f_i(a^0, ..., a^{t-1})  if i ∈ σ(t),
    a_i^t = a_i^{t-1}               otherwise.          (2)
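Def. 4.3 differs from the historyless dynamics only in that activated nodes see the entire history. A minimal sketch of Eq. (2), with illustrative names of our own:

```python
def trajectory_with_history(f, H0, schedule, steps):
    """Eq. (2): starting from an arbitrary initial history H0, each
    activated node applies its reaction function to the whole history
    so far; every other node repeats its most recent action."""
    hist = [tuple(a) for a in H0]
    for t in range(len(H0), len(H0) + steps):
        nxt = list(hist[-1])
        for i in schedule(t):
            nxt[i] = f[i](tuple(hist))  # the full history is visible
        hist.append(tuple(nxt))
    return hist
```

For reaction functions that inspect only `hist[-1]`, this reduces to the historyless trajectories of Def. 3.3.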

The definitions of fairness and convergence are as in Defs. 3.4 and 3.5. However, fair trajectories in general interaction systems can converge to a larger class of states than in historyless systems. We call any such state a steady state.

Definition 4.4. A state a is a steady state if there exists a history H and a fair schedule σ such that the (H, σ)-trajectory converges to a.

Unlike stable states in historyless systems, it is possible for the system to visit a steady state without converging to that state. In this sense, steady states are not necessarily stable in general interaction systems; the convergence may be fragile and sensitive to small changes in the activation schedule.

4.2 Informational Restrictions on Reaction Functions

This framework allows for very powerful reaction functions. We now present several possible restrictions on the information they may use. These are illustrated in Fig. 4. Informally, a node's reaction function is self-independent if it ignores the node's own past and current actions; this generalizes the notion of self independence given in Def. 3.7.

Definition 4.5. A reaction function f_i is self-independent if node i's own (past and present) actions do not affect the outcome of f_i. For a history H = (a^0, ..., a^{l-1}), we write H_{-i} for (a^0_{-i}, ..., a^{l-1}_{-i}). Using this notation, f_i is self-independent if there exists a function g_i : A^+_{-i} → A_i such that f_i(H) = g_i(H_{-i}) for every history H ∈ A^+.

A reaction function has bounded recall if it ignores the distant past.

Definition 4.6. Given k ∈ Z+ and a history H = (a^0, ..., a^{t-1}) ∈ A^t with t ≥ k, the k-history at H is H|_k := (a^{t-k}, ..., a^{t-1}), the k-tuple of most recent states. A reaction function f_i has k-recall if it only depends on the k-history and the time counter, i.e., there exists a function g_i : A^k × Z+ → A_i such that f_i(H) = g_i(H|_k, t) for every time t ≥ k and history H ∈ A^t.

A reaction function with bounded recall is stationary if it also ignores the time counter.
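These recall restrictions can be sketched as wrappers that shrink a reaction function's domain (the wrapper names are ours): `k_recall` builds the non-stationary rule of Def. 4.6 from a function of the k most recent states and the time counter, and `stationary_k_recall` additionally drops the counter, matching the stationarity notion formalized in Def. 4.7 below.

```python
def k_recall(g, k):
    """Def. 4.6: the induced reaction function sees only the k most
    recent states plus the time counter (here, the history's length)."""
    return lambda H: g(tuple(H[-k:]), len(H))

def stationary_k_recall(g, k):
    """Stationary k-recall: the time counter is ignored. With k = 1
    this is exactly a historyless reaction function."""
    return lambda H: g(tuple(H[-k:]))
```

Either wrapper yields a full-domain reaction function f_i : A^+ → A_i, so the restricted rules plug directly into the general dynamics of Def. 4.3.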
When it is clear that all reaction functions in a system are stationary and have k-recall, we may refer to a k-history as a history, since it captures all information that is relevant to the nodes' future behavior.

Definition 4.7. We say that a k-recall reaction function is stationary if the time counter t is of no importance, that is, if there exists a function g_i : A^k → A_i such that f_i(H) = g_i(H|_k) for every time t ≥ k and history H ∈ A^t. A reaction function f_i is historyless if f_i is both 1-recall and stationary, that is, if f_i only depends on the nodes' most recent actions.

Naturally, a system in which every reaction function is historyless may be viewed as a historyless system in the sense of Def. 3.1. For all of these restrictions, we may slightly abuse notation by referring to the restricted-domain function g_i, rather than f_i, as the node's reaction function.

4.3 General Non-convergence Result

We now present a general impossibility result for convergence of nodes' actions under bounded-recall dynamics in asynchronous, distributed computational environments.

Fig. 4. Shading shows the information about past and current actions available to node 3 at time t given different reaction function restrictions. Left: self-independent. Node 3 can see the entire record of other nodes' past actions, but not its own. The length of this record gives the current timestamp t. Center: 2-recall. Node 3 can see only the two most recent states. Unless the reaction function is stationary, it may also use the value of the current timestamp. Right: self-independent and historyless. Node 3 can only see other nodes' most recent actions and cannot even see the value of the current timestamp.

Theorem 4.1 (Main theorem). In an interaction system where every reaction function is self-independent and has bounded recall, the existence of multiple steady states implies that the system is not convergent.

We note that this result holds even if nodes' reaction functions are not stationary. A direct proof of Thm. 4.1 would be similar in structure to the proof of Thm. 3.1. In Sec. 10, we prove a more general result, Thm. 10.2, which states that no decision protocol satisfies a condition we call independence of decisions (IoD). We also show that a convergent system with multiple steady states and self-independent, bounded-recall reaction functions would be an IoD-satisfying decision protocol, so Thm. 4.1 is a consequence of Thm. 10.2. An advantage of this approach is that it illuminates the connection between our result and the seminal result of Fischer, Lynch, and Paterson [1985] on the impossibility of resilient consensus. We show in Sec.
11 that a resilient consensus protocol in their model would also be an IoD-satisfying decision protocol, so their result then also follows from Thm. 10.2.

Note that system convergence is closely related to self stabilization, which is a guarantee that the system will reach and remain within a set of legitimate states. For a set L ⊆ A, we say that a system self-stabilizes to L if for every fair trajectory a^0, a^1, a^2, ..., there is some T ∈ Z+ such that, for every t > T, a^t ∈ L. Theorem 4.1 precludes self stabilization to any singleton set in systems satisfying its hypotheses.

4.4 Neither Bounded Recall nor Self-independence Alone Implies Non-convergence

We show that the statement of Thm. 4.1 does not hold if either the bounded-recall restriction or the self-independence restriction is removed.

Example 4.1. The bounded-recall restriction cannot be removed. Consider a system with two nodes, 1 and 2, each with the action space {α, β}. The self-independent reaction functions of the nodes are as follows: node 2 always chooses node 1's action; node 1 will choose β if node 2's action changed from α to β in the past, and α otherwise. Observe that node 1's reaction function has unbounded recall: it depends on the entire history of interaction. We observe that the system is convergent and has two steady states. Indeed, if node 1 chooses β at some point in time due to the fact that node 2's action changed from α to β, then it will continue to do so thereafter; if, on the other hand, node 1 never does so, then from some point in time

onwards, node 1's action is constantly α. In both cases, node 2 will eventually have the same action as node 1, and thus convergence to one of the two steady states, (α, α) and (β, β), is guaranteed. Hence, two steady states exist, and the system is convergent nonetheless.

Example 4.2. The self-independence restriction cannot be removed. Consider a system with two nodes, 1 and 2, each with action space {α, β}. Each node i has a historyless reaction function f_i, defined as follows: f_i(α, α) = β, and in all other cases the node always (re)selects its own current action (e.g., f_1(α, β) = α, f_2(α, β) = β). Observe that the system has three steady states, namely all action profiles but (α, α), yet it can easily be seen to be convergent.

5 Asynchronous Game Dynamics

Traditionally, work in game theory on game dynamics (e.g., best-response dynamics) relies on the explicit or implicit premise that players' behavior is somehow synchronized (in some contexts play is sequential, while in others it is simultaneous). In Sec. 9, we discuss simultaneous play when information about the game is distributed. Here, we consider the realistic scenario in which there is no computational center that can synchronize players' selection of strategies. We describe these dynamics in the setting of this work and exhibit an impossibility result for best-response, and more general, dynamics.

A game is characterized by a triple (n, S, u). There are n players, 1, ..., n. Each player i has a strategy set S_i. S = S_1 × ... × S_n is the space of strategy profiles s = (s_1, ..., s_n). Each player i has a utility function u_i : S → R, where u = (u_1, ..., u_n). Intuitively, player i prefers states for which u_i is higher. Informally, a player is best-responding when it has no incentive to unilaterally change its strategy.

Definition 5.1. In a game U = (n, S, u), player i is best-responding at s ∈ S if u_i(s) ≥ u_i(s') for every s' ∈ S such that s'_{−i} = s_{−i}. We write s_i ∈ BR_i^U(s).
A strategy profile s ∈ S is a pure Nash equilibrium (PNE) if every player is best-responding at s.

There is a natural relationship between games and the interaction systems described in Sec. 4. A player with a strategy set corresponds directly to a node with an action space, and a strategy profile may be viewed as a state. These correspondences are so immediate that we often use these terms interchangeably. Consider the case of best-response dynamics for a game in which best responses are unique (a generic game): starting from some arbitrary strategy profile, each player chooses its unique best response to the other players' strategies when activated. Convergence to pure Nash equilibria under best-response dynamics is the subject of extensive research in game theory and economics, and both positive [Rosenthal 1973; Monderer and Shapley 1996] and negative [Hart and Mas-Colell 2003, 2006] results are known.

If we view each player i in a game (n, S, u) as a node in an interaction system, then under best-response dynamics its utility function u_i effectively induces a self-independent historyless reaction function f_i : S_{−i} → S_i, as long as best responses are unique. Formally, f_i(a_{−i}) = arg max_{α ∈ S_i} u_i(a_1, ..., α, ..., a_n). Conversely, any system with historyless and self-independent reaction functions can be described as following best-response dynamics for a game with unique best responses. Given reaction functions

f_1, ..., f_n, consider the game where each player i's utility function is given by

u_i(a) = 1 if f_i(a_{−i}) = a_i, and u_i(a) = 0 otherwise.

Best-response dynamics on this game replicate the dynamics induced by those reaction functions. Thus historyless and self-independent dynamics are exactly equivalent to best-response dynamics. Since pure Nash equilibria are fixed points of these dynamics, Thm. 3.1 may be restated in the following form.

Theorem 5.1. If there are two or more pure Nash equilibria in a game with unique best responses, then asynchronous best-response dynamics can potentially oscillate indefinitely.

In fact, best-response dynamics are just one way to derive reaction functions from utility functions, i.e., to translate preferences into behaviors. In general, a game dynamics protocol is a mapping from games to systems that makes this translation. Given a game (n, S, u) as input, the protocol selects reaction functions f = (f_1, ..., f_n) and returns an interaction system (n, S, f). The above non-convergence result holds for a large class of these protocols. In particular, it holds for bounded-recall and self-independent game dynamics whenever pure Nash equilibria are steady states. When cast into game-theoretic terminology, Thm. 4.1 says that if players' choices of strategies are not synchronized, then the existence of two (or more) pure Nash equilibria implies that this broad class of game dynamics is not guaranteed to reach a pure Nash equilibrium. This result should be contrasted with positive results for such dynamics in traditional synchronous game-theoretic environments. In particular, this result applies to best-response dynamics with bounded recall and consistent tie-breaking rules (studied by Zapechelnyuk [2008]).

Theorem 5.2.
If there are two or more pure Nash equilibria in a game with unique best responses, then all bounded-recall self-independent dynamics for which those equilibria are fixed points can fail to converge in asynchronous environments.

6 Applications: Circuits, Social Networks, and Routing

We present implications of our impossibility result in Sec. 4 for several well-studied environments: circuit design, social networks, and Internet protocols.

6.1 Asynchronous Circuits

The implications of asynchrony for circuit design have been extensively studied in computer architecture research [Davis and Nowick 1997]. By regarding each logic gate as a node executing an inherently historyless and self-independent reaction function, we show that an impossibility result for the stabilization of asynchronous circuits follows from Thm. 3.1. In this setting there is a Boolean circuit, represented as a directed graph G, in which the vertices represent the circuit's inputs and logic gates, and the edges represent the circuit's connections. The activation of the logic gates is asynchronous. That is, the gates' outputs are initialized in some arbitrary way, and then the update of each gate's output, given its inputs, is uncoordinated and unsynchronized. A stable Boolean assignment in this framework is an assignment of Boolean values to the circuit inputs and the logic gates that is consistent with each gate's truth table. We say that

a Boolean circuit is inherently stable if it is guaranteed to converge to a stable Boolean assignment regardless of the initial Boolean assignment. To show how Thm. 3.1 applies to this setting, we model an asynchronous circuit with a fixed input as a historyless interaction system with self-independent reaction functions. Every node in the system has action space {0, 1}. There is a node for each input vertex, which has a constant (and thus self-independent) reaction function. For each logic gate, the system includes a node whose reaction function implements the logic gate on the actions of the nodes corresponding to its inputs. If any gate takes its own output directly as an input, we model this using an additional identity gate; this means that all reaction functions are self-independent. Since every stable Boolean assignment corresponds to a stable state of this system, the following instability result follows from Thm. 3.1:

Theorem 6.1. If two or more stable Boolean assignments exist for an asynchronous Boolean circuit with a given input, then that asynchronous circuit is not inherently stable on that input.

6.2 Diffusion of Technologies in Social Networks

Understanding the ways in which innovations, ideas, technologies, and practices disseminate through social networks is fundamental to the social sciences. We consider the classic economic setting [Morris 2000] (which has lately also been approached by computer scientists [Immorlica et al. 2007]) in which each decision maker has two technologies {X, Y} to choose from and wishes to have the same technology as the majority of its friends (neighboring nodes in the social network). We exhibit a general asynchronous instability result for this environment. In this setting there is a social network of users, represented by a connected graph in which users are the vertices and edges correspond to friendship relationships. There are two competing technologies, X and Y.
Each user will repeatedly reassess his choice of technology, at timesteps separated by arbitrary finite intervals. When this happens, the user will select X if at least half of his friends are using X, and otherwise select Y. A stable global state is a fixed point of these choice functions, meaning that no user will ever again switch technologies. Observe that if every user has chosen X or every user has chosen Y, then the system is in a stable global state. The dynamics of this diffusion can be described as asynchronous best-response dynamics for the game in which each player's utility is 1 if his choice of technology is consistent with the majority (with ties broken in favor of X) and 0 otherwise. This game has unique best responses, and the strategy profiles (X, ..., X) and (Y, ..., Y) are both pure Nash equilibria for this game. Thus Thm. 5.1 implies the following result.

Theorem 6.2. In every social network with at least one edge, the diffusion of technologies can potentially oscillate indefinitely.

6.3 Interdomain Routing

Interdomain routing is the task of establishing routes between the smaller networks, or autonomous systems (ASes), that make up the Internet. It is handled by the Border Gateway Protocol (BGP). We abstract a recent result of Sami et al. [2009] concerning BGP non-convergence and show that this result extends to several BGP-based multipath routing protocols that have been proposed in the past few years. In the standard model for analyzing BGP dynamics [Griffin et al. 2002], there is a network of source ASes that wish to send traffic to a unique destination AS d. Each AS i has a ranking

function <_i that specifies i's strict preferences over all simple (loop-free) routes leading from i to d. Each AS also has an export policy that specifies which routes it is willing to make available to each neighboring AS. Under BGP, each AS constantly selects the best route that is available to it (see [Griffin et al. 2002] for more details). BGP safety, i.e., guaranteed convergence to a stable routing outcome, is a fundamental desideratum that has been the subject of extensive work in both the networking and the standards communities.

We now cast interdomain routing into the terminology of Sec. 3 to obtain a non-termination result for BGP as a corollary of Thm. 3.1. Each AS acts as a computational node. The action space of each node i is the set of all simple routes from i to the destination d, together with the empty route. For every state (R_1, ..., R_n) of the system, i's reaction function f_i considers the set of routes S = {(i, j)R_j : j is i's neighbor and R_j is exportable to i}. If S is empty, f_i returns the empty route. Otherwise, f_i selects the route in S that is optimal with respect to <_i. Observe that this reaction function is deterministic, self-independent, and historyless, and that a stable routing tree is a stable state of this system. Thus Thm. 3.1 implies the following result of Sami et al.

Theorem 6.3. [Sami et al. 2009] If there are multiple stable routing trees in a network, then BGP is not safe on that network.

Importantly, the asynchronous model of Sec. 3 is significantly more restrictive than that of Sami et al., so the result implied by Thm. 3.1 is stronger. Thm. 4.1 implies an even more general non-safety result for routing protocols in which ASes act self-independently and with bounded recall. In particular, this includes recent proposals for BGP-based multipath routing protocols that allow each AS to send traffic along multiple routes, e.g., R-BGP [Kushman et al. 2007] and Neighbor-Specific BGP (NS-BGP) [Wang et al. 2009].
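To make the reaction functions above concrete, here is a minimal sketch (our illustration, not from the paper) of the classic two-AS instance with two stable routing trees, often called DISAGREE: each of ASes 1 and 2 prefers the route through the other over its direct route to the destination d = 0. The `PREFS` table and the `react` helper are our own names; under a schedule that activates both ASes at every timestep, the historyless, self-independent reaction functions oscillate forever.

```python
# Illustrative two-AS instance with two stable routing trees (not from the
# paper). Routes are tuples of AS identifiers ending at destination 0, and
# PREFS lists each AS's simple routes from most to least preferred.
PREFS = {
    1: [(1, 2, 0), (1, 0)],  # AS 1 prefers routing through AS 2
    2: [(2, 1, 0), (2, 0)],  # AS 2 prefers routing through AS 1
}

def react(i, neighbor_route):
    """Historyless, self-independent reaction function of AS i: choose the
    best-ranked route among the direct route and the simple extension of
    the neighbor's current route."""
    candidate = (i,) + neighbor_route
    available = [r for r in PREFS[i] if r in (candidate, (i, 0))]
    return min(available, key=PREFS[i].index)

# Activate both ASes at every timestep (a 1-fair schedule).
state = ((1, 0), (2, 0))
trajectory = []
for _ in range(6):
    trajectory.append(state)
    state = (react(1, state[1]), react(2, state[0]))

# The two stable routing trees ((1, 2, 0), (2, 0)) and ((1, 0), (2, 1, 0))
# are never reached; the system alternates between two unstable states.
print(trajectory[:4])
```

On this instance Thm. 6.3 applies: two stable routing trees exist, so BGP is not safe on the network, and the simultaneous schedule above exhibits the oscillation.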
6.4 Congestion Control

We now consider the fundamental task of congestion control in communication networks, which is achieved through a combination of mechanisms on end-hosts (e.g., TCP) and on switches/routers (e.g., RED and WFQ). We briefly describe the model of congestion control studied by Godfrey et al. [2010]. There is a network of routers, represented by a directed graph G, in which vertices represent routers and edges represent communication links. Each edge e has capacity c_e. There are n source-target pairs of vertices (s_i, t_i), termed connections, that represent communicating pairs of end-hosts. Each source-target pair (s_i, t_i) is connected via some fixed route R_i. Each source s_i transmits at a constant rate γ_i > 0.⁸ For each of a router's outgoing edges, the router has a queue management, or queuing, policy that dictates how the edge's capacity should be allocated between the connections whose routes traverse that edge. The network is asynchronous, so routers' queuing decisions can be made simultaneously. An equilibrium of the network is a global configuration of edges' capacity allocations such that the incoming and outgoing flows on each edge are consistent with the queuing policy for that edge. Godfrey et al. show that, while one might expect flow to be received at a constant rate whenever it is transmitted at a constant rate, this is not necessarily the case. Indeed,

⁸ This is modeled via the addition of an edge e = (u, s_i) to G, such that c_e = γ_i, and u has no incoming edges.

Godfrey et al. present examples in which connections' throughputs can potentially fluctuate ad infinitum, never converging to an equilibrium. We now model such a network as a historyless interaction system to show that every network with multiple equilibria can oscillate indefinitely. The computational nodes of the system are the edges. The action space of each edge e intuitively consists of all possible ways to divide the traffic going through e between the connections whose routes traverse e. More formally, for every edge e, let N(e) be the number of connections whose paths go through e. Then e's action space is A_e = {(x_1, ..., x_{N(e)}) : each x_i ≥ 0 and Σ_i x_i ≤ c_e}. Edge e's reaction function f_e models the queuing policy according to which e's capacity is shared: for every N(e)-tuple of nonnegative incoming flows (w_1, ..., w_{N(e)}), f_e outputs an action (x_1, ..., x_{N(e)}) ∈ A_e such that x_i ≤ w_i for every i ∈ [N(e)]; a connection's flow leaving the edge cannot exceed its flow entering the edge. These reaction functions are historyless and self-independent, and an equilibrium of the network is a stable state of this system. Using Theorem 3.1, then, we can obtain the following impossibility result.

Theorem 6.4. If there are multiple capacity-allocation equilibria in the network, then the dynamics of congestion control can potentially oscillate indefinitely.

7 r-convergence and Randomness

We now consider the implications for convergence of two natural restrictions on schedules: r-fairness and randomization. Recall from Sec. 3 that a fair schedule is one in which every node is activated infinitely many times, but without any restriction on the delay between these activations. An r-fair schedule is one in which no node ever has to wait more than r timesteps before its next activation. This allows us to define a weaker form of system convergence, r-convergence.

Definition 7.1.
For r ∈ Z^+, a schedule σ is r-fair if every node is activated at least once in every r consecutive timesteps, i.e., for every t ∈ Z^+ and every node i, i ∈ σ(t) ∪ ... ∪ σ(t + r − 1). A trajectory is r-fair if it results from some r-fair schedule, and a system is r-convergent if every r-fair trajectory converges.

7.1 Snakes in Boxes and r-convergence

Theorem 3.1 deals with convergence and not r-convergence, and thus does not impose restrictions on the number of consecutive timesteps in which a node can be inactive. What happens if there is an upper bound r on this number? We now show that if r < n − 1, then convergence of historyless and self-independent dynamics is sometimes achievable, even in the presence of multiple steady states (and so our impossibility result breaks).

Example 7.1. A system that is r-convergent if and only if r < n − 1. Consider a system with n ≥ 2 nodes, 1, ..., n, each with the action space {α, β}. The nodes' historyless and self-independent reaction functions are as follows. For every i ∈ [n], f_i(a) = α if a_{−i} = α^{n−1}, and β otherwise. Observe that there exist two stable states: α^n and β^n.
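For small n, the claim about the stable states of these reaction functions can be sanity-checked by brute force. The sketch below (our illustration, not part of the paper) enumerates all action profiles for n = 4 and confirms that α^n and β^n are exactly the fixed points:

```python
from itertools import product

def f(i, a, n):
    """Example 7.1's reaction function: node i plays α iff all *other*
    nodes currently play α (self-independent: a[i] itself is ignored)."""
    others = a[:i] + a[i + 1:]
    return 'α' if others == ('α',) * (n - 1) else 'β'

n = 4
steady = [a for a in product('αβ', repeat=n)
          if all(f(i, a, n) == a[i] for i in range(n))]
print(steady)  # the two stable states, α^4 and β^4
```

Any profile with exactly one β is not stable (every other node then reacts with β), and any profile with two or more βs drives all nodes to β; the enumeration reflects this.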

Observe that if r ≥ n − 1, then the following oscillation is possible. Initially, only node 1's action is β, and all other nodes' actions are α. Then nodes 1 and 2 are activated and, consequently, node 1's action becomes α and node 2's action becomes β. Next, nodes 2 and 3 are activated, and thus 2's action becomes α and 3's action becomes β. Then nodes 3 and 4 are activated, then 4 and 5, and so on, traversing all nodes over and over again in cyclic order. This goes on indefinitely, never reaching one of the two stable states. Observe that each node is activated at least once within every sequence of n − 1 consecutive timesteps.

If r < n − 1, however, then convergence is guaranteed. To see this, observe that if at some point in time there are at least two nodes whose action is β, then convergence to β^n is guaranteed. Clearly, if each node's action is α, then convergence to α^n is guaranteed. Thus, an oscillation is possible only if, in each timestep, exactly one node's action is β. Observe that, given our definition of the nodes' reaction functions, this can only happen if the activation sequence is (essentially) as described above, i.e., exactly two nodes are activated at a time. Observe also that this kind of activation sequence is impossible for r < n − 1.

What about r > n? We use a classical combinatorial result due to Abbott and Katchalski regarding the size of a snake-in-the-box in a hypercube to address this question.

Definition 7.2. A chordless path in a hypercube Q_n is a path P = (v_0, ..., v_k) such that for each v_i, v_j on P, if v_i and v_j are neighbors in Q_n then v_j ∈ {v_{i−1}, v_{i+1}}. A snake in a hypercube is a simple chordless cycle.

Theorem 7.1. [Abbott and Katchalski 1991] Let z ∈ Z^+ be sufficiently large. Then the size |S| of a maximal snake in the hypercube Q_z is at least λ2^z for some λ ≥ 0.3.

Using this theorem, we show that some systems are r-convergent for exponentially large r, but are not convergent in general.

Theorem 7.2.
Let n ∈ Z^+ be sufficiently large. There exist an integer r = Ω(2^n) and an interaction system G with n nodes, in which each node i has two possible actions and each f_i is historyless and self-independent, such that G is r-convergent but not (r + 1)-convergent.

Proof. Let the action space of each of the n nodes be {0, 1}. Consider the possible action profiles of nodes 3, ..., n, i.e., the set {0, 1}^{n−2}. Observe that this set of actions can be regarded as the (n − 2)-hypercube Q_{n−2}, and thus can be visualized as the graph whose vertices are indexed by the binary (n − 2)-tuples and such that two vertices are adjacent iff the corresponding (n − 2)-tuples differ in exactly one coordinate. Hence, the size of a maximal snake in Q_{n−2} is Ω(2^n). Let S be such a snake. Without loss of generality, we can assume that 0^{n−2} is on S (otherwise we can rename nodes' actions so as to achieve this). For nodes i = 1 and 2, the historyless and self-independent reaction functions are f_i(a) = 0 when a_{−i} = 0^{n−1}, and 1 otherwise. Before giving the reaction functions for nodes 3, ..., n, we describe an orientation of the edges in Q_{n−2}, on which those functions will be based. Orient the edges of S to form a cycle. For any edge that has exactly one endpoint in S, orient the edge toward S. An example is given in Fig. 5. Orient all other edges arbitrarily. For each i = 3, ..., n, this orientation defines a function g_i : Q_{n−2} → {0, 1}, and the historyless and self-independent reaction functions are defined as follows, for every state a = (a_1, ..., a_n):

f_i(a) = 1 if a_1 = a_2 = 1, and f_i(a) = g_i(a_3, ..., a_n) otherwise.
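The snake property of Definition 7.2, on which this construction rests, is mechanical to check. The sketch below (our illustration, not from the paper) verifies that a candidate cycle in a hypercube is simple and chordless, using a known length-6 snake in Q_3:

```python
def hamming(u, v):
    """Number of coordinates in which two 0/1 tuples differ."""
    return sum(a != b for a, b in zip(u, v))

def is_snake(cycle):
    """Return True iff `cycle` (a list of 0/1 tuples) is a snake: a simple
    cycle in the hypercube with no chords, i.e., consecutive vertices
    differ in exactly one coordinate and no other pair is adjacent."""
    k = len(cycle)
    if len(set(cycle)) != k:  # must be simple: no repeated vertices
        return False
    for i in range(k):
        for j in range(i + 1, k):
            consecutive = (j - i == 1) or (i == 0 and j == k - 1)
            d = hamming(cycle[i], cycle[j])
            if consecutive and d != 1:      # cycle edge must be a hypercube edge
                return False
            if not consecutive and d == 1:  # adjacency elsewhere is a chord
                return False
    return True

# A maximal snake in Q_3, of length 6:
snake = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 1, 0), (1, 0, 0)]
print(is_snake(snake))      # True
print(is_snake(snake[:4]))  # False: the wrap-around pair is not an edge
```

In the proof's construction, such a snake in Q_{n−2} supplies the cycle along which the orientation (and hence the functions g_i) steer the oscillation.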


More information

CS505: Distributed Systems

CS505: Distributed Systems Department of Computer Science CS505: Distributed Systems Lecture 10: Consensus Outline Consensus impossibility result Consensus with S Consensus with Ω Consensus Most famous problem in distributed computing

More information

cs/ee/ids 143 Communication Networks

cs/ee/ids 143 Communication Networks cs/ee/ids 143 Communication Networks Chapter 5 Routing Text: Walrand & Parakh, 2010 Steven Low CMS, EE, Caltech Warning These notes are not self-contained, probably not understandable, unless you also

More information

Distributed Optimization. Song Chong EE, KAIST

Distributed Optimization. Song Chong EE, KAIST Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links

More information

Exact and Approximate Equilibria for Optimal Group Network Formation

Exact and Approximate Equilibria for Optimal Group Network Formation Noname manuscript No. will be inserted by the editor) Exact and Approximate Equilibria for Optimal Group Network Formation Elliot Anshelevich Bugra Caskurlu Received: December 2009 / Accepted: Abstract

More information

Degradable Agreement in the Presence of. Byzantine Faults. Nitin H. Vaidya. Technical Report #

Degradable Agreement in the Presence of. Byzantine Faults. Nitin H. Vaidya. Technical Report # Degradable Agreement in the Presence of Byzantine Faults Nitin H. Vaidya Technical Report # 92-020 Abstract Consider a system consisting of a sender that wants to send a value to certain receivers. Byzantine

More information

Synthesis weakness of standard approach. Rational Synthesis

Synthesis weakness of standard approach. Rational Synthesis 1 Synthesis weakness of standard approach Rational Synthesis 3 Overview Introduction to formal verification Reactive systems Verification Synthesis Introduction to Formal Verification of Reactive Systems

More information

Lower Bound on the Step Complexity of Anonymous Binary Consensus

Lower Bound on the Step Complexity of Anonymous Binary Consensus Lower Bound on the Step Complexity of Anonymous Binary Consensus Hagit Attiya 1, Ohad Ben-Baruch 2, and Danny Hendler 3 1 Department of Computer Science, Technion, hagit@cs.technion.ac.il 2 Department

More information

Towards a General Theory of Non-Cooperative Computation

Towards a General Theory of Non-Cooperative Computation Towards a General Theory of Non-Cooperative Computation (Extended Abstract) Robert McGrew, Ryan Porter, and Yoav Shoham Stanford University {bmcgrew,rwporter,shoham}@cs.stanford.edu Abstract We generalize

More information

A Price-Based Approach for Controlling Networked Distributed Energy Resources

A Price-Based Approach for Controlling Networked Distributed Energy Resources A Price-Based Approach for Controlling Networked Distributed Energy Resources Alejandro D. Domínguez-García (joint work with Bahman Gharesifard and Tamer Başar) Coordinated Science Laboratory Department

More information

CS364A: Algorithmic Game Theory Lecture #13: Potential Games; A Hierarchy of Equilibria

CS364A: Algorithmic Game Theory Lecture #13: Potential Games; A Hierarchy of Equilibria CS364A: Algorithmic Game Theory Lecture #13: Potential Games; A Hierarchy of Equilibria Tim Roughgarden November 4, 2013 Last lecture we proved that every pure Nash equilibrium of an atomic selfish routing

More information

Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast. Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas

Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast. Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas Network Algorithms and Complexity (NTUA-MPLA) Reliable Broadcast Aris Pagourtzis, Giorgos Panagiotakos, Dimitris Sakavalas Slides are partially based on the joint work of Christos Litsas, Aris Pagourtzis,

More information

Weakening Failure Detectors for k-set Agreement via the Partition Approach

Weakening Failure Detectors for k-set Agreement via the Partition Approach Weakening Failure Detectors for k-set Agreement via the Partition Approach Wei Chen 1, Jialin Zhang 2, Yu Chen 1, Xuezheng Liu 1 1 Microsoft Research Asia {weic, ychen, xueliu}@microsoft.com 2 Center for

More information

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash Equilibrium Price of Stability Coping With NP-Hardness

More information

The Las-Vegas Processor Identity Problem (How and When to Be Unique)

The Las-Vegas Processor Identity Problem (How and When to Be Unique) The Las-Vegas Processor Identity Problem (How and When to Be Unique) Shay Kutten Department of Industrial Engineering The Technion kutten@ie.technion.ac.il Rafail Ostrovsky Bellcore rafail@bellcore.com

More information

Brown s Original Fictitious Play

Brown s Original Fictitious Play manuscript No. Brown s Original Fictitious Play Ulrich Berger Vienna University of Economics, Department VW5 Augasse 2-6, A-1090 Vienna, Austria e-mail: ulrich.berger@wu-wien.ac.at March 2005 Abstract

More information

CS 573: Algorithmic Game Theory Lecture date: Feb 6, 2008

CS 573: Algorithmic Game Theory Lecture date: Feb 6, 2008 CS 573: Algorithmic Game Theory Lecture date: Feb 6, 2008 Instructor: Chandra Chekuri Scribe: Omid Fatemieh Contents 1 Network Formation/Design Games 1 1.1 Game Definition and Properties..............................

More information

Realization Plans for Extensive Form Games without Perfect Recall

Realization Plans for Extensive Form Games without Perfect Recall Realization Plans for Extensive Form Games without Perfect Recall Richard E. Stearns Department of Computer Science University at Albany - SUNY Albany, NY 12222 April 13, 2015 Abstract Given a game in

More information

Can an Operation Both Update the State and Return a Meaningful Value in the Asynchronous PRAM Model?

Can an Operation Both Update the State and Return a Meaningful Value in the Asynchronous PRAM Model? Can an Operation Both Update the State and Return a Meaningful Value in the Asynchronous PRAM Model? Jaap-Henk Hoepman Department of Computer Science, University of Twente, the Netherlands hoepman@cs.utwente.nl

More information

K-NCC: Stability Against Group Deviations in Non-Cooperative Computation

K-NCC: Stability Against Group Deviations in Non-Cooperative Computation K-NCC: Stability Against Group Deviations in Non-Cooperative Computation Itai Ashlagi, Andrey Klinger, and Moshe Tennenholtz Technion Israel Institute of Technology Haifa 32000, Israel Abstract. A function

More information

Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms. CS 249 Project Fall 2005 Wing Wong

Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms. CS 249 Project Fall 2005 Wing Wong Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms CS 249 Project Fall 2005 Wing Wong Outline Introduction Asynchronous distributed systems, distributed computations,

More information

Generalized Efficiency Bounds In Distributed Resource Allocation

Generalized Efficiency Bounds In Distributed Resource Allocation 1 Generalized Efficiency Bounds In Distributed Resource Allocation Jason R. Marden Tim Roughgarden Abstract Game theory is emerging as a popular tool for distributed control of multiagent systems. To take

More information

6.207/14.15: Networks Lecture 24: Decisions in Groups

6.207/14.15: Networks Lecture 24: Decisions in Groups 6.207/14.15: Networks Lecture 24: Decisions in Groups Daron Acemoglu and Asu Ozdaglar MIT December 9, 2009 1 Introduction Outline Group and collective choices Arrow s Impossibility Theorem Gibbard-Satterthwaite

More information

On the Complexity of Computing an Equilibrium in Combinatorial Auctions

On the Complexity of Computing an Equilibrium in Combinatorial Auctions On the Complexity of Computing an Equilibrium in Combinatorial Auctions Shahar Dobzinski Hu Fu Robert Kleinberg April 8, 2014 Abstract We study combinatorial auctions where each item is sold separately

More information

Convergence and Approximation in Potential Games

Convergence and Approximation in Potential Games Convergence and Approximation in Potential Games George Christodoulou 1, Vahab S. Mirrokni 2, and Anastasios Sidiropoulos 2 1 National and Kapodistrian University of Athens Dept. of Informatics and Telecommunications

More information

Complexity Bounds for Muller Games 1

Complexity Bounds for Muller Games 1 Complexity Bounds for Muller Games 1 Paul Hunter a, Anuj Dawar b a Oxford University Computing Laboratory, UK b University of Cambridge Computer Laboratory, UK Abstract We consider the complexity of infinite

More information

Consensus and Universal Construction"

Consensus and Universal Construction Consensus and Universal Construction INF346, 2015 So far Shared-memory communication: safe bits => multi-valued atomic registers atomic registers => atomic/immediate snapshot 2 Today Reaching agreement

More information

The price of anarchy of finite congestion games

The price of anarchy of finite congestion games The price of anarchy of finite congestion games George Christodoulou Elias Koutsoupias Abstract We consider the price of anarchy of pure Nash equilibria in congestion games with linear latency functions.

More information

Latches. October 13, 2003 Latches 1

Latches. October 13, 2003 Latches 1 Latches The second part of CS231 focuses on sequential circuits, where we add memory to the hardware that we ve already seen. Our schedule will be very similar to before: We first show how primitive memory

More information

Definition Existence CG vs Potential Games. Congestion Games. Algorithmic Game Theory

Definition Existence CG vs Potential Games. Congestion Games. Algorithmic Game Theory Algorithmic Game Theory Definitions and Preliminaries Existence of Pure Nash Equilibria vs. Potential Games (Rosenthal 1973) A congestion game is a tuple Γ = (N,R,(Σ i ) i N,(d r) r R) with N = {1,...,n},

More information

Overview. Discrete Event Systems Verification of Finite Automata. What can finite automata be used for? What can finite automata be used for?

Overview. Discrete Event Systems Verification of Finite Automata. What can finite automata be used for? What can finite automata be used for? Computer Engineering and Networks Overview Discrete Event Systems Verification of Finite Automata Lothar Thiele Introduction Binary Decision Diagrams Representation of Boolean Functions Comparing two circuits

More information

Distributed Computing in Shared Memory and Networks

Distributed Computing in Shared Memory and Networks Distributed Computing in Shared Memory and Networks Class 2: Consensus WEP 2018 KAUST This class Reaching agreement in shared memory: Consensus ü Impossibility of wait-free consensus 1-resilient consensus

More information

6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Potential Games

6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Potential Games 6.254 : Game Theory with Engineering Applications Lecture 8: Supermodular and Asu Ozdaglar MIT March 2, 2010 1 Introduction Outline Review of Supermodular Games Reading: Fudenberg and Tirole, Section 12.3.

More information

Atomic m-register operations

Atomic m-register operations Atomic m-register operations Michael Merritt Gadi Taubenfeld December 15, 1993 Abstract We investigate systems where it is possible to access several shared registers in one atomic step. We characterize

More information

1 Equilibrium Comparisons

1 Equilibrium Comparisons CS/SS 241a Assignment 3 Guru: Jason Marden Assigned: 1/31/08 Due: 2/14/08 2:30pm We encourage you to discuss these problems with others, but you need to write up the actual homework alone. At the top of

More information

Equilibria in Games with Weak Payoff Externalities

Equilibria in Games with Weak Payoff Externalities NUPRI Working Paper 2016-03 Equilibria in Games with Weak Payoff Externalities Takuya Iimura, Toshimasa Maruta, and Takahiro Watanabe October, 2016 Nihon University Population Research Institute http://www.nihon-u.ac.jp/research/institute/population/nupri/en/publications.html

More information

Algorithms, Games, and Networks January 17, Lecture 2

Algorithms, Games, and Networks January 17, Lecture 2 Algorithms, Games, and Networks January 17, 2013 Lecturer: Avrim Blum Lecture 2 Scribe: Aleksandr Kazachkov 1 Readings for today s lecture Today s topic is online learning, regret minimization, and minimax

More information

Lower Bounds for Achieving Synchronous Early Stopping Consensus with Orderly Crash Failures

Lower Bounds for Achieving Synchronous Early Stopping Consensus with Orderly Crash Failures Lower Bounds for Achieving Synchronous Early Stopping Consensus with Orderly Crash Failures Xianbing Wang 1, Yong-Meng Teo 1,2, and Jiannong Cao 3 1 Singapore-MIT Alliance, 2 Department of Computer Science,

More information

Failure detectors Introduction CHAPTER

Failure detectors Introduction CHAPTER CHAPTER 15 Failure detectors 15.1 Introduction This chapter deals with the design of fault-tolerant distributed systems. It is widely known that the design and verification of fault-tolerent distributed

More information

Computing with voting trees

Computing with voting trees Computing with voting trees Jennifer Iglesias Nathaniel Ince Po-Shen Loh Abstract The classical paradox of social choice theory asserts that there is no fair way to deterministically select a winner in

More information

Approximation Algorithms and Mechanism Design for Minimax Approval Voting

Approximation Algorithms and Mechanism Design for Minimax Approval Voting Approximation Algorithms and Mechanism Design for Minimax Approval Voting Ioannis Caragiannis RACTI & Department of Computer Engineering and Informatics University of Patras, Greece caragian@ceid.upatras.gr

More information

Randomized Protocols for Asynchronous Consensus

Randomized Protocols for Asynchronous Consensus Randomized Protocols for Asynchronous Consensus Alessandro Panconesi DSI - La Sapienza via Salaria 113, piano III 00198 Roma, Italy One of the central problems in the Theory of (feasible) Computation is

More information

SF2972 Game Theory Exam with Solutions March 15, 2013

SF2972 Game Theory Exam with Solutions March 15, 2013 SF2972 Game Theory Exam with s March 5, 203 Part A Classical Game Theory Jörgen Weibull and Mark Voorneveld. (a) What are N, S and u in the definition of a finite normal-form (or, equivalently, strategic-form)

More information

Introduction to Game Theory

Introduction to Game Theory COMP323 Introduction to Computational Game Theory Introduction to Game Theory Paul G. Spirakis Department of Computer Science University of Liverpool Paul G. Spirakis (U. Liverpool) Introduction to Game

More information

Consistent Fixed Points and Negative Gain

Consistent Fixed Points and Negative Gain 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies Consistent Fixed Points and Negative Gain H. B. Acharya The University of Texas at Austin acharya @ cs.utexas.edu

More information

CS364A: Algorithmic Game Theory Lecture #16: Best-Response Dynamics

CS364A: Algorithmic Game Theory Lecture #16: Best-Response Dynamics CS364A: Algorithmic Game Theory Lecture #16: Best-Response Dynamics Tim Roughgarden November 13, 2013 1 Do Players Learn Equilibria? In this lecture we segue into the third part of the course, which studies

More information

1 Extensive Form Games

1 Extensive Form Games 1 Extensive Form Games De nition 1 A nite extensive form game is am object K = fn; (T ) ; P; A; H; u; g where: N = f0; 1; :::; ng is the set of agents (player 0 is nature ) (T ) is the game tree P is the

More information

Prioritized Sweeping Converges to the Optimal Value Function

Prioritized Sweeping Converges to the Optimal Value Function Technical Report DCS-TR-631 Prioritized Sweeping Converges to the Optimal Value Function Lihong Li and Michael L. Littman {lihong,mlittman}@cs.rutgers.edu RL 3 Laboratory Department of Computer Science

More information

Chapter 3 Deterministic planning

Chapter 3 Deterministic planning Chapter 3 Deterministic planning In this chapter we describe a number of algorithms for solving the historically most important and most basic type of planning problem. Two rather strong simplifying assumptions

More information

Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1

Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1 Decentralized Control of Discrete Event Systems with Bounded or Unbounded Delay Communication 1 Stavros Tripakis 2 VERIMAG Technical Report TR-2004-26 November 2004 Abstract We introduce problems of decentralized

More information

SINCE the passage of the Telecommunications Act in 1996,

SINCE the passage of the Telecommunications Act in 1996, JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. XX, NO. XX, MONTH 20XX 1 Partially Optimal Routing Daron Acemoglu, Ramesh Johari, Member, IEEE, Asuman Ozdaglar, Member, IEEE Abstract Most large-scale

More information

Optimal Resilience Asynchronous Approximate Agreement

Optimal Resilience Asynchronous Approximate Agreement Optimal Resilience Asynchronous Approximate Agreement Ittai Abraham, Yonatan Amit, and Danny Dolev School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel {ittaia, mitmit,

More information

DIMACS Technical Report March Game Seki 1

DIMACS Technical Report March Game Seki 1 DIMACS Technical Report 2007-05 March 2007 Game Seki 1 by Diogo V. Andrade RUTCOR, Rutgers University 640 Bartholomew Road Piscataway, NJ 08854-8003 dandrade@rutcor.rutgers.edu Vladimir A. Gurvich RUTCOR,

More information

Topics of Algorithmic Game Theory

Topics of Algorithmic Game Theory COMP323 Introduction to Computational Game Theory Topics of Algorithmic Game Theory Paul G. Spirakis Department of Computer Science University of Liverpool Paul G. Spirakis (U. Liverpool) Topics of Algorithmic

More information

Friendship and Stable Matching

Friendship and Stable Matching Friendship and Stable Matching Elliot Anshelevich Onkar Bhardwaj Martin Hoefer June 10, 2013 Abstract We study stable matching problems in networks where players are embedded in a social context, and may

More information

Pay or Play. Abstract. 1 Introduction. Moshe Tennenholtz Technion - Israel Institute of Technology and Microsoft Research

Pay or Play. Abstract. 1 Introduction. Moshe Tennenholtz Technion - Israel Institute of Technology and Microsoft Research Pay or Play Sigal Oren Cornell University Michael Schapira Hebrew University and Microsoft Research Moshe Tennenholtz Technion - Israel Institute of Technology and Microsoft Research Abstract We introduce

More information

R u t c o r Research R e p o r t. On Acyclicity of Games with Cycles. Daniel Andersson a Vladimir Gurvich b Thomas Dueholm Hansen c

R u t c o r Research R e p o r t. On Acyclicity of Games with Cycles. Daniel Andersson a Vladimir Gurvich b Thomas Dueholm Hansen c R u t c o r Research R e p o r t On Acyclicity of Games with Cycles Daniel Andersson a Vladimir Gurvich b Thomas Dueholm Hansen c RRR 8-8, November 8 RUTCOR Rutgers Center for Operations Research Rutgers

More information

Congestion Games with Load-Dependent Failures: Identical Resources

Congestion Games with Load-Dependent Failures: Identical Resources Congestion Games with Load-Dependent Failures: Identical Resources Michal Penn Technion - IIT Haifa, Israel mpenn@ie.technion.ac.il Maria Polukarov Technion - IIT Haifa, Israel pmasha@tx.technion.ac.il

More information

Lecture notes for Analysis of Algorithms : Markov decision processes

Lecture notes for Analysis of Algorithms : Markov decision processes Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with

More information

1 More finite deterministic automata

1 More finite deterministic automata CS 125 Section #6 Finite automata October 18, 2016 1 More finite deterministic automata Exercise. Consider the following game with two players: Repeatedly flip a coin. On heads, player 1 gets a point.

More information