Computing Equilibria in Anonymous Games


Constantinos Daskalakis and Christos Papadimitriou
University of California, Berkeley, Computer Science

Abstract

We present efficient approximation algorithms for finding Nash equilibria in anonymous games, that is, games in which the players' utilities, though different, do not differentiate between other players. Our results pertain to such games with many players but few strategies. We show that any such game has an approximate pure Nash equilibrium, computable in polynomial time, with approximation O(s²λ), where s is the number of strategies and λ is the Lipschitz constant of the utilities. Finally, we show that there is a PTAS for finding an ɛ-approximate Nash equilibrium when the number of strategies is two.

1 Introduction

Will you come to FOCS? This decision depends on many factors, but one of them is how many other theoreticians will come. Now, whether each of them will come depends in a small way on what you will do, and hence this aspect of the decision to go to FOCS is game-theoretic, and in fact of a particular specialized sort explored in this paper: Each player has a small number of strategies (in this example, two), and the utility of each player depends on her/his own decision, as well as on how many other players will choose each of these strategies. It is crucial that the utilities do not depend on the identity of the players making these choices (that is, we ignore here your interest in whether your friends will come). Such games are called anonymous games, and in this paper we give two polynomial algorithms for computing approximate equilibria in these games. In fact, our algorithms work in a generalized framework: The players can be divided into a few types (e.g., colleagues, students, big shots, etc.), and your utility depends on how many of the players of each type choose each of the strategies. (The authors were supported through NSF grant CCF, a gift from Yahoo! Research, and a MICRO grant.)
Notice that this is a much more general framework than that of symmetric games, where all players are identical; here each player can have a very individual way of evaluating the situation, and her/his utility can depend on the choices of others in an individual, arbitrary way; in particular, there may be no monotonicity: For example, a player may prefer a mob with 1000 attendees, mostly students, to a tiny workshop of 20, while a medium-sized conference of 200 may be more attractive than either; a second player may order these in the exact opposite way. Anonymous games comprise a broad and well-studied class of games (see e.g. [4, 5, 16] for recent work on this subject by economists) which are of special interest to the Algorithmic Game Theory community, as they capture important aspects of auctions and markets, as well as of Internet congestion. Our interest lies in computing Nash equilibria in such games.

The problem of computing Nash equilibria in a game was recently shown to be PPAD-complete in general [9], even for the two-player case [6]. Since that negative result, the research effort in this area was, quite predictably, directed towards two goals: computing approximate equilibria (mixed strategy profiles from which no player has incentive more than ɛ to defect), and exploring the algorithmic properties of special cases of games. The approximation front has been especially fertile, with several positive and negative results shown recently [19, 7, 18, 10, 11, 13]. What is known about special cases of the Nash equilibrium problem? Several important cases are now known to be generic; these include, beyond the aforementioned 2-player games, win-lose games (games with 0-1 utilities) [1], and several kinds of succinctly representable games, such as graphical games and anonymous games (actually, the even more specialized symmetric games) [14].
For anonymous games, the genericity argument goes as follows: Any general game can be made anonymous by expanding the strategy space so that each player first chooses an identity (and is punished if s/he fails to choose her/his own) and then a strategy; it is easy to see that, in this expanded strategy space, the utilities can be rendered in the anonymous fashion. Note, however, that this manoeuvre requires a large strategy space; in contrast, for other succinct games such as the graphical ones, genericity persists even when the number of strategies is two [15]. Are anonymous games easier when the number of strategies is fixed? We shall see that this is indeed the case.

How about tractable special cases? Here there is a relative poverty of results. The zero-sum two-player case is, of course, well known [22]. It was generalized in [17] to low-rank games (the matrix A + B is not zero, but has fixed rank), a case in which a PTAS for the Nash equilibrium problem is possible. It was also known that symmetric games with about logarithmically few strategies per player can be solved exactly in polynomial time, by a reduction to the theory of real closed fields [23]. For congestion games, we can find in polynomial time a pure Nash equilibrium if the game is a symmetric network congestion game [12], and an approximate pure Nash equilibrium if the congestion game is symmetric (but not necessarily network) and the utilities are somehow continuous [8]. Finally, in [20] Milchtaich showed that anonymous congestion games in graphs consisting of parallel links (equivalently, anonymous games in which the utility of a player, for each choice made by the player, is a nondecreasing function of the number of players who have chosen the same strategy) have pure Nash equilibria which can be computed in polynomial time by a natural greedy algorithm.

In this paper we prove two positive approximation results for anonymous games with a fixed number s of strategies. Our first result states that any such game has a pure Nash equilibrium that is ɛ-approximate, where ɛ is bounded from above by a function of the form f(s)·λ.
Here λ is the Lipschitz constant of the utility functions, a measure of continuity of the utility functions of the players, assumed to be such that for any partitions x and y of the players into the s strategies, |u(x) − u(y)| ≤ λ·||x − y||_1. To get a sense of scale for λ, note that the arguments of u_i range from 0 to n, and so, if u_i were a linear function with range [0, 1], λ would be at most 1/n. f(s) is (unfortunately, still at the time of writing) a quadratic function of the number of strategies. That ɛ cannot be smaller than λ is easy to see (the matching pennies problem provides an easy example); the results of [8] for congestion games show a similar dependence on λ (what they call the bounded jump property). We conjecture that the dependence on s can be improved to sλ. Our proof uses Brouwer's fixed point theorem on an interpolation of the discrete best-response function to identify a simplex of pure strategy profiles, and from that produce, by a geometric argument, a pure strategy profile that is ɛ-approximate, with ɛ bounded as above.

Our second result is a PTAS for the case of two strategies. The main idea is to round the mixed strategies of the players to some nearby multiple of ɛ; then each such quantized mixed strategy can be considered a pure strategy, and, with finitely many (in particular, O(1/ɛ)) pure strategies, an anonymous game can be solved exhaustively in time polynomial in n, the number of players. The only problem is: why should the expected utilities before and after the quantization be close? Here we rely on a probabilistic lemma (Theorem 3.1) that may be of much more general interest: Given n Bernoulli random variables with probabilities p_1,..., p_n, there is a way to round the probabilities to multiples of 1/k, for any k, so that the distribution of the sum of these n variables is affected only by an additive O(1/√k) in total variational distance (no dependence on n).
This implies that the expected utilities of the quantized version are within an additive ±O(1/√k) of the original ones, and an O(n^{1/ɛ²}) PTAS for two-strategy anonymous games is immediate. We feel that a more sophisticated proof of the same kind can establish a similar result for multinomial distributions, thus extending our PTAS to games with any fixed number of strategies.

1.1 Definitions and Notation

An anonymous game G = (n, s, {u^p_i}) consists of a set [n] = {1,..., n} of n players, a set [s] = {1,..., s} of s strategies, and a set of ns utility functions, where u^p_i, with p ∈ [n] and i ∈ [s], is the utility of player p when she plays strategy i, a function mapping the set of partitions Π^s_n = {(x_1,..., x_s) : x_i ∈ N_0 for all i ∈ [s], Σ_{i=1}^s x_i = n − 1} to the interval [0, 1]. (In the literature on Nash approximation, utilities are usually normalized in this way, so that the approximation error is additive.) Our working assumptions are that n is large and s is fixed; notice that, in this case, anonymous games are succinctly representable [23], in the sense that their representation requires specifying O(n^{s+1}) numbers, as opposed to the ns^n numbers required for general games (arguably, succinct games are the only multiplayer games that are computationally meaningful; see [23] for an extensive discussion of this point).

For our approximate pure Nash equilibrium result we shall also be assuming that the utility functions are continuous, in the following sense: There is a real λ > 0, presumably very small, such that |u^p_i(x) − u^p_i(y)| ≤ λ·||x − y||_1 for every p ∈ [n], i ∈ [s], and x, y ∈ Π^s_n. This continuity concept is similar to the bounded jump assumption of [8]. The convex hull of the set Π^s_n will be denoted by Δ^s_n = {(x_1,..., x_s) : x_i ≥ 0 for i = 1,..., s, Σ_{i=1}^s x_i = n − 1}.

A pure strategy profile in such a game is a mapping S from [n] to [s]. A pure strategy profile S is an ɛ-approximate pure Nash equilibrium, where ɛ > 0, if, for all p ∈ [n], u^p_{S(p)}(x[S, p]) + ɛ ≥ u^p_i(x[S, p]) for all i ∈ [s], where x[S, p] ∈ Π^s_n is the partition (x_1,..., x_s) such that x_i is the number of players q ∈ [n] − {p} with S(q) = i.

A mixed strategy profile is a set of n distributions (δ_p)_{p∈[n]} over [s]. A mixed strategy profile is an ɛ-approximate mixed Nash equilibrium if, for all p ∈ [n] and j ∈ [s], E_{δ_1,...,δ_n}[u^p_i(x)] + ɛ ≥ E_{δ_1,...,δ_n}[u^p_j(x)], where, for the purposes of the expectation, i is drawn from [s] according to δ_p, and x is drawn from Π^s_n by drawing n − 1 random samples from [s] independently according to the distributions δ_q, q ≠ p, and forming the induced partition.

Anonymous games can be extended to ones in which there is also a finite number of types of players, and utilities depend on how each type is partitioned into strategies; all our algorithms, being exhaustive, can be easily generalized to this framework, with the number of types multiplying the exponent.

2 Approximate Pure Equilibria

In this section we prove the following result:

Theorem 2.1 In any anonymous game with s strategies and Lipschitz constant λ there is an ɛ-approximate pure Nash equilibrium, where ɛ = (2s² + 6)λ.

Proof: We first define a function φ from Π^s_n to itself: For any x ∈ Π^s_n, φ(x) is defined to be the (y_1,..., y_s) ∈ Π^s_n such that, for all i ∈ [s], y_i is the number of those players p among {1,..., n − 1} (notice that player n is excluded) such that, for all j < i, u^p_i[x] > u^p_j[x] and, for all j > i, u^p_i[x] ≥ u^p_j[x]. In other words, φ(x) is the partition induced among the first n − 1 players by their best response to x, where ties are broken lexicographically.
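To make the definitions concrete, here is a small executable sketch (our own illustration, not from the paper): a hypothetical two-strategy "conference attendance" anonymous game, in which player p wants to attend (strategy 1) only if at least thresholds[p] of the other players attend, together with a brute-force check of the ɛ-approximate pure Nash condition. The thresholds and payoff values are invented for illustration.

```python
import itertools

# Hypothetical 2-strategy anonymous game: player p attends (strategy 1)
# iff at least thresholds[p] of the other players attend; staying home
# (strategy 0) is a safe middling payoff. Utilities lie in [0, 1].
n = 5
thresholds = [1, 2, 3, 4, 5]

def utility(p, i, x):
    # x = number of the *other* n-1 players choosing strategy 1,
    # i.e. the partition x[S, p] collapsed to one number (since s = 2)
    if i == 1:
        return 1.0 if x >= thresholds[p] else 0.0
    return 0.5

def is_eps_pure_nash(S, eps=0.0):
    for p in range(n):
        x = sum(S[q] for q in range(n) if q != p)
        if utility(p, S[p], x) + eps < max(utility(p, i, x) for i in (0, 1)):
            return False
    return True

# Exhaustive scan over all 2^n pure profiles (fine for tiny n; the paper
# instead scans the O(n^s) partitions, which is what makes the search
# linear in the input size).
equilibria = [S for S in itertools.product((0, 1), repeat=n)
              if is_eps_pure_nash(S)]
print(equilibria)  # only the all-zero profile survives
```

With these thresholds the only pure equilibrium is the all-zero profile: any nonempty set of attendees contains a player whose threshold exceeds the number of other attendees.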
We next interpolate φ to obtain a continuous function φ̂ from Δ^s_n, the convex hull of Π^s_n, to itself, as follows: For each x ∈ Δ^s_n, break x into its integer and fractional parts, x = x_I + x_F, where x_I is an integer point and 0 ≤ x_{F,i} < 1 for all i = 1,..., s. Let C[x_I] be the cell of x_I: the set of all x' ∈ Π^s_n such that, for all 1 ≤ i ≤ s, x'_i = x_{I,i} or x'_i = x_{I,i} + 1. Then x can be written as a convex combination of the elements of C[x_I]: x = Σ_{x'∈C[x_I]} α_{x'} x'. We define φ̂(x) to be Σ_{x'∈C[x_I]} α_{x'} φ(x'). It is possible to define the interpolation at each point x ∈ Δ^s_n in a consistent way, so that the resulting φ̂ is a continuous function from the compact set Δ^s_n to itself, and so, by Brouwer's Theorem, it must have a fixed point, that is, a point x* ∈ Δ^s_n such that φ̂(x*) = x*. That is,

Σ_{x∈C[x*_I]} α_x φ(x) = x* = Σ_{x∈C[x*_I]} α_x x.   (1)

By Carathéodory's lemma, the left-hand side of equation (1) can be expressed as a convex combination of only s of the φ(x)'s, x* = Σ_{ℓ=1}^s γ_ℓ φ(x_ℓ), (2) and it is easy to see that this can be rewritten as

x* = φ(x_1) + Σ_{ℓ=2}^s γ_ℓ (φ(x_ℓ) − φ(x_1)),   (3)

for some γ_ℓ ≥ 0 with Σ_ℓ γ_ℓ = 1. Recall that, in order to prove the theorem, we need to exhibit an ɛ-approximate pure strategy profile. If x* were an integer point, then we would be almost done (modulo the n-th player, of whom we take care last), and x* itself (actually, the strategy profile suggested by the partition φ(x*)) would be essentially a pure Nash equilibrium, because of the equation φ̂(x*) = x*. But in general x* will be fractional, and the various φ(x_ℓ)'s will be very far from x* (except that they happen to have x* in their convex hull). Our plan is to show that x_1, a vertex in the cell of x*, is an approximate pure Nash equilibrium (again, considered as a pure strategy profile, and forgetting for a moment the n-th player). The term φ(x_1) in equation (3) can be seen as a pure strategy profile P_1: each of the n − 1 players chooses the strategy that is her/his best response to x_1. Therefore, in this strategy profile everybody would be happy if everybody else played according to x_1. The problem is, of course, that φ(x_1) can be very far from x_1.
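The interpolation is needed because φ itself may have no integer fixed point. The crowding game below is our own toy example, not the paper's: every player likes strategy 1 only while it is uncrowded, so the best-response partition oscillates between the two extreme partitions and φ has no fixed point among the integer points.

```python
n, s = 5, 2

def u(p, i, x):
    # x = (x0, x1): how the other n-1 players split between the strategies.
    # Strategy 1 is attractive only while fewer than 2 others choose it.
    if i == 1:
        return 1.0 if x[1] < 2 else 0.0
    return 0.5

def phi(x):
    # Partition induced by the best responses of players 1..n-1 to x,
    # ties broken lexicographically (smallest strategy index wins).
    y = [0] * s
    for p in range(n - 1):
        best = min(range(s), key=lambda i: (-u(p, i, x), i))
        y[best] += 1
    return tuple(y)

partitions = [(n - 1 - x1, x1) for x1 in range(n)]
fixed = [x for x in partitions if phi(x) == x]
print(fixed)  # [] -- phi cycles, hence Brouwer is applied to the interpolation
```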
We shall next use equation (3) to move it close to x_1 (more precisely, close to x*, which we know is 2s-close in L1 distance to x_1) without changing the utilities much. Looking at one of the other terms of (3), γ_ℓ (φ(x_ℓ) − φ(x_1)), we can think of it as the act of switching certain players from their best response to x_1 to their best response to x_ℓ. The crucial observation is that, since x_1 and x_ℓ are at most 2s apart in L1 distance (they both belong to the same cell), the change in utility for the switching players would be at most 4sλ. So, equation (3) suggests that a strategy profile close to x* can be obtained from P_1 by combining these s − 1 flows, with little harm in utility for all players involved. The problem is how to combine them so that the right individual players are switched; the situation is akin to integer multicommodity flow. We write each flow φ(x_ℓ) − φ(x_1) as the sum of O(s²) terms of the form f^ℓ_{i,i'}, signifying the number of individual players moved from strategy i (negative components) to strategy i' (positive components). We know that, for each such nonzero flow, there is a set of players S^ℓ_{i,i'} ⊆ [n] which can be moved with only 4sλ loss in utility. The union of the s − 1 sets S^ℓ_{i,i'} is denoted by S_{i,i'}. Now we need this lemma, which can be proved by a maximum-flow argument:

Lemma 2.2 For any i, i' there are disjoint subsets T^ℓ_{i,i'} ⊆ S_{i,i'} with cardinality ⌈γ_ℓ |S^ℓ_{i,i'}|⌉, ℓ = 2,..., s.

Thus, by moving the players in T^ℓ_{i,i'} from i to i' for ℓ = 2,..., s, we obtain from strategy profile P_1 a new strategy profile P_2 in which each player plays a strategy within 4sλ of her best response to x_1, and such that the corresponding partition x_2 is, by equation (3) and the roundings in the lemma, at most s·⌈s/2⌉ away from x*, and hence at most 2s more away from x_1. By a slightly more sophisticated argument, involving an improved lemma and taking into account that Σ_{ℓ=2}^s γ_ℓ can be assumed to be at most (s−1)/s by choosing γ_1 to be the largest coefficient, whose details we omit, this can be improved to s²/2. Furthermore, we have argued that, for all players (except for the last, of course), P_2 is a 4sλ-approximate response to x_1. Thus x_2 is a (2s² + 4)λ-approximate best response to itself. Finally, we turn to player n. Adding the best response of player n to x_2, and subtracting the strategy of some player i in x_2, we get a profile that is two away, in L1 distance, from x_2, thus making it a (2s² + 6)λ-approximate Nash equilibrium and completing the proof.

Since Π^s_n has O(n^s) points, and this is the length of the input, the algorithmic implication is immediate:

Corollary 2.3 In any anonymous game, an ɛ-approximate pure Nash equilibrium, where ɛ is as in the Theorem, can be found in linear time.

3 Approximate Mixed Nash Equilibria

3.1 A Probabilistic Lemma

We start with a definition. The total variation distance between two distributions P and Q supported on a finite set A is

||P − Q|| = (1/2) Σ_{α∈A} |P(α) − Q(α)|.

Theorem 3.1 Let {p_i}_{i=1}^n be arbitrary probabilities, p_i ∈ [0, 1] for i = 1,..., n, let {X_i}_{i=1}^n be independent indicator random variables such that X_i has expectation E[X_i] = p_i, and let k be a positive integer. Then there exists another set of probabilities {q_i}_{i=1}^n, q_i ∈ [0, 1], i = 1,..., n, which satisfy the following properties:

1. |q_i − p_i| = O(1/k), for all i = 1,..., n;
2. q_i is an integer multiple of 1/k, for all i = 1,..., n;
3. if {Y_i}_{i=1}^n are independent indicator random variables such that Y_i has expectation E[Y_i] = q_i, then

||Σ_i X_i − Σ_i Y_i|| = O(1/√k),

and, moreover, for all j = 1,..., n,

||Σ_{i≤j} X_i − Σ_{i≤j} Y_i|| = O(1/√k).

From this, the main result of this section follows:

Corollary 3.2 There is a PTAS for the mixed Nash equilibrium problem for two-strategy anonymous games.

Proof: Let (p_1,..., p_n) be a mixed Nash equilibrium of the game, p_i being the probability with which player i plays strategy 1. We claim that (q_1,..., q_n), where the q_i's are the multiples of 1/k specified by Theorem 3.1, constitute an O(1/√k)-approximate mixed Nash equilibrium. Indeed, for every player i ∈ [n] and every strategy m ∈ {1, 2} for that player, let us track the change in the expected utility of the player when the distribution defined by the {p_j}_{j≠i} is replaced by the distribution defined by the {q_j}_{j≠i}. It is not hard to see that the absolute change is bounded by the total variation distance between the distributions of Σ_{j≠i} X_j and Σ_{j≠i} Y_j, where X_j, Y_j are indicators of whether player j plays strategy 1 in the distribution defined by the p_j's and the q_j's respectively, i.e. E[X_j] = p_j and E[Y_j] = q_j. Hence, the change in utility is at most O(1/√k), which implies that the q_i's constitute an O(1/√k)-approximate Nash equilibrium of the game, modulo the following observation: with a slight modification in the proof of Theorem 3.1 we can make sure that, when switching from the p_i's to the q_i's, for every i, the support of q_i is a subset of the support of p_i. To compute a quantized approximate Nash equilibrium of the original game, we proceed to define a related (k + 1)-strategy game, where k = O(1/ɛ²), and treat the problem as a pure Nash equilibrium problem. It is not hard to see that the latter is efficiently solvable if the number of strategies is a constant.
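The pivotal inequality in the corollary's proof, that a player's expected utility for either strategy moves by at most the total variation distance between the leave-one-out sums Σ_{j≠i} X_j and Σ_{j≠i} Y_j, is easy to verify numerically. Everything below (the game's utility function, the naive rounding, the parameters) is our own illustrative choice; the inequality itself holds for any [0, 1]-valued utilities.

```python
import random

random.seed(1)
n, k = 8, 10
p = [random.random() for _ in range(n)]
q = [round(pi * k) / k for pi in p]  # naive nearest-multiple rounding

def sum_dist(probs):
    # exact distribution of a sum of independent Bernoullis (dynamic program)
    d = [1.0]
    for pr in probs:
        d = [(d[t] if t < len(d) else 0.0) * (1 - pr)
             + (d[t - 1] * pr if t > 0 else 0.0)
             for t in range(len(d) + 1)]
    return d

def tv(a, b):
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

def exp_util(i, dist):
    # expected utility of strategy i against the given distribution of the
    # number of *other* players playing strategy 1 (utilities in [0, 1])
    u = lambda i, x: x / (n - 1) if i == 1 else 0.5  # illustrative utility
    return sum(dist[x] * u(i, x) for x in range(len(dist)))

violations = 0
for pl in range(n):
    dp = sum_dist([p[j] for j in range(n) if j != pl])
    dq = sum_dist([q[j] for j in range(n) if j != pl])
    gap = tv(dp, dq)  # TV distance between the leave-one-out sums
    for i in (0, 1):
        if abs(exp_util(i, dp) - exp_util(i, dq)) > gap + 1e-9:
            violations += 1
print(violations)  # 0: utility changes never exceed the TV distance
```

The rounding used here is the naive one; the inequality merely bounds the damage by the TV gap, and the content of Theorem 3.1 is that a smarter rounding makes that gap small independently of n.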
The new game is defined as follows: the i-th pure strategy, i = 0,..., k, corresponds to a player in the original game playing strategy 1 with probability i/k. Naturally, the payoffs resulting from a pure strategy profile in the new game are defined to be equal to the corresponding payoffs in the original game, by the translation of the pure strategy profile of the former into a mixed strategy profile of the latter. (Recall that all utilities have been normalized to take values in [0, 1].) In particular, for any player p, we can compute its payoff given any strategy i ∈ {0,..., k} for that player and any partition x ∈ Π^{k+1}_n of the other players into the k + 1 strategies, in time n^{O(1/ɛ²)} overall, by a straightforward dynamic programming algorithm; see for example [24]. The remaining details are omitted.

Remark: Note that it is crucial for the proof of the Corollary that the bound on the total variation distance between Σ_i X_i and Σ_i Y_i in the statement of the theorem does not depend on the number n of random variables which are being rounded, but only on the accuracy k of the rounding. Because of this requirement, several simple methods of rounding are easily seen to fail:

Rounding to the Closest Multiple of 1/k: An easy counterexample for this method arises when p_i := 1/n for all i. In this case, the trivial rounding would make q_i := 0 for all i, and the total variation distance between Σ_i X_i and Σ_i Y_i would become arbitrarily close to 1 − 1/e as n goes to infinity.

Randomized Rounding: An argument employing the probabilistic method could start by independently rounding each p_i to some random q_i which is a multiple of 1/k, in such a way that E[q_i] = p_i. This seems promising since, by independence, for any ℓ = 0,..., n, the random variable Pr[Σ_i Y_i = ℓ], which is a function of the q_i's, has the correct expectation, i.e. E[Pr[Σ_i Y_i = ℓ]] = Pr[Σ_i X_i = ℓ]. The trouble is that the expectation of the random variable Pr[Σ_i Y_i = ℓ] is very small: less than 1 for all ℓ and, in fact, of the order of O(1/n) for many terms. Moreover, the function itself comprises sums of products of the random variables q_i, with exponentially many terms for some values of ℓ. Concentration seems to require an accuracy k which scales polynomially in n.

Proof Technique: We follow instead a completely different approach, which aims at directly approximating the distribution of Σ_i X_i.
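The failure of nearest-multiple rounding is easy to reproduce. With p_i = 1/n every probability rounds to zero, so Σ Y_i is identically 0, while Σ X_i is approximately Poisson(1); the total variation distance therefore tends to 1 − 1/e ≈ 0.632 regardless of k. A sketch, with the exact distribution of the sum computed by the standard dynamic program:

```python
n, k = 1000, 10
p = [1.0 / n] * n                    # tiny probabilities, as in the counterexample
q = [round(pi * k) / k for pi in p]  # nearest multiple of 1/k: all zeros

def sum_dist(probs):
    # exact distribution of a sum of independent Bernoullis (dynamic program)
    d = [1.0]
    for pr in probs:
        d = [(d[t] if t < len(d) else 0.0) * (1 - pr)
             + (d[t - 1] * pr if t > 0 else 0.0)
             for t in range(len(d) + 1)]
    return d

dx, dy = sum_dist(p), sum_dist(q)
tv = 0.5 * sum(abs(a - b) for a, b in zip(dx, dy))
print(round(tv, 3))  # ~0.632, approaching 1 - 1/e; increasing k does not help
```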
The intuition is the following: The distribution of Σ_i X_i should be close, in total variation distance, to a Poisson distribution of the same mean Σ_i p_i. Hence it seems that, if we define q_i's which are multiples of 1/k in such a way that the means Σ_i p_i and Σ_i q_i are close, then the distribution of Σ_i Y_i should be close in total variation distance to the same Poisson distribution, and hence to the distribution of Σ_i X_i by the triangle inequality. There are several complications, of course, the main one being that the distribution of Σ_i X_i can be well approximated by a Poisson distribution of the same mean only when the p_i's are relatively small. When the p_i's take arbitrary values in [0, 1] and n scales, the Poisson distribution can be very far from the distribution of Σ_i X_i. In fact, we wouldn't expect the Poisson distribution to approximate the distribution of arbitrary sums of indicators, since its mean and variance are the same. To counter this, we resort to a special kind of distributions, called translated Poisson distributions, which are Poisson distributions appropriately shifted on their domain. An arbitrary sum of indicators can now be approximated as follows: a Poisson distribution is defined with mean (and, hence, variance) equal to the variance of the sum of the indicators; then the distribution is appropriately shifted on its domain so that its new mean coincides with the mean of the sum of the indicators being approximated. The translated Poisson approximation will outperform the Poisson approximation for intermediate values of the p_i's, while the Poisson approximation will remain better near the boundaries, i.e. for values of p_i close to 0 or 1. Even for the intermediate region of values of the p_i's, the translated Poisson approximation is not sufficient by itself, since it only succeeds when the number of indicators being summed is relatively large compared to the minimum expectation.
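The advantage described here can be checked numerically. The sketch below implements the translated Poisson distribution directly (a Poisson with the target variance, shifted so its mean matches, as in Definition 3.4 below) and compares it against the plain Poisson of matched mean, on a sum of indicators with intermediate expectations; the sample size and probability range are our own illustrative choices.

```python
import math, random

random.seed(7)

def sum_dist(probs):
    # exact distribution of a sum of independent Bernoullis
    d = [1.0]
    for pr in probs:
        d = [(d[t] if t < len(d) else 0.0) * (1 - pr)
             + (d[t - 1] * pr if t > 0 else 0.0)
             for t in range(len(d) + 1)]
    return d

def poisson_pmf(lam, m):
    return [math.exp(-lam) * lam ** t / math.factorial(t) for t in range(m)]

def tp_pmf(mu, var, m):
    # translated Poisson: Poisson(var + frac) shifted by floor(mu - var)
    shift = math.floor(mu - var)
    lam = var + (mu - var - shift)
    po = poisson_pmf(lam, m)
    return [po[t - shift] if 0 <= t - shift < m else 0.0 for t in range(m)]

def tv(a, b):
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

probs = [random.uniform(0.3, 0.7) for _ in range(100)]  # "intermediate" p_i's
mu = sum(probs)
var = sum(p * (1 - p) for p in probs)
exact = sum_dist(probs)
m = len(exact)

tv_po = tv(exact, poisson_pmf(mu, m))  # mean matches, variance badly off
tv_tp = tv(exact, tp_pmf(mu, var, m))  # mean and variance both match
print(round(tv_po, 3), round(tv_tp, 3))
```

On instances like this the translated Poisson error is several times smaller; near the boundaries (p_i close to 0 or 1) the comparison flips, which is exactly why the proof splits [0, 1] into regions.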
A different argument is required when this is not the case. Our bounding technique has to interleave these considerations in a very delicate fashion to achieve the approximation result. At a high level, we treat separately the X_i's with small, medium or large expectation; in particular, for some α ∈ (0, 1), to be fixed later, we define the following subintervals of [0, 1]:

1. L := [0, 1/k^α): interval of small expectations;
2. M1 := [1/k^α, 1/2): first interval of medium expectations;
3. M2 := [1/2, 1 − 1/k^α): second interval of medium expectations;
4. H := [1 − 1/k^α, 1]: interval of high expectations.

Denoting L := {i : E[X_i] ∈ L}, we establish (Lemma 3.9) that

||Σ_{i∈L} X_i − Σ_{i∈L} Y_i|| = O(1/k^α),

and similarly for M1 (Lemma 3.12). Symmetric arguments (setting X'_i = 1 − X_i and Y'_i = 1 − Y_i) imply the same bounds for the intervals M2 and H. Therefore, an application of the coupling lemma implies that

||Σ_i X_i − Σ_i Y_i|| = O(1/k^{1/4}),

which concludes the proof. The details of the proof are postponed to Section 3.3. The proof for the partial sums Σ_{i≤j} X_i and Σ_{i≤j} Y_i follows easily from the analysis of Section 3.3, and its details are skipped in this extended abstract. The next section provides the required background on Poisson approximations.

3.2 Poisson Approximations

The following theorem is classical in the theory of Poisson approximations.

Theorem 3.3 ([2]) Let J_1,..., J_n be a sequence of independent random indicators with E[J_i] = p_i. Then

||Σ_{i=1}^n J_i − Poisson(Σ_{i=1}^n p_i)|| ≤ (Σ_{i=1}^n p_i²) / (Σ_{i=1}^n p_i).

As discussed in the previous section, the above bound is sharp when the indicators have small expectations, but loose when the indicators are arbitrary. The following approximation bound becomes sharp when the previous one is not. But first let us formally define the translated Poisson distribution.

Definition 3.4 ([25]) We say that an integer random variable Y has a translated Poisson distribution with parameters µ and σ², and write L(Y) = TP(µ, σ²), if L(Y − ⌊µ − σ²⌋) = Poisson(σ² + {µ − σ²}), where {µ − σ²} represents the fractional part of µ − σ².

Theorem 3.5 provides an approximation result for the translated Poisson distribution, shown using Stein's method.

Theorem 3.5 ([25]) Let J_1,..., J_n be a sequence of independent random indicators with E[J_i] = p_i. Then

||Σ_{i=1}^n J_i − TP(µ, σ²)|| ≤ (√(Σ_{i=1}^n p_i³(1 − p_i)) + 2) / (Σ_{i=1}^n p_i(1 − p_i)),

where µ = Σ_{i=1}^n p_i and σ² = Σ_{i=1}^n p_i(1 − p_i).

Lemmas 3.6 and 3.7 provide, respectively, bounds for the total variation distance between two Poisson distributions and between two translated Poisson distributions with different parameters. The proof of Lemma 3.6 is postponed to the appendix, while the proof of Lemma 3.7 is provided in [3].

Lemma 3.6 Let λ_1, λ_2 ∈ R_+ \ {0}. Then

||Poisson(λ_1) − Poisson(λ_2)|| ≤ (1/2)(e^{|λ_1−λ_2|} − e^{−|λ_1−λ_2|}).

Lemma 3.7 ([3]) Let µ_1, µ_2 ∈ R and σ_1², σ_2² ∈ R_+ \ {0} be such that ⌊µ_1 − σ_1²⌋ ≤ ⌊µ_2 − σ_2²⌋. Then

||TP(µ_1, σ_1²) − TP(µ_2, σ_2²)|| ≤ |µ_1 − µ_2| / σ_1 + (|σ_1² − σ_2²| + 1) / σ_1².

3.3 Proof of the Main Result

In this section we complete the proof of Theorem 3.1. As argued above, it is enough to round the random variables {X_i}_{i∈L} into random variables {Y_i}_{i∈L} so that the total variation distance between X_L := Σ_{i∈L} X_i and Y_L := Σ_{i∈L} Y_i is small, and similarly for the subinterval M1. Our rounding will have a different objective in the two regions. When rounding the X_i's with i ∈ L, we aim to approximate the mean of X_L as tightly as possible. On the other hand, when rounding the X_i's with i ∈ M1, we give up on approximating the mean very tightly in order to also approximate the variance well. The details of the rounding follow.

Some notation first: Let us partition the interval [0, 1/2] into k/2 subintervals I_0, I_1,..., I_{k/2−1}, where I_0 = [0, 1/k), I_1 = [1/k, 2/k),..., I_{k/2−1} = [(k/2 − 1)/k, 1/2]. The intervals I_0,..., I_{k/2−1} define the partition of L ∪ M1 into the subsets I*_0, I*_1,..., I*_{k/2−1}, where I*_j = {i : E[X_i] ∈ I_j}, j = 0, 1,..., k/2 − 1. For all j ∈ {0,..., k/2 − 1} with I*_j ≠ ∅, let I*_j = {i^j_1, i^j_2,..., i^j_{n_j}} and, for all t ∈ {1,..., n_j}, let p^j_t := E[X_{i^j_t}] and δ^j_t := p^j_t − j/k. We proceed to define the rounding of the X_i's into the Y_i's in the intervals L and M1 separately.

Interval L := [0, 1/k^α) of small expectations. Observe first that L ⊆ I_0 ∪ ... ∪ I_{k^{1−α}−1}, and define the corresponding subset of the indices, L := I*_0 ∪ ... ∪ I*_{k^{1−α}−1}. We define the Y_i, i ∈ L, via the following iterative procedure. Our ultimate goal is to round the X_i's into Y_i's appropriately, so that the sums of the expectations of the X_i's and of the Y_i's are as close as possible. The rounding procedure is as follows.

i. ɛ_0 := 0;
ii. for j := 0 to k^{1−α} − 1:
 a. S_j := ɛ_j + Σ_{t=1}^{n_j} δ^j_t;
 b. m_j := ⌊k·S_j⌋; ɛ_{j+1} := S_j − m_j/k {assertion: m_j ≤ n_j; see justification next}
 c. set q^j_t := (j + 1)/k for t = 1,..., m_j and q^j_t := j/k for t = m_j + 1,..., n_j;
 d. for all t ∈ {1,..., n_j}, let Y_{i^j_t} be a {0, 1} random variable with expectation q^j_t;
iii. take the random variables Y_i, i ∈ L, to be mutually independent.

It is easy to see that, for all j ∈ {0,..., k^{1−α}}, ɛ_j < 1/k; this follows immediately from the description of the procedure, in particular Steps i and ii.b. This further implies that m_j ≤ n_j for all j, since at Step ii.b we have S_j = ɛ_j + Σ_{t=1}^{n_j} δ^j_t < 1/k + n_j/k = (n_j + 1)/k. Hence, the assertion following Step ii.b is satisfied. Finally, note that, for all j,

Σ_{t=1}^{n_j} q^j_t = m_j (j + 1)/k + (n_j − m_j) j/k = n_j j/k + m_j/k = n_j j/k + S_j − ɛ_{j+1} = Σ_{t=1}^{n_j} p^j_t + ɛ_j − ɛ_{j+1},

which, summed over j = 0,..., k^{1−α} − 1, telescopes to Σ_{i∈L} q_i = Σ_{i∈L} p_i + ɛ_0 − ɛ_{k^{1−α}}, and therefore implies:

Lemma 3.8 |Σ_{i∈L} E[Y_i] − Σ_{i∈L} E[X_i]| ≤ 1/k.

The following lemma characterizes the total variation distance between Σ_{i∈L} X_i and Σ_{i∈L} Y_i.

Lemma 3.9 ||Σ_{i∈L} X_i − Σ_{i∈L} Y_i|| ≤ 3/k^α.

Proof: By Theorem 3.3 we have that

||Σ_{i∈L} X_i − Poisson(Σ_{i∈L} p_i)|| ≤ (Σ_{i∈L} p_i²) / (Σ_{i∈L} p_i)

and

||Σ_{i∈L} Y_i − Poisson(Σ_{i∈L} q_i)|| ≤ (Σ_{i∈L} q_i²) / (Σ_{i∈L} q_i),

where p_i := E[X_i] and q_i := E[Y_i] for all i. The following lemma is proven in the appendix.

Lemma 3.10 For any u > 0 and any set {p_i}_{i∈I}, where p_i ∈ [0, u] for all i ∈ I, (Σ_{i∈I} p_i²) / (Σ_{i∈I} p_i) ≤ u.

Using Lemmas 3.10, 3.8 and 3.6 and the triangle inequality, we get that

||Σ_{i∈L} X_i − Σ_{i∈L} Y_i|| ≤ 2/k^α + (1/2)(e^{1/k} − e^{−1/k}) ≤ 3/k^α,

where we used the fact that (1/2)(e^{1/k} − e^{−1/k}) ≤ 1/k^α for sufficiently large k.

Interval M1 := [1/k^α, 1/2) of medium expectations. Observe first that M1 ⊆ I_{k^{1−α}} ∪ ... ∪ I_{k/2−1}, and define the corresponding subset of the indices, M1 := I*_{k^{1−α}} ∪ ... ∪ I*_{k/2−1}.
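Before turning to M1, steps (i)-(iii) for interval L can be sketched as follows (the parameters are illustrative; the assertions check the invariants used in the text: ɛ_j < 1/k forces m_j ≤ n_j, each q_i is a multiple of 1/k within 1/k of p_i, and the total mean moves by less than 1/k, which is Lemma 3.8).

```python
import random

random.seed(3)
k, alpha = 20, 0.75
cut = k ** (-alpha)                  # right endpoint of interval L
p = [random.uniform(0, cut) for _ in range(50)]

q = [None] * len(p)
eps = 0.0                            # the carried-over mass eps_j
for j in range(k // 2):
    idx = [i for i in range(len(p)) if j / k <= p[i] < (j + 1) / k]
    S = eps + sum(p[i] - j / k for i in idx)   # step (ii.a)
    m = int(k * S)                             # step (ii.b): floor(k * S_j)
    assert m <= len(idx)                       # the assertion of step (ii.b)
    eps = S - m / k                            # 0 <= eps_{j+1} < 1/k
    for t, i in enumerate(idx):                # step (ii.c)
        q[i] = (j + 1) / k if t < m else j / k

assert all(abs(qi * k - round(qi * k)) < 1e-9 for qi in q)   # multiples of 1/k
assert all(abs(qi - pi) <= 1 / k + 1e-9 for qi, pi in zip(q, p))
print(abs(sum(q) - sum(p)) < 1 / k)  # True: Lemma 3.8's mean preservation
```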
We define the Y_i, i ∈ M1, via the following procedure, which is slightly different from the one we used for the set of indices L. Our goal here is to approximate well both the mean and the variance of the sum Σ_{i∈M1} X_i. In fact, we will give up on approximating the mean as tightly as possible, which we did above, in order to achieve a good approximation of the variance. The rounding procedure is as follows.

for j := k^{1−α} to k/2 − 1:
 a. S_j := Σ_{t=1}^{n_j} δ^j_t;
 b. m_j := ⌊k·S_j⌋;
 c. set q^j_t := (j + 1)/k for t = 1,..., m_j and q^j_t := j/k for t = m_j + 1,..., n_j;
 d. for all t ∈ {1,..., n_j}, let Y_{i^j_t} be a {0, 1} random variable with expectation q^j_t;

and take the random variables Y_i, i ∈ M1, to be mutually independent. Lemma 3.11 characterizes the quality of the rounding procedure in terms of mean and variance. Defining ζ_j := Σ_{i∈I*_j} E[X_i] − Σ_{i∈I*_j} E[Y_i], we have

Lemma 3.11 For all j ∈ {k^{1−α},..., k/2 − 1}:
a. ζ_j = Σ_{t=1}^{n_j} δ^j_t − m_j/k;
b. 0 ≤ ζ_j < 1/k;
c. Σ_{i∈I*_j} Var[X_i] = n_j (j/k)(1 − j/k) + (1 − 2j/k) Σ_t δ^j_t − Σ_t (δ^j_t)²;
d. Σ_{i∈I*_j} Var[Y_i] = n_j (j/k)(1 − j/k) + (1 − 2j/k) m_j/k − m_j/k²;
e. Σ_{i∈I*_j} Var[X_i] − Σ_{i∈I*_j} Var[Y_i] = (1 − 2j/k) ζ_j − Σ_t (δ^j_t)² + m_j/k².

The following lemma bounds the total variation distance between the random variables Σ_{i∈M1} X_i and Σ_{i∈M1} Y_i.

Lemma 3.12 ||Σ_{i∈M1} X_i − Σ_{i∈M1} Y_i|| = O(1/k^{α+β−1}) + O(1/k^{1−α}) + O(1/√k) + O(1/k^{1−β}).

Proof: We distinguish two cases for the size of M1. For some β ∈ (0, 1), such that α + β > 1, let us distinguish two possibilities: (a) |M1| ≤ k^β; (b) |M1| > k^β. Each case is treated in one of the following lemmas.

Lemma 3.13 If |M1| ≤ k^β, then ||Σ_{i∈M1} X_i − Σ_{i∈M1} Y_i|| ≤ 1/k^{1−β}.

Proof: The proof follows from the coupling lemma and an easy coupling argument: each pair (X_i, Y_i) can be coupled so as to disagree with probability |p_i − q_i| ≤ 1/k, and there are at most k^β indices in M1. The details are postponed to the appendix.

Lemma 3.14 If |M1| > k^β, then ||Σ_{i∈M1} X_i − Σ_{i∈M1} Y_i|| ≤ O(1/k^{α+β−1}) + O(1/k^{1−α}) + O(1/√k).

Proof: By Theorem 3.5 we have that

||Σ_{i∈M1} X_i − TP(µ_1, σ_1²)|| ≤ (√(Σ_{i∈M1} p_i³(1 − p_i)) + 2) / (Σ_{i∈M1} p_i(1 − p_i))   (4)

and

||Σ_{i∈M1} Y_i − TP(µ_2, σ_2²)|| ≤ (√(Σ_{i∈M1} q_i³(1 − q_i)) + 2) / (Σ_{i∈M1} q_i(1 − q_i)),   (5)

where µ_1 = Σ_{i∈M1} p_i, µ_2 = Σ_{i∈M1} q_i, σ_1² = Σ_{i∈M1} p_i(1 − p_i), σ_2² = Σ_{i∈M1} q_i(1 − q_i), and p_i := E[X_i], q_i := E[Y_i] for all i. The following lemma is proven in the appendix.

Lemma 3.15 For any u ∈ (0, 1/2] and any set {p_i}_{i∈I}, where p_i ∈ [u, 1/2] for all i ∈ I,

(√(Σ_{i∈I} p_i³(1 − p_i)) + 2) / (Σ_{i∈I} p_i(1 − p_i)) ≤ (1 + u + 4u² − 8u³) / √(6 |I| u (1 − u)(1 − 4u² + 4u³)).

Applying the above lemma with u = 1/k^α and I = M1 (where, recall, |M1| > k^β), the above bound becomes O(1/k^{α+β−1}), which implies that ||Σ_{i∈M1} X_i − TP(µ_1, σ_1²)|| = O(1/k^{α+β−1}) and ||Σ_{i∈M1} Y_i − TP(µ_2, σ_2²)|| = O(1/k^{α+β−1}), where we used that the values {q_i}_{i∈M1} also lie in an interval of the same form (shifted by at most 1/k), so that the same bound applies to them. All that remains to do is to bound the total variation distance between the distributions TP(µ_1, σ_1²) and TP(µ_2, σ_2²) for the parameters µ_1, σ_1, µ_2, σ_2 specified above. The following claim is proved in the appendix.

Claim 3.16 For the parameters specified above, ||TP(µ_1, σ_1²) − TP(µ_2, σ_2²)|| ≤ O(1/k^{1−α}) + O(1/√k).

Claim 3.16, along with Equations (4) and (5) and the triangle inequality, implies that

||Σ_{i∈M1} X_i − Σ_{i∈M1} Y_i|| = O(1/k^{α+β−1}) + O(1/k^{1−α}) + O(1/√k).

Putting Everything Together. Take the random variables {Y_i}_i defined above to be mutually independent. It follows that

||Σ_i X_i − Σ_i Y_i|| = O(1/k^α) + O(1/k^{α+β−1}) + O(1/k^{1−α}) + O(1/√k) + O(1/k^{1−β}).

Setting α = β = 3/4, we get a total variation distance of O(1/k^{1/4}). A more delicate argument establishes an exponent of 1/2.

4 Open Problems

Can our PTAS be extended to an arbitrary fixed number of strategies? We believe so. A more sophisticated technique would subdivide, instead of the interval [0, 1] as our proof did, the (s − 1)-dimensional simplex into domains in which multinomial (instead of binomial) distributions would be approximated in different ways, possibly using the techniques of [26]. This way of extending our result already seems to work for s = 3, and we are hopeful that it will work for general fixed s.

Can the quadratic (in s) approximation bound of our pure Nash equilibrium algorithm be improved to linear? We believe so, and we conjecture that sλ is a lower bound.

Finally, we hope that the ways of thinking about anonymous games introduced in this paper will eventually lead to algorithms for the practical solution of this important class of games.

Acknowledgment: We want to thank Uri Feige for a helpful discussion.

References

[1] T. Abbott, D. Kane, P. Valiant. On the Complexity of Two-Player Win-Lose Games. FOCS, 2005.
[2] A. D. Barbour, L. Holst and S. Janson. Poisson Approximation. Oxford University Press, New York, 1992.
[3] A. D. Barbour and T. Lindvall. Translated Poisson Approximation for Markov Chains. Journal of Theoretical Probability, 19(3), July 2006.
[4] M. Blonski. Characterization of equilibria in large anonymous games. University of Mannheim, 2000.
[5] M. Blonski. Equilibrium Characterization in Large Anonymous Games. University of Mannheim, 2001.
[6] X. Chen and X. Deng. Settling the complexity of two-player Nash equilibrium. FOCS, 2006.
[7] X. Chen, X. Deng, and S.-H. Teng. Computing Nash equilibria: approximation and smoothed complexity. FOCS, 2006.
[8] S. Chien and A. Sinclair. Convergence to Approximate Nash Equilibria in Congestion Games. SODA, 2007.
[9] C. Daskalakis, P. Goldberg, and C. Papadimitriou. The complexity of computing a Nash equilibrium. STOC, 2006.

[10] C. Daskalakis, A. Mehta, and C. Papadimitriou. A note on approximate Nash equilibria. WINE, 2006.
[11] C. Daskalakis, A. Mehta, and C. Papadimitriou. Progress in Approximate Nash Equilibria. EC, 2007.
[12] A. Fabrikant, C. H. Papadimitriou and K. Talwar. The Complexity of Pure Nash Equilibria. STOC, 2004.
[13] T. Feder, H. Nazerzadeh, and A. Saberi. Approximating Nash Equilibria Using Small-Support Strategies. EC, 2007.
[14] D. Gale, H. W. Kuhn, and A. W. Tucker. On symmetric games. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games, 1:81–87, Princeton University Press, 1950.
[15] P. Goldberg and C. Papadimitriou. Reducibility among equilibrium problems. STOC, 2006.
[16] E. Kalai. Partially-Specified Large Games. WINE, 2005.
[17] R. Kannan and T. Theobald. Games of fixed rank: A hierarchy of bimatrix games. SODA, 2007.
[18] S. C. Kontogiannis, P. N. Panagopoulou, and P. G. Spirakis. Polynomial algorithms for approximating Nash equilibria of bimatrix games. WINE, 2006.
[19] R. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. Electronic Commerce, 2003.
[20] I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13:111–124, 1996.
[21] J. Nash. Noncooperative Games. Annals of Mathematics, 54:286–295, 1951.
[22] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.
[23] C. H. Papadimitriou and T. Roughgarden. Computing equilibria in multi-player games. SODA, 2005.
[24] C. H. Papadimitriou. Computing Correlated Equilibria in Multiplayer Games. STOC, 2005.
[25] A. Röllin. Translated Poisson approximation using exchangeable pair couplings. ArXiv Report, 2006.
[26] B. Roos. Multinomial and Krawtchouk Approximations to the Generalized Multinomial Distribution. Theory of Probability and Its Applications, 46:103–117, 2001.

APPENDIX

A Missing Proofs

Proof of Lemma 3.6: Without loss of generality assume that 0 < λ₁ ≤ λ₂ and denote δ = λ₂ − λ₁. For all i ∈ {0, 1, ...}, denote

p_i = e^{−λ₁} λ₁^i / i!  and  q_i = e^{−λ₂} λ₂^i / i!.

Finally, define I = {i : p_i ≥ q_i}. We have

Σ_{i∈I} (p_i − q_i) = Σ_{i∈I} (1/i!) (e^{−λ₁} λ₁^i − e^{−λ₁−δ} λ₂^i) ≤ Σ_{i∈I} (1/i!) e^{−λ₁} λ₁^i (1 − e^{−δ}) ≤ (1 − e^{−δ}) Σ_{i=0}^{∞} (1/i!) e^{−λ₁} λ₁^i = 1 − e^{−δ}.

On the other hand,

Σ_{i∉I} (q_i − p_i) = Σ_{i∉I} (1/i!) (e^{−λ₁−δ} (λ₁ + δ)^i − e^{−λ₁} λ₁^i) ≤ Σ_{i=0}^{∞} (1/i!) e^{−λ₁} ((λ₁ + δ)^i − λ₁^i) = e^{δ} Σ_{i=0}^{∞} (1/i!) e^{−λ₁−δ} (λ₁ + δ)^i − Σ_{i=0}^{∞} (1/i!) e^{−λ₁} λ₁^i = e^{δ} − 1.

Combining the above we get the result.

Proof of Lemma 3.10: For all i ∈ I and any choice of values p_j ∈ [0, u], j ∈ I \ {i}, define the function

f(x) = (x²(1 − x) + A) / (x(1 − x) + B),

where A = Σ_{j∈I\{i}} p_j²(1 − p_j) and B = Σ_{j∈I\{i}} p_j(1 − p_j); observe that A ≤ uB. If A = B = 0 then f(x) = x, so f achieves its maximum at x = u. If A ≠ 0 ≠ B, the derivative of f is

f′(x) = (x²(1 − x)² + Bx(2 − 3x) − A(1 − 2x)) / (x(1 − x) + B)².

Denoting by h(x) the numerator of the above expression, the derivative of h is

h′(x) = 2x(1 − x)(1 − 2x) + 2B(1 − 3x) + 2A > 0, for all x ∈ [0, u].

Therefore, h is increasing, which implies that h(x) = 0 has at most one root, and hence f′(x) = 0 has at most one root (since the denominator in the above expression for f′(x) is always positive). Note that f′(0) < 0 whereas f′(u) > 0.

Therefore, there exists a unique ρ ∈ (0, u) such that f′(x) < 0 for x ∈ (0, ρ), f′(ρ) = 0, and f′(x) > 0 for x ∈ (ρ, u), i.e. f is decreasing in (0, ρ) and increasing in (ρ, u). This implies that

max_{x∈[0,u]} f(x) = max{f(0), f(u)}.

But

f(u) = (u²(1 − u) + A) / (u(1 − u) + B) ≥ A/B = f(0),

since A ≤ uB. Therefore, max_x f(x) = f(u). From the above it follows that, independently of the values of Σ_{j∈I\{i}} p_j²(1 − p_j) and Σ_{j∈I\{i}} p_j(1 − p_j), f(x) is maximized at x = u. Therefore, the expression

(Σ_{i∈I} p_i²(1 − p_i)) / (Σ_{i∈I} p_i(1 − p_i))

is maximized when p_i = u for all i ∈ I. This implies

(Σ_{i∈I} p_i²(1 − p_i)) / (Σ_{i∈I} p_i(1 − p_i)) ≤ (|I| u²(1 − u)) / (|I| u(1 − u)) = u.

Proof of Lemma 3.1: Write p_i = t/k + δ_i, where t := ⌊k p_i⌋ is common to all i under consideration (all p_i lie in the same interval [t/k, (t + 1)/k)). For (a) we have

ζ = Σ_{i∈I} E[X_i] − Σ_{i∈I} E[Y_i] = Σ_{i=1}^{n′} p_i − Σ_{i=1}^{n′} q_i = Σ_{i=1}^{m} (δ_i − 1/k) + Σ_{i=m+1}^{n′} δ_i = Σ_{i=1}^{n′} δ_i − m/k.

To get (b) we notice that m = ⌊k S⌋ with S = Σ_{i=1}^{n′} δ_i, so that

ζ = S − ⌊k S⌋/k = (k S − ⌊k S⌋)/k ∈ [0, 1/k).

For (c), (d) and (e), observe that, for all i ∈ I, i.e. i ∈ {1, ..., n′},

Var[X_i] = p_i(1 − p_i) = E[X_i](1 − E[X_i]) = (t/k + δ_i)(1 − t/k − δ_i)

Var[Y_i] = q_i(1 − q_i) = ((t + 1)/k)(1 − (t + 1)/k) for i ∈ {1, ..., m}, and (t/k)(1 − t/k) for i ∈ {m + 1, ..., n′}.

Hence,

Σ_{i∈I} Var[X_i] = Σ_{i=1}^{n′} (t/k + δ_i)(1 − t/k − δ_i) = n′ (t/k)(1 − t/k) + (1 − 2t/k) Σ_{i=1}^{n′} δ_i − Σ_{i=1}^{n′} δ_i²,

Σ_{i∈I} Var[Y_i] = m ((t + 1)/k)(1 − (t + 1)/k) + (n′ − m)(t/k)(1 − t/k) = n′ (t/k)(1 − t/k) + (1 − 2t/k) m/k − m/k²,

and

Σ_{i∈I} Var[X_i] − Σ_{i∈I} Var[Y_i] = (1 − 2t/k)(Σ_{i=1}^{n′} δ_i − m/k) − Σ_{i=1}^{n′} δ_i² + m/k² = (1 − 2t/k) ζ + m/k² − Σ_{i=1}^{n′} δ_i².

Proof of Lemma 3.3: The coupling lemma implies that, for any joint distribution on ({X_i}_i, {Y_i}_i), the following is satisfied:

‖Σ_{i∈M} X_i − Σ_{i∈M} Y_i‖ ≤ Pr[Σ_{i∈M} X_i ≠ Σ_{i∈M} Y_i].

A union bound further implies

Pr[Σ_{i∈M} X_i ≠ Σ_{i∈M} Y_i] ≤ Σ_{i∈M} Pr[X_i ≠ Y_i].

Hence, for any joint distribution on ({X_i}_i, {Y_i}_i), the following is satisfied:

‖Σ_{i∈M} X_i − Σ_{i∈M} Y_i‖ ≤ Σ_{i∈M} Pr[X_i ≠ Y_i].   (3)

Let us now choose a joint distribution on ({X_i}_i, {Y_i}_i) in which, for all i, X_i and Y_i are coupled in such a way that

Pr[X_i ≠ Y_i] ≤ 1/k.

This is easy to do since, by construction, |p_i − q_i| ≤ 1/k for all i. Plugging into Formula (3) the particular joint distribution just described yields

‖Σ_{i∈M} X_i − Σ_{i∈M} Y_i‖ ≤ |M|/k ≤ k^{β−1}.

Proof of Lemma 3.5: For all i ∈ I and any choice of values p_j ∈ [u, 1/2], j ∈ I \ {i}, define the function

f(x) = (x³(1 − x) + A) / (x(1 − x) + B)²,  x ∈ [u, 1/2],

where A = Σ_{j∈I\{i}} p_j³(1 − p_j) and B = Σ_{j∈I\{i}} p_j(1 − p_j). For the sake of the argument let us extend the range of f to [0, 1/2]. The derivative of f is

f′(x) = h(x) / (x(1 − x) + B)³,

where the denominator is positive for all x ∈ [0, 1/2]. Denoting by h(x) the numerator of the above expression, the derivative of h satisfies h′(x) > 0 for all x ∈ (0, 1/2]. Therefore, h is increasing in (0, 1/2), which implies that h(x) = 0 has at most one root in (0, 1/2), and hence f′(x) = 0 has at most one root in (0, 1/2) (since the denominator in the above expression for f′(x) is always positive). Note that f′(0) < 0 whereas f′(1/2) > 0. Therefore, there exists a unique ρ ∈ (0, 1/2) such that f′(x) < 0 for x ∈ (0, ρ), f′(ρ) = 0, and f′(x) > 0 for x ∈ (ρ, 1/2), i.e. f is decreasing in (0, ρ) and increasing in (ρ, 1/2). This implies that

max_{x∈[u,1/2]} f(x) = max{f(u), f(1/2)}.

Hence, the expression

(Σ_{i∈I} p_i³(1 − p_i)) / (Σ_{i∈I} p_i(1 − p_i))²

is bounded by

max_{m∈{0,...,|I|}} (m/16 + (|I| − m) u³(1 − u)) / (m/4 + (|I| − m) u(1 − u))²,   (4)

which we further bound by max_{x∈[0,|I|]} g(x), where

g(x) := (x/16 + (|I| − x) u³(1 − u)) / (x/4 + (|I| − x) u(1 − u))².

The derivative of g is

g′(x) = (A(u) − B(u) x) / (x/4 + (|I| − x) u(1 − u))³,

where note that the denominator is positive for all u ∈ (0, 1/2] and the numerator is of the form A(u) − B(u) x, where A(u) > 0 and B(u) > 0 for all u ∈ (0, 1/2).

Hence, if we take ζ* := A(u)/B(u), then g′(x) > 0 for x ∈ (0, ζ*), g′(ζ*) = 0, and g′(x) < 0 for x ∈ (ζ*, +∞), i.e. g is increasing in (0, ζ*) and decreasing in (ζ*, +∞). This implies that g achieves its maximum at x := ζ*. The maximum value itself is

g(ζ*) = (1 + 2u + 4u² − 8u³) / (16 |I| u (1 − u − 4u² + 4u³)).

This concludes the proof.

Proof of Claim 3.6: From Lemma 3.7 it follows that

‖TP(μ, σ²) − TP(μ̂, σ̂²)‖ ≤ max{ |μ − μ̂|/σ + (|σ² − σ̂²| + 1)/σ², |μ − μ̂|/σ̂ + (|σ² − σ̂²| + 1)/σ̂² }.

Denoting J = {⌈k^α⌉, ..., ⌊k/2⌋}, and writing ζ_t, m_t, n_t and δ_i for the quantities of Lemma 3.1 on the interval [t/k, (t + 1)/k), we have that

μ − μ̂ = Σ_{i∈M} E[X_i] − Σ_{i∈M} E[Y_i] = Σ_{t∈J} ζ_t,

σ² = Σ_{i∈M} Var[X_i] = Σ_{t∈J} ( n_t (t/k)(1 − t/k) + (1 − 2t/k) Σ_i δ_i − Σ_i δ_i² ).

Similarly,

σ̂² = Σ_{i∈M} Var[Y_i] = Σ_{t∈J} ( n_t (t/k)(1 − t/k) + (1 − 2t/k) m_t/k − m_t/k² ).

Finally,

σ² − σ̂² = Σ_{i∈M} Var[X_i] − Σ_{i∈M} Var[Y_i] = Σ_{t∈J} ( (1 − 2t/k) ζ_t + m_t/k² − Σ_i δ_i² ),

where observe that σ² − σ̂² ≥ 0, since for every t ∈ J

(1 − 2t/k) ζ_t + m_t/k² − Σ_i δ_i² ≥ (1 − 2t/k) ζ_t + m_t/k² − ζ_t/k − m_t/k² = ζ_t (1 − 2t/k − 1/k) ≥ 0,

using that Σ_i δ_i² ≤ (1/k) Σ_i δ_i = ζ_t/k + m_t/k².

We proceed to bound each of the terms |μ − μ̂|/σ, (σ² − σ̂² + 1)/σ², |μ − μ̂|/σ̂ and (σ² − σ̂² + 1)/σ̂² separately. We have

|μ − μ̂|/σ = (Σ_{t∈J} ζ_t)/σ ≤ (Z/k)/σ,

where Z := |{t ∈ J : n_t ≥ 1}|, since ζ_t < 1/k always and ζ_t = 0 whenever n_t = 0. For Z ∈ [1, ⌊k/2⌋ − ⌈k^α⌉], let F(Z) denote the resulting lower bound on σ/(Σ_{t∈J} ζ_t) as a function of Z. The derivative of F satisfies F′(Z) < 0 for large enough k. Hence F is decreasing in [1, ⌊k/2⌋ − ⌈k^α⌉], so it achieves its minimum at Z = ⌊k/2⌋ − ⌈k^α⌉. The minimum itself is Ω(k^{1/2}). Hence,

|μ − μ̂|/σ = O(1/k^{1/2}).

Similarly, we get |μ − μ̂|/σ̂ = O(1/k^{1/2}). It remains to bound the terms (σ² − σ̂² + 1)/σ² and (σ² − σ̂² + 1)/σ̂². We have

(σ² − σ̂² + 1)/σ² ≤ ( Σ_{t∈J} ((1 − 2t/k) ζ_t + m_t/k²) + 1 )/σ² ≤ ( Z/k + M̄/k² + 1 )/σ²,

where M̄ := Σ_{t∈J} m_t and, as before, Z = |{t ∈ J : n_t ≥ 1}|. The second term of the above expression is bounded as follows:

(M̄/k² + 1)/σ² ≤ (M̄/k² + 1)/( Σ_{t∈J} n_t (t/k)(1 − t/k) ) = O(1/k^{1−α}),

using that m_t ≤ n_t and (t/k)(1 − t/k) ≥ k^{α−1}/2 for all t ∈ J. The first term is bounded as follows. For Z ∈ [1, ⌊k/2⌋ − ⌈k^α⌉], let G(Z) denote the resulting lower bound on σ² k/Z as a function of Z. The derivative of G satisfies G′(Z) > 0 for large enough k (since Z ≤ k/2). Hence G is increasing in [1, ⌊k/2⌋ − ⌈k^α⌉], so it achieves its minimum at Z = 1. The minimum itself is G(1) = Ω(k^{1−α}).

Hence,

(Z/k)/σ² = O(1/k^{1−α}),

which together with the above implies

(σ² − σ̂² + 1)/σ² = O(1/k^{1−α}).

Similarly, we get (σ² − σ̂² + 1)/σ̂² = O(1/k^{1−α}). Putting everything together we get that

‖TP(μ, σ²) − TP(μ̂, σ̂²)‖ ≤ O(1/k^{1−α}) + O(1/k^{1/2}).
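The rounding step analyzed above is easy to experiment with numerically. The following sketch is ours, not the paper's, and all function names are invented: it rounds Bernoulli expectations p_1, ..., p_n to multiples of 1/k (rounding the first m fractional parts up so that the totals nearly match), checks the rounding lemma's guarantee that ζ = Σ p_i − Σ q_i lies in [0, 1/k), and computes the exact total variation distance between Σ X_i and Σ Y_i by dynamic programming over the two Poisson binomial distributions.

```python
def round_to_grid(p, k):
    """Round each p_i down to a multiple of 1/k, then round the first m
    of them up, where m = floor(k * sum of fractional parts delta_i).
    This keeps zeta = sum(p) - sum(q) in [0, 1/k), up to float error."""
    floors = [int(pi * k) / k for pi in p]
    deltas = [pi - fi for pi, fi in zip(p, floors)]
    m = int(k * sum(deltas))  # floor of k * S
    return [fi + 1.0 / k if i < m else fi for i, fi in enumerate(floors)]

def poisson_binomial_pmf(probs):
    """Exact pmf of the sum of independent Bernoulli(probs), by DP."""
    pmf = [1.0]
    for pi in probs:
        new = [0.0] * (len(pmf) + 1)
        for s, mass in enumerate(pmf):
            new[s] += mass * (1 - pi)      # this Bernoulli is 0
            new[s + 1] += mass * pi        # this Bernoulli is 1
        pmf = new
    return pmf

def tv_distance(pmf1, pmf2):
    """Total variation distance between two pmfs on {0, ..., n}."""
    return 0.5 * sum(abs(a - b) for a, b in zip(pmf1, pmf2))

if __name__ == "__main__":
    import random
    random.seed(0)
    n, k = 50, 20
    p = [random.random() for _ in range(n)]
    q = round_to_grid(p, k)
    zeta = sum(p) - sum(q)
    assert -1e-9 < zeta < 1.0 / k   # Lemma 3.1(b), numerically
    d = tv_distance(poisson_binomial_pmf(p), poisson_binomial_pmf(q))
    print(f"zeta = {zeta:.5f}, TV distance = {d:.5f}")
```

Increasing k makes the observed total variation distance shrink, in line with the O(1/k^{1/4}) bound proved above (though the empirical decay is typically faster than the worst-case rate).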


Separable and Low-Rank Continuous Games

Separable and Low-Rank Continuous Games LIDS Technical Report 2760 1 Separable and Low-Rank Continuous Games Noah D. Stein, Asuman Ozdaglar, and Pablo A. Parrilo July 23, 2007 Abstract In this paper, we study nonzero-sum separable games, which

More information

Computing an Extensive-Form Correlated Equilibrium in Polynomial Time

Computing an Extensive-Form Correlated Equilibrium in Polynomial Time Computing an Extensive-Form Correlated Equilibrium in Polynomial Time Wan Huang and Bernhard von Stengel Department of Mathematics, London School of Economics, London WC2A 2AE, United Kingdom wan.huang@gmail.com,

More information

The Complexity of the Permanent and Related Problems

The Complexity of the Permanent and Related Problems The Complexity of the Permanent and Related Problems Tim Abbott and Alex Schwendner May 9, 2007 Contents 1 The Permanent 2 1.1 Variations on the Determinant...................... 2 1.2 Graph Interpretation...........................

More information

The Complexity of Games on Highly Regular Graphs (Extended Abstract)

The Complexity of Games on Highly Regular Graphs (Extended Abstract) The Complexity of Games on Highly Regular Graphs (Extended Abstract) Constantinos Daskalakis Christos H. Papadimitriou July 7, 2005 Abstract We present algorithms and complexity results for the problem

More information

Design and Analysis of Algorithms Lecture Notes on Convex Optimization CS 6820, Fall Nov 2 Dec 2016

Design and Analysis of Algorithms Lecture Notes on Convex Optimization CS 6820, Fall Nov 2 Dec 2016 Design and Analysis of Algorithms Lecture Notes on Convex Optimization CS 6820, Fall 206 2 Nov 2 Dec 206 Let D be a convex subset of R n. A function f : D R is convex if it satisfies f(tx + ( t)y) tf(x)

More information

GAMES OF FIXED RANK: A HIERARCHY OF BIMATRIX GAMES

GAMES OF FIXED RANK: A HIERARCHY OF BIMATRIX GAMES GAMES OF FIXED RANK: A HIERARCHY OF BIMATRIX GAMES RAVI KANNAN AND THORSTEN THEOBALD Abstract. We propose and investigate a hierarchy of bimatrix games (A, B), whose (entry-wise) sum of the pay-off matrices

More information

PRGs for space-bounded computation: INW, Nisan

PRGs for space-bounded computation: INW, Nisan 0368-4283: Space-Bounded Computation 15/5/2018 Lecture 9 PRGs for space-bounded computation: INW, Nisan Amnon Ta-Shma and Dean Doron 1 PRGs Definition 1. Let C be a collection of functions C : Σ n {0,

More information

DOUBLE SERIES AND PRODUCTS OF SERIES

DOUBLE SERIES AND PRODUCTS OF SERIES DOUBLE SERIES AND PRODUCTS OF SERIES KENT MERRYFIELD. Various ways to add up a doubly-indexed series: Let be a sequence of numbers depending on the two variables j and k. I will assume that 0 j < and 0

More information

The Folk Theorem for Finitely Repeated Games with Mixed Strategies

The Folk Theorem for Finitely Repeated Games with Mixed Strategies The Folk Theorem for Finitely Repeated Games with Mixed Strategies Olivier Gossner February 1994 Revised Version Abstract This paper proves a Folk Theorem for finitely repeated games with mixed strategies.

More information

Optimal Auctions with Correlated Bidders are Easy

Optimal Auctions with Correlated Bidders are Easy Optimal Auctions with Correlated Bidders are Easy Shahar Dobzinski Department of Computer Science Cornell Unversity shahar@cs.cornell.edu Robert Kleinberg Department of Computer Science Cornell Unversity

More information

Problem 1: Compactness (12 points, 2 points each)

Problem 1: Compactness (12 points, 2 points each) Final exam Selected Solutions APPM 5440 Fall 2014 Applied Analysis Date: Tuesday, Dec. 15 2014, 10:30 AM to 1 PM You may assume all vector spaces are over the real field unless otherwise specified. Your

More information

GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES

GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES Journal of Applied Analysis Vol. 7, No. 1 (2001), pp. 131 150 GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES M. DINDOŠ Received September 7, 2000 and, in revised form, February

More information

Realization Plans for Extensive Form Games without Perfect Recall

Realization Plans for Extensive Form Games without Perfect Recall Realization Plans for Extensive Form Games without Perfect Recall Richard E. Stearns Department of Computer Science University at Albany - SUNY Albany, NY 12222 April 13, 2015 Abstract Given a game in

More information

NETS 412: Algorithmic Game Theory March 28 and 30, Lecture Approximation in Mechanism Design. X(v) = arg max v i (a)

NETS 412: Algorithmic Game Theory March 28 and 30, Lecture Approximation in Mechanism Design. X(v) = arg max v i (a) NETS 412: Algorithmic Game Theory March 28 and 30, 2017 Lecture 16+17 Lecturer: Aaron Roth Scribe: Aaron Roth Approximation in Mechanism Design In the last lecture, we asked how far we can go beyond the

More information

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits

A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits A Lower Bound for the Size of Syntactically Multilinear Arithmetic Circuits Ran Raz Amir Shpilka Amir Yehudayoff Abstract We construct an explicit polynomial f(x 1,..., x n ), with coefficients in {0,

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

and the compositional inverse when it exists is A.

and the compositional inverse when it exists is A. Lecture B jacques@ucsd.edu Notation: R denotes a ring, N denotes the set of sequences of natural numbers with finite support, is a generic element of N, is the infinite zero sequence, n 0 R[[ X]] denotes

More information

SUMMER REVIEW PACKET. Name:

SUMMER REVIEW PACKET. Name: Wylie East HIGH SCHOOL SUMMER REVIEW PACKET For students entering Regular PRECALCULUS Name: Welcome to Pre-Calculus. The following packet needs to be finished and ready to be turned the first week of the

More information

arxiv:cs/ v1 [cs.gt] 26 Feb 2006

arxiv:cs/ v1 [cs.gt] 26 Feb 2006 On the Approximation and Smoothed Complexity of Leontief Market Equilibria arxiv:cs/0602090v1 [csgt] 26 Feb 2006 Li-Sha Huang Department of Computer Science Tsinghua University Beijing, China Abstract

More information

arxiv: v1 [math.co] 18 Dec 2018

arxiv: v1 [math.co] 18 Dec 2018 A Construction for Difference Sets with Local Properties Sara Fish Ben Lund Adam Sheffer arxiv:181.07651v1 [math.co] 18 Dec 018 Abstract We construct finite sets of real numbers that have a small difference

More information

The number of edge colorings with no monochromatic cliques

The number of edge colorings with no monochromatic cliques The number of edge colorings with no monochromatic cliques Noga Alon József Balogh Peter Keevash Benny Sudaov Abstract Let F n, r, ) denote the maximum possible number of distinct edge-colorings of a simple

More information

Propagating terraces and the dynamics of front-like solutions of reaction-diffusion equations on R

Propagating terraces and the dynamics of front-like solutions of reaction-diffusion equations on R Propagating terraces and the dynamics of front-like solutions of reaction-diffusion equations on R P. Poláčik School of Mathematics, University of Minnesota Minneapolis, MN 55455 Abstract We consider semilinear

More information

Maximization of Submodular Set Functions

Maximization of Submodular Set Functions Northeastern University Department of Electrical and Computer Engineering Maximization of Submodular Set Functions Biomedical Signal Processing, Imaging, Reasoning, and Learning BSPIRAL) Group Author:

More information

Random Variable. Pr(X = a) = Pr(s)

Random Variable. Pr(X = a) = Pr(s) Random Variable Definition A random variable X on a sample space Ω is a real-valued function on Ω; that is, X : Ω R. A discrete random variable is a random variable that takes on only a finite or countably

More information

Statistics for Economists. Lectures 3 & 4

Statistics for Economists. Lectures 3 & 4 Statistics for Economists Lectures 3 & 4 Asrat Temesgen Stockholm University 1 CHAPTER 2- Discrete Distributions 2.1. Random variables of the Discrete Type Definition 2.1.1: Given a random experiment with

More information

Massachusetts Institute of Technology 6.854J/18.415J: Advanced Algorithms Friday, March 18, 2016 Ankur Moitra. Problem Set 6

Massachusetts Institute of Technology 6.854J/18.415J: Advanced Algorithms Friday, March 18, 2016 Ankur Moitra. Problem Set 6 Massachusetts Institute of Technology 6.854J/18.415J: Advanced Algorithms Friday, March 18, 2016 Ankur Moitra Problem Set 6 Due: Wednesday, April 6, 2016 7 pm Dropbox Outside Stata G5 Collaboration policy:

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information