Survival of Dominated Strategies under Evolutionary Dynamics


Josef Hofbauer and William H. Sandholm

September 27, 2010

Abstract

We prove that any deterministic evolutionary dynamic satisfying four mild requirements fails to eliminate strictly dominated strategies in some games. We also show that existing elimination results for evolutionary dynamics are not robust to small changes in the specifications of the dynamics. Numerical analysis reveals that dominated strategies can persist at nontrivial frequencies even when the level of domination is not small.

1. Introduction

One fundamental issue in evolutionary game theory concerns the relationship between its predictions and those provided by traditional, rationality-based solution concepts. Indeed, much of the early interest in the theory among economists is due to its ability to justify traditional equilibrium predictions as consequences of myopic decisions made by simple agents. Some of the best known results in this vein link the rest points of a deterministic evolutionary dynamic with the Nash equilibria of the game being played. Under most dynamics considered in the literature, the set of rest points includes all Nash equilibria of

* We thank Drew Fudenberg, Larry Samuelson, a number of anonymous referees, a co-editor, and many seminar audiences for helpful comments, and Emin Dokumacı for outstanding research assistance. Many figures in this article were created using the Dynamo open-source software suite (Sandholm and Dokumacı (2007)). Financial support from NSF Grants SES-0092145, SES-0617753, and SES-0851580, the Centre for Economic Learning and Social Evolution (ELSE) at University College London, and the University of Vienna is gratefully acknowledged.

Department of Mathematics, University of Vienna, Nordbergstrasse 15, A-1090 Vienna, Austria. e-mail: josef.hofbauer@univie.ac.at; website: http://homepage.univie.ac.at/josef.hofbauer.
Department of Economics, University of Wisconsin, 1180 Observatory Drive, Madison, WI 53706, USA. e-mail: whs@ssc.wisc.edu; website: http://www.ssc.wisc.edu/~whs.

the underlying game, and under many of these dynamics the sets of rest points and Nash equilibria are identical.[1]

In order to improve upon these results, one might look for dynamics that converge to Nash equilibrium from most initial conditions regardless of the game at hand. Such a finding would provide a strong defense of the Nash prediction, as agents who began play at some disequilibrium state could be expected to find their way to Nash equilibrium. Unfortunately, results of this kind cannot be proved. Hofbauer and Swinkels (1996) and Hart and Mas-Colell (2003) show that no reasonable evolutionary dynamic converges to Nash equilibrium in all games: there are some games in which cycling or more complicated limit behavior far from any Nash equilibrium is the only plausible long run prediction.

These negative results lead us to consider a more modest question. Rather than seek evolutionary support for equilibrium play, we instead turn our attention to a more basic rationality requirement: namely, the avoidance of strategies that are strictly dominated by a pure strategy. Research on this question to date has led to a number of positive results. Two of the canonical evolutionary dynamics are known to eliminate strictly dominated strategies, at least from most initial conditions. Akin (1980) shows that starting from any interior population state, the replicator dynamic (Taylor and Jonker (1978)) eliminates strategies that are strictly dominated by a pure strategy. Samuelson and Zhang (1992), building on work of Nachbar (1990), extend this result to a broad class of evolutionary dynamics driven by imitation: namely, dynamics under which strategies' percentage growth rates are ordered by their payoffs.[2] Elimination results are also available for dynamics based on traditional choice criteria: the best response dynamic (Gilboa and Matsui (1991)) eliminates strictly dominated strategies by construction, as under this dynamic revising agents always switch to optimal strategies.

Since the elimination of strategies strictly dominated by a pure strategy is the mildest requirement employed in standard game-theoretic analyses, it may seem unsurprising that two basic evolutionary dynamics obey this dictum. In this paper, we argue that evolutionary support for the elimination of dominated strategies is more tenuous than the results noted above suggest. In particular, we prove that all evolutionary dynamics satisfying four mild conditions (continuity, positive correlation, Nash stationarity, and innovation) must fail to eliminate strictly dominated strategies in some games. Dynamics satisfying these conditions include not only well-known dynamics from the evolutionary literature, but also slight modifications of the dynamics under which elimination is known to occur. In effect, this paper shows that the dynamics known to eliminate strictly dominated strategies in all games are the only ones one should expect to do so, and that even these elimination results are knife-edge cases.

An important predecessor of this study is the work of Berger and Hofbauer (2006), who present a game in which a strictly dominated strategy survives under the Brown-von Neumann-Nash (BNN) dynamic (Brown and von Neumann (1950)). We begin the present study by showing how Berger and Hofbauer's (2006) analysis can be extended to a variety of other dynamics, including the Smith dynamic (Smith (1984)) as well as generalizations of both the BNN and Smith dynamics (Hofbauer (2000), Sandholm (2005, 2010a)). While this analysis is relatively simple, it is not general, as it depends on the functional forms of the dynamics at issue. Since in practice it is difficult to know exactly how agents will update their choices over time, a more compelling elimination result would require only minimal structure.

Our main theorem provides such a result. Rather than specifying functional forms for the evolutionary dynamics under consideration, the theorem allows for any dynamic satisfying four mild conditions. The first, continuity, asks that the dynamic change continuously as a function of the payoff vector and the population state. The second, positive correlation, is a weak monotonicity condition: it demands that away from equilibrium, the correlation between strategies' payoffs and growth rates always be positive. The third condition, Nash stationarity, asks that states that are not Nash equilibria (that is, states where payoff improvement opportunities are available) are not rest points of the dynamic.

[1] For results of the latter sort, see Brown and von Neumann (1950), Smith (1984), and Sandholm (2005, 2010a).
[2] Samuelson and Zhang (1992) and Hofbauer and Weibull (1996) also introduce classes of imitative dynamics under which strategies strictly dominated by a mixed strategy are eliminated.
The final condition, innovation, is a requirement that has force only at non-Nash boundary states: if at such a state some unused strategy is a best response, the growth rate of this strategy must be positive. The last two conditions rule out the replicator dynamic and the other purely imitative dynamics noted above; at the same time, they allow arbitrarily close approximations of these dynamics, under which agents usually imitate successful opponents, but occasionally select new strategies directly.

To prove the main theorem, we construct a four-strategy game in which one strategy is strictly dominated by another pure strategy. We show that under any dynamic satisfying our four conditions, the strictly dominated strategy survives along solution trajectories starting from most initial conditions.

Because evolutionary dynamics are defined by nonlinear differential equations, our formal results rely on topological properties, and so provide limited quantitative information about the conditions under which dominated strategies survive. We therefore supplement our formal approach with numerical analysis. This analysis reveals that

dominated strategies with payoffs substantially lower than those of their dominating strategies can be played at nontrivial frequencies in perpetuity.

Since elimination of dominated strategies is a basic requirement of traditional game theory, the fact that such strategies can persist under evolutionary dynamics may seem counterintuitive. A partial resolution of this puzzle lies in the fact that survival of dominated strategies is intrinsically a disequilibrium phenomenon. To understand this point, remember that evolutionary dynamics capture the aggregate behavior of agents who follow simple myopic rules. These rules lead agents to switch to strategies whose current payoffs are good, though not necessarily optimal. When a solution trajectory of an evolutionary dynamic converges, the payoffs to each strategy converge as well. Because payoffs become fixed, even simple rules are enough to ensure that only optimal strategies are chosen. In formal terms, the limits of convergent solution trajectories must be Nash equilibria; it follows a fortiori that when these limits are reached, strictly dominated strategies are not chosen.

Of course, it is well understood that solutions of evolutionary dynamics need not converge, but instead may enter limit cycles or more complicated limit sets.[3] When solutions do not converge, payoffs remain in flux. In this situation, it is not obvious whether choice rules favoring strategies whose current payoffs are relatively high will necessarily eliminate strategies that perform well at many states, but that are never optimal. To the contrary, the analysis in this paper demonstrates that if play remains in disequilibrium, even strategies that are strictly dominated by other pure strategies can persist indefinitely.

One possible reaction to our results is to view them as an argument against the relevance of evolutionary dynamics for modeling economic behavior. If an agent notices that a strategy is strictly dominated, then he would do well to avoid playing it, whatever his rule of thumb might suggest. We agree with the latter sentiment: we do not expect agents, even simple ones, to play strategies they know to be dominated. At the same time, we feel that the ability to recognize dominated strategies should not be taken for granted. In complicated games with large numbers of participants, it may not always be reasonable to expect agents to know the payoffs to all strategies at every population state, or to be able to make all the comparisons needed to identify a dominated strategy. It is precisely in such large, complex games that agents might be expected to make decisions by applying rules of thumb. Our analysis suggests that if agents cannot directly exclude dominated strategies from their repertoire of choices, then these strategies need not fade from use through a lack of positive reinforcement.

[3] For specific nonconvergence results, see Shapley (1964), Jordan (1993), Gaunersdorfer and Hofbauer (1995), Hofbauer and Swinkels (1996), Hart and Mas-Colell (2003), and Sparrow et al. (2008); see Sandholm (2009a) for a survey.

To prove our main result, we must show that for each member of a large class of deterministic evolutionary dynamics, there is a game in which dominated strategies survive. To accomplish this most directly, we use the same construction for all dynamics in the class. We begin by introducing a three-strategy game with nonlinear payoffs, the hypnodisk game, under which solution trajectories of all dynamics in the class enter cycles from almost all initial conditions. We then modify this game by adding a dominated fourth strategy, and show that the proportion of the population playing this strategy stays bounded away from zero along solutions starting from most initial conditions.

Since the game we construct to ensure cyclical behavior is rather unusual, one might wonder whether our survival results are of practical relevance, rather than being a mere artifact of a pathological construction. In fact, while introducing a special game is quite convenient for proving the main result, we feel that our basic message (that in the absence of convergence, myopic heuristics need not root out dominated strategies in large games) is of broader relevance. In Section 5.1, we explain why the proof of the main theorem does not depend on the introduction of a complicated game in an essential way. Analyses there and elsewhere in the paper suggest that in any game for which some dynamic covered by the theorem fails to converge, there are augmented games with dominated strategies that the dynamic allows to survive.

Section 2 introduces population games and evolutionary dynamics. Section 3 establishes the survival results for excess payoff dynamics and pairwise comparison dynamics, families that contain the BNN and Smith dynamics, respectively. Section 4 states and proves the main result. Section 5 presents our numerical analyses, and illustrates the sensitivity of existing elimination results to slight modifications of the dynamics in question. Section 6 concludes. Auxiliary results and proofs omitted from the text are provided in the Appendix.

2. The Model

2.1 Population Games

We consider games played by a single unit mass population of agents.[4] All agents choose from the finite set of strategies S = {1, ..., n}. The set of population states is therefore the simplex X = {x ∈ R^n_+ : Σ_{i∈S} x_i = 1}, where x_i is the proportion of agents choosing strategy i ∈ S. The standard basis vector e_i ∈ R^n represents the state at which all agents choose strategy i.

[4] Versions of our results can also be proved in multipopulation models.

If we take the set of strategies as fixed, we can identify a game with a Lipschitz continuous payoff function F: X → R^n, which assigns each population state x ∈ X a vector of payoffs F(x) ∈ R^n. The component F_i: X → R represents the payoffs to strategy i alone. We also let F̄(x) = Σ_{i∈S} x_i F_i(x) denote the population's average payoff, and let B^F(x) = argmax_{y∈X} y · F(x) denote the set of (mixed) best responses at population state x.

The simplest examples of population games are generated by random matching in symmetric normal form games. An n-strategy symmetric normal form game is defined by a payoff matrix A ∈ R^{n×n}. A_ij is the payoff a player obtains when he chooses strategy i and his opponent chooses strategy j; this payoff does not depend on whether the player in question is called player 1 or player 2. When agents are randomly matched to play this game, the (expected) payoff to strategy i at population state x is F_i(x) = Σ_{j∈S} A_ij x_j; hence, the population game associated with A is the linear game F(x) = Ax. While random matching generates population games with linear payoffs, many population games that arise in applications have payoffs that are nonlinear in the population state (see Section 5.1). Games with nonlinear payoff functions will play a leading role in the analysis to come.

2.2 Evolutionary Dynamics

An evolutionary dynamic assigns each population game F an ordinary differential equation ẋ = V^F(x) on the simplex X. One simple and general way of defining an evolutionary dynamic is via a growth rate function g: R^n × X → R^n; here g_i(π, x) represents the (absolute) growth rate of strategy i as a function of the current payoff vector π ∈ R^n and the current population state x ∈ X. Our notation suppresses the dependence of g on the number of strategies n. To ensure that the simplex is forward invariant under the induced differential equations, the function g must satisfy g_i(π, x) ≥ 0 whenever x_i = 0, and Σ_{i∈S} g_i(π, x) = 0. In words: strategies that are currently unused cannot become less common, and the sum of all strategies' growth rates must equal zero. A growth rate function g satisfying these conditions defines an evolutionary dynamic as follows:

ẋ_i = V^F_i(x) = g_i(F(x), x).

One can also build evolutionary dynamics from a more structured model that not

only provides explicit microfoundations for the dynamics, but also is inclusive enough to encompass all dynamics considered in the literature.[5] In this model, the growth rate function g is replaced by a revision protocol ρ: R^n × X → R^{n×n}_+, which describes the process through which individual agents make decisions. As time passes, agents are chosen at random from the population and granted opportunities to switch strategies. When an i player receives such an opportunity, he switches to strategy j with probability proportional to the conditional switch rate ρ_ij(π, x). Aggregate behavior in the game F is then described by the differential equation

(1)  ẋ_i = V^F_i(x) = Σ_{j∈S} x_j ρ_ji(F(x), x) − x_i Σ_{j∈S} ρ_ij(F(x), x),

which is known as the mean dynamic generated by ρ and F. The first term in (1) captures the inflow of agents into strategy i from other strategies, while the second term captures the outflow of agents from strategy i to other strategies.

Table I presents four basic examples of evolutionary dynamics, along with revision protocols that generate them. Further discussion of these dynamics can be found in Sections 3.1, 5.3, and 5.4 below.

Table I: Four evolutionary dynamics and their revision protocols.

  replicator (Taylor and Jonker (1978)):
    ρ_ij = x_j [F_j − F_i]_+
    ẋ_i = x_i (F_i(x) − F̄(x))

  best response (Gilboa and Matsui (1991)):
    ρ_ij = B^F(x)_j
    ẋ ∈ B^F(x) − x

  BNN (Brown and von Neumann (1950)):
    ρ_ij = [F_j − F̄]_+
    ẋ_i = [F_i(x) − F̄(x)]_+ − x_i Σ_{j∈S} [F_j(x) − F̄(x)]_+

  Smith (Smith (1984)):
    ρ_ij = [F_j − F_i]_+
    ẋ_i = Σ_{j∈S} x_j [F_i(x) − F_j(x)]_+ − x_i Σ_{j∈S} [F_j(x) − F_i(x)]_+

[5] For explicit accounts of microfoundations, see Benaïm and Weibull (2003) and Sandholm (2003).
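The mean dynamic (1) is easy to check numerically. The sketch below is our own illustration, not part of the paper; it assumes random matching in bad Rock-Paper-Scissors with the illustrative values a = 1, b = 2, computes (1) for the replicator protocol from Table I, and confirms that it reduces to the closed-form replicator dynamic ẋ_i = x_i(F_i(x) − F̄(x)).

```python
import numpy as np

def mean_dynamic(x, F, rho):
    """Equation (1): xdot_i = sum_j x_j rho_ji - x_i sum_j rho_ij."""
    R = rho(F, x)                # n x n matrix of conditional switch rates
    inflow = x @ R               # (x @ R)_i = sum_j x_j rho_ji
    outflow = x * R.sum(axis=1)  # x_i * sum_j rho_ij
    return inflow - outflow

def rho_replicator(F, x):
    # Imitative protocol from Table I: rho_ij = x_j [F_j - F_i]_+
    return x[None, :] * np.maximum(F[None, :] - F[:, None], 0.0)

A = np.array([[0., -2., 1.], [1., 0., -2.], [-2., 1., 0.]])  # bad RPS, a=1, b=2
x = np.array([0.5, 0.3, 0.2])
F = A @ x
xdot = mean_dynamic(x, F, rho_replicator)

# Closed form from Table I: xdot_i = x_i (F_i - Fbar)
assert np.allclose(xdot, x * (F - x @ F))
assert abs(xdot.sum()) < 1e-12   # growth rates sum to zero, as required
```

The two assertions verify, at one arbitrary interior state, that the protocol-based computation agrees with the closed form and that the trajectory stays on the simplex.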

3. Survival under the BNN, Smith, and Related Dynamics

Using a somewhat informal analysis, Berger and Hofbauer (2006) argue that strictly dominated strategies can survive under the BNN dynamic (Brown and von Neumann (1950)). To prepare for our main result, we formalize and extend Berger and Hofbauer's (2006) arguments to prove a survival result for two families of evolutionary dynamics; these families include the BNN dynamic and the Smith dynamic (Smith (1984)) as their simplest members.

3.1 Excess Payoff Dynamics and Pairwise Comparison Dynamics

The two families of dynamics we consider are based on revision protocols of the forms

(2)  ρ_ij = φ(F_j − F̄)   and
(3)  ρ_ij = φ(F_j − F_i),

where in each case, φ: R → R_+ is a Lipschitz continuous function satisfying

(4)  sgn(φ(u)) = sgn([u]_+)   and   (d/du)_+ φ(u)|_{u=0} > 0.

The families of evolutionary dynamics obtained by substituting expressions (2) and (3) into the mean dynamic (1) are called excess payoff dynamics (Weibull (1996); Hofbauer (2000); Sandholm (2005)) and pairwise comparison dynamics (Sandholm (2010a)), respectively. The BNN and Smith dynamics are the prototypical members of these two families: examining Table I, we see that these two dynamics are those obtained from protocols (2) and (3) when φ is the semilinear function φ(u) = [u]_+.

Protocols of forms (2) and (3) describe distinct revision processes. Under (2), an agent who receives a revision opportunity has a positive probability of switching to any strategy whose payoff exceeds the population's average payoff; the agent's current payoff has no bearing on his switching rates. Under (3), an agent who receives a revision opportunity has a positive probability of switching to any strategy whose payoff exceeds that of his current strategy. While the latter protocols lead to mean dynamics with more complicated functional forms (compare the BNN and Smith dynamics in Table I), they also seem more realistic than those of form (2): protocols satisfying (3) make an agent's decisions depend on his current payoffs, and do not require him to know the average payoff obtained in the population as a whole.
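As a sanity check on (2) and (3), the following sketch (ours; the variable names and the a = 1, b = 2 payoffs are illustrative) builds both protocol families with the semilinear choice φ(u) = [u]_+ and verifies that the resulting mean dynamics match the BNN and Smith formulas from Table I.

```python
import numpy as np

phi = lambda u: np.maximum(u, 0.0)   # phi(u) = [u]_+, which satisfies (4)

def mean_dynamic(x, F, R):
    """Equation (1) for a matrix R of conditional switch rates."""
    return x @ R - x * R.sum(axis=1)

A = np.array([[0., -2., 1.], [1., 0., -2.], [-2., 1., 0.]])  # bad RPS, a=1, b=2
x = np.array([0.2, 0.5, 0.3])
F = A @ x
Fbar = x @ F

# Excess payoff protocol (2): rho_ij = phi(F_j - Fbar), independent of i.
R_ep = np.tile(phi(F - Fbar), (3, 1))
# Pairwise comparison protocol (3): rho_ij = phi(F_j - F_i).
R_pc = phi(F[None, :] - F[:, None])

# Closed forms from Table I:
bnn = phi(F - Fbar) - x * phi(F - Fbar).sum()
smith = phi(F[:, None] - F[None, :]) @ x - x * phi(F[None, :] - F[:, None]).sum(axis=1)

assert np.allclose(mean_dynamic(x, F, R_ep), bnn)
assert np.allclose(mean_dynamic(x, F, R_pc), smith)
```

Swapping in another φ satisfying (4), say φ(u) = ([u]_+)², produces other members of the two families with the same qualitative structure.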

3.2 Theorem and Proof

Theorem 3.1 shows that excess payoff dynamics and pairwise comparison dynamics allow dominated strategies to survive in some games.[6]

Theorem 3.1. Suppose that V is an evolutionary dynamic based on a revision protocol ρ of form (2) or (3), where the function φ satisfies condition (4). Then there is a game F_d such that under V^{F_d}, along solutions from most initial conditions, there is a strictly dominated strategy played by a fraction of the population that is bounded away from 0 and that exceeds 1/6 infinitely often as time approaches infinity.

While the computations needed to prove Theorem 3.1 differ according to the dynamic under consideration, the three main steps are always the same. First, we show that for each of the relevant dynamics, play converges to a limit cycle in the bad Rock-Paper-Scissors game (Figure 1). Second, we introduce a new strategy, Twin, which duplicates the strategy Scissors, and show that in the resulting four-strategy game, solutions to the dynamic from almost all initial conditions converge to a cycling attractor; this attractor sits on the plane where Scissors and Twin are played by equal numbers of agents, and has regions where both Scissors and Twin are played by more than one sixth of the population (Figure 2). Third, we uniformly reduce the payoff of the new strategy by d, creating a "feeble twin", and use a continuity argument to show that the attractor persists (Figure 3). Since the feeble twin is a strictly dominated strategy, this last step completes the proof of the theorem. We now present the proof in more detail, relegating some details of the argument to the Appendix.

Proof. Fix a dynamic V (i.e., a map from population games F to differential equations ẋ = V^F(x)) generated by a revision protocol ρ that satisfies the conditions of the theorem. We will construct a game F_d in which a dominated strategy survives under V^{F_d}. To begin, we introduce the bad Rock-Paper-Scissors game:

                ⎛  0  −b   a ⎞
  G(x) = Ax =   ⎜  a   0  −b ⎟ x,    where b > a > 0.
                ⎝ −b   a   0 ⎠

(Since b > a, the cost of losing a match exceeds the benefit of winning a match.) For any choices of b > a > 0, the unique Nash equilibrium of G is y = (1/3, 1/3, 1/3). Although our

[6] In the statements of Theorems 3.1 and 4.1, "most initial conditions" means all initial conditions outside an open set of measure ε, where ε > 0 is specified before the choice of the game F_d.

proof does not require this fact, it can be shown as a corollary of Lemma 3.2 below that y is unstable under the dynamic V^G.

[Figure 1: The Smith dynamic in bad RPS. Colors represent speeds of motion: red is faster, blue is slower.]

Next, following Berger and Hofbauer (2006), we introduce a four-strategy game F, which we obtain from bad RPS by introducing an identical twin of Scissors:

                      ⎛  0  −b   a   a ⎞
  (5)  F(x) = Ãx =    ⎜  a   0  −b  −b ⎟ x.
                      ⎜ −b   a   0   0 ⎟
                      ⎝ −b   a   0   0 ⎠

The set of Nash equilibria of F is the line segment NE = {x ∈ X : x = (1/3, 1/3, α, 1/3 − α)}.

We now present two lemmas that describe the behavior of the dynamic V^F for game F. The first lemma concerns the local stability of the set of Nash equilibria NE.

Lemma 3.2. The set NE is a repellor under the dynamic V^F: there is a neighborhood U of NE such that all trajectories starting in U \ NE leave U and never return.

The proof of this lemma, which is based on construction of appropriate Lyapunov functions, is presented in Appendix B.

Since V is an excess payoff dynamic or a pairwise comparison dynamic, the rest points of V^F are precisely the Nash equilibria of F (see Sandholm (2005, 2010a)). Therefore,

[Figure 2: The Smith dynamic in bad RPS with a twin.]

[Figure 3: The Smith dynamic in bad RPS with a feeble twin.]

Lemma 3.2 implies that solutions of V^F from initial conditions outside NE do not converge to rest points. Our next lemma constrains the limit behavior of these solutions.

Since the revision protocol ρ treats strategies symmetrically, and since Scissors and Twin always earn the same payoffs (F_3(x) ≡ F_4(x)), it follows that ρ_j3(F(x), x) = ρ_j4(F(x), x) and ρ_3j(F(x), x) = ρ_4j(F(x), x) for all x ∈ X. These equalities yield a simple expression for the rate of change of the difference in utilizations of strategies 3 and 4:

(6)  ẋ_3 − ẋ_4 = ( Σ_{j∈S} x_j ρ_j3 − x_3 Σ_{j∈S} ρ_3j ) − ( Σ_{j∈S} x_j ρ_j4 − x_4 Σ_{j∈S} ρ_4j )
              = −(x_3 − x_4) Σ_{j∈S} ρ_3j(F(x), x).

Since conditional switch rates ρ_ij are nonnegative by definition, equation (6) implies that the plane P = {x ∈ X : x_3 = x_4} on which the identical twins receive equal weight is invariant under V^F, and that distance from P is nonincreasing under V^F. In fact, we can show further that

Lemma 3.3. Solutions of the dynamic V^F starting outside the set NE converge to the plane P.

Proving Lemma 3.3 is straightforward when ρ is of the excess payoff form (2), since in this case it can be shown that ẋ_3 − ẋ_4 < 0 whenever x_3 > x_4 and x ∉ NE, and that ẋ_3 − ẋ_4 > 0 whenever x_3 < x_4 and x ∉ NE. But when ρ is of the pairwise comparison form (3), one needs to establish that solutions to V^F cannot become stuck in regions where ẋ_3 = ẋ_4. The proof of Lemma 3.3 is provided in Appendix B.

Lemmas 3.2 and 3.3 imply that all solutions of V^F other than those starting in NE converge to an attractor A, a set that is compact (see Appendix A), is disjoint from the set NE, is contained in the invariant plane P, and encircles the Nash equilibrium x = (1/3, 1/3, 1/6, 1/6) (see Figure 2). It follows that there are portions of A where more than one sixth of the population plays Twin.

Finally, we modify the game F by making Twin "feeble": in other words, by uniformly reducing its payoff by d:

                        ⎛   0    −b     a    a ⎞
  F_d(x) = Ã_d x =      ⎜   a     0    −b   −b ⎟ x.
                        ⎜  −b     a     0    0 ⎟
                        ⎝ −b−d   a−d   −d   −d ⎠
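The twin symmetry and identity (6) can be verified numerically. The sketch below is ours, with illustrative parameters a = 1, b = 2; it uses the Smith protocol ρ_ij = [F_j − F_i]_+ as a concrete pairwise comparison instance in the twin game (5).

```python
import numpy as np

b, a = 2.0, 1.0
At = np.array([[ 0., -b,  a,  a],   # bad RPS with a twin, game (5)
               [ a,  0., -b, -b],
               [-b,  a,  0., 0.],
               [-b,  a,  0., 0.]])

x = np.array([0.4, 0.2, 0.3, 0.1])  # interior state with x_3 != x_4
F = At @ x
R = np.maximum(F[None, :] - F[:, None], 0.0)   # Smith: rho_ij = [F_j - F_i]_+
xdot = x @ R - x * R.sum(axis=1)               # mean dynamic (1)

assert np.isclose(F[2], F[3])   # twins always earn identical payoffs
# Identity (6): xdot_3 - xdot_4 = -(x_3 - x_4) * sum_j rho_3j
assert np.isclose(xdot[2] - xdot[3], -(x[2] - x[3]) * R[2].sum())
```

Since the right-hand side of (6) has the opposite sign of x_3 − x_4 whenever Σ_j ρ_3j > 0, the gap between the twins' utilization shrinks, which is the mechanism behind Lemma 3.3.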

If d > 0, strategy 4 is strictly dominated by strategy 3. Increasing d from 0 continuously changes the game from F to F^d, and so continuously changes the dynamic from V^F to V^{F^d} (where continuity is with respect to the supremum norm topology). It thus follows from results on continuation of attractors (Theorem A.1 in Appendix A) that for small domination levels d, the attractor A of V^F continues to an attractor A^d that is contained in a neighborhood of A, and that the basin of attraction of A^d contains all points outside of a thin tube around the set NE. On the attractor A, the speed of rotation under V^F around the segment NE is bounded away from 0. Therefore, by continuity, the attractor A^d of V^{F^d} must encircle NE, and so must contain states at which x₄, the weight on the strictly dominated strategy Twin, is more than 1/6. By the same logic, solutions of V^{F^d} that converge to A^d have ω-limit sets with these same properties. In conclusion, we have shown that most solutions of V^{F^d} converge to the attractor A^d, a set on which x₄ is bounded away from 0, and that these solutions satisfy x₄ > 1/6 infinitely often in the long run. This completes the proof of Theorem 3.1.

It is worth noting that the number 1/6, the bound that the weight on the dominated strategy continually exceeds, is not as large as possible. By replacing A, a cyclically symmetric version of bad Rock-Paper-Scissors, with an asymmetric version of this game, we can move the unstable Nash equilibrium from y* = (1/3, 1/3, 1/3) to a state where the fraction of the population choosing Scissors is as close to 1 as desired (see Gaunersdorfer and Hofbauer (1995)). Then repeating the rest of the proof above, we find that the bound of 1/6 in the statement of Theorem 3.1 can be replaced by any number less than 1/2.

The analysis above makes explicit use of the functional forms of excess payoff and pairwise comparison dynamics.
This occurs first in the proof of Lemma 3.2, which states that the set of Nash equilibria of bad RPS with a twin is a repellor. The Lyapunov functions used to prove this lemma depend on the dynamics' functional forms; indeed, there are evolutionary dynamics for which the equilibrium of bad RPS is attracting instead of repelling. Functional forms are also important in proving Lemma 3.3, which states that almost all solutions to dynamics from the two classes lead to the plane on which the identical twins receive equal weights. For arbitrary dynamics, particularly ones that do not respect the symmetry of the game, convergence to this plane is not guaranteed. To establish our main result, in which nothing is presumed about functional forms, both of these steps from the proof above will need to be replaced by more general arguments.

4. The Main Theorem

4.1 Statement of the Theorem

While the proof of Theorem 3.1 takes advantage of the functional forms of excess payoff and pairwise comparison dynamics, the survival of dominated strategies is a more general phenomenon. We now introduce a set of mild conditions that are enough to yield this result.

(C) Continuity: g is Lipschitz continuous.
(PC) Positive correlation: If V^F(x) ≠ 0, then V^F(x)′F(x) > 0.
(NS) Nash stationarity: If V^F(x) = 0, then x ∈ NE(F).
(IN) Innovation: If x ∉ NE(F), x_i = 0, and e_i ∈ B^F(x), then (V^F(x))_i > 0.

Continuity (C) requires that small changes in aggregate behavior or payoffs not lead to large changes in the law of motion V^F(x) = g(F(x), x). Since discontinuous revision protocols can only be executed by agents with extremely accurate information, this condition seems natural in most contexts where evolutionary models are appropriate. Of course, this condition excludes the best response dynamic from our analysis, but it does not exclude continuous approximations thereof; see Section 5.4.

Positive correlation (PC) is a mild payoff monotonicity condition. It requires that whenever the population is not at rest, there is a positive correlation between strategies' growth rates and payoffs. 7 From a geometric point of view, condition (PC) requires that the directions of motion V^F(x) and the payoff vectors F(x) always form acute angles with one another. This interpretation will be helpful for understanding the constructions to come.

Nash stationarity (NS) requires that the dynamic V^F only be at rest at Nash equilibria of F. This condition captures the idea that agents eventually recognize payoff improvement opportunities, preventing the population from settling down at a state where such opportunities are present. 8 In a similar spirit, innovation (IN) requires that when a non-Nash population state includes an unused optimal strategy, this strategy's growth rate must be strictly positive.
In other words, if an unplayed strategy is sufficiently rewarding, some members of the population will discover it and select it.

7 Requiring growth rates to respect payoffs appears to work against the survival of dominated strategies. At the same time, some structure must be imposed on the dynamics in order to make headway with our analysis, and we would hesitate to consider a dynamic that did not satisfy a condition in the spirit of (PC) as a general model of evolution in games. Even so, we discuss the prospects for omitting this condition in Section 5.5.
8 The converse of this condition, that all Nash equilibria are rest points, follows easily from condition (PC); see Sandholm (2001).
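Condition (PC) is easy to verify numerically for a given dynamic. As an illustrative sketch (the BNN dynamic and the bad RPS payoffs are assumed here as examples; they are not part of the condition itself), the following checks that growth rates and payoffs form acute angles at randomly sampled states:

```python
import numpy as np

A = np.array([[ 0., -2.,  1.],
              [ 1.,  0., -2.],
              [-2.,  1.,  0.]])        # bad RPS with a = 1, b = 2

def bnn(x):
    """BNN mean dynamic: xdot_i = [pi_i - avg]_+ - x_i * sum_j [pi_j - avg]_+."""
    pi = A @ x
    excess = np.maximum(pi - x @ pi, 0.0)
    return excess - x * excess.sum()

rng = np.random.default_rng(0)
checked = 0
for _ in range(100):
    x = rng.dirichlet(np.ones(3))      # random state in the simplex
    v = bnn(x)
    if np.linalg.norm(v) > 1e-6:       # away from rest points,
        assert v @ (A @ x) > 0         # growth rates correlate with payoffs
        checked += 1
```

For the BNN dynamic the inner product V^F(x)′F(x) equals the sum of squared excess payoffs, so the assertion holds exactly whenever the state is not at rest; the sampling merely illustrates this.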

A few further comments about conditions (PC), (NS), and (IN) may be helpful in interpreting our results. First, condition (PC) is among the weakest monotonicity conditions proposed in the evolutionary literature. 9 Thus, our arguments that appeal to this condition are robust, in that they apply to any dynamic that respects the payoffs from the underlying game to some weak extent. Second, since condition (PC) requires a positive correlation between growth rates and payoffs at all population states, it rules out evolutionary dynamics under which the boundary of the state space is repelling due to mutations or other forms of noise. Consequently, condition (PC) excludes the possibility that a dominated strategy survives for trivial reasons of this sort. Third, conditions (NS) and (IN) both rule out dynamics based exclusively on imitation. At the same time, both conditions are satisfied by dynamics under which agents usually imitate, but occasionally evaluate strategies in a more direct fashion. We will present this idea in some detail in Section 5.3.

The main result of this paper is Theorem 4.1.

Theorem 4.1. Suppose the evolutionary dynamic V satisfies (C), (PC), (NS), and (IN). Then there is a game F^d such that under V^{F^d}, along solutions from most initial conditions, there is a strictly dominated strategy played by a fraction of the population bounded away from 0.

Before proceeding, we should point out that the conclusion of Theorem 4.1 is weaker than that of Theorem 3.1 in one notable respect: while Theorem 3.1 ensured that at least 1/6 of the population would play the dominated strategy infinitely often, Theorem 4.1 only ensures that the strategy is always used by a proportion of the population bounded away from 0. The reason for this weaker conclusion is the absence of any assumption that the dynamic V^F treats different strategies symmetrically. Adding such a symmetry assumption would allow us to recover the stronger conclusion. 10 See Section 4.2.4 for further discussion.

9 Conditions similar to (PC) have been proposed, for example, in Friedman (1991), Swinkels (1993), and Sandholm (2001).
10 The proof of Theorem 4.1 establishes that the dynamic V^{F^d} for the game F^d admits an attractor on which the proportion of agents using a dominated strategy is bounded away from zero, and whose basin contains all initial conditions in X outside a set of small but positive measure. It therefore follows from Theorem A.1 that the dominated strategy would continue to survive if the dynamic were subject to small perturbations representing evolutionary drift, as studied by Binmore and Samuelson (1999).

4.2 Proof of the Theorem

As we noted earlier, the proof of Theorem 3.1 took advantage of the functional forms of the dynamics at issue. Since Theorem 4.1 provides no such structure, its proof will

require some new ideas. Our first task is to construct a replacement for the bad RPS game. More precisely, we seek a three-strategy game in which dynamics satisfying condition (PC) will fail to converge to Nash equilibrium from almost all initial conditions. Our construction relies on the theory of potential games, developed in the normal form context by Monderer and Shapley (1996) and Hofbauer and Sigmund (1998), and in the population game context by Sandholm (2001, 2009b).

4.2.1 Potential Games

A population game F is a potential game if there exists a continuously differentiable function f: Rⁿ₊ → R satisfying ∇f(x) = F(x) for all x ∈ X. Put differently, each strategy's payoff function must equal the appropriate partial derivative of the potential function: ∂f/∂x_i(x) = F_i(x) for all i ∈ S and x ∈ X. Games satisfying this condition include common interest games and congestion games, among many others.

A basic fact about potential games is that reasonable evolutionary dynamics increase potential: if the dynamic V^F satisfies condition (PC), then along each solution trajectory {x_t} we have that

d/dt f(x_t) = ∇f(x_t)′ ẋ_t = F(x_t)′ V^F(x_t) ≥ 0,

with equality only at Nash equilibria. This observation along with standard results from dynamical systems imply that each solution trajectory of V^F converges to a connected set of Nash equilibria; see Sandholm (2001).

As an example, suppose that agents are randomly matched to play the pure coordination game

C =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ].

The resulting population game, F_C(x) = Cx = x, is a potential game; its potential

Figure 4: A coordination game. (i) The potential function. (ii) The projected payoff vector field.

function, f_C(x) = (1/2) x′Cx = (1/2)((x₁)² + (x₂)² + (x₃)²), is the convex function pictured in Figure 4(i). Solutions to any evolutionary dynamic satisfying condition (PC) ascend this function. Indeed, solutions from almost all initial conditions converge to a vertex of X, that is, to a strict equilibrium of F_C.

The ability to draw the game F_C itself will prove useful in the analysis to come. Notice that F_C is a map from the simplex X ⊂ R³ to R³, and so can be viewed as a vector field. Rather than draw F_C as a vector field in R³, we draw a projected version of F_C on the hyperplane in R³ that contains the simplex. 11 The vectors drawn in Figure 4(ii) represent the directions of maximal increase of the function f_C, and so point outward from the center of the simplex. Dynamics satisfying condition (PC) always travel at acute angles to the vectors in Figure 4(ii), and so tend toward the vertices of X; solutions from almost all initial conditions converge to a vertex of X.

As a second example, suppose that agents are randomly matched to play the anticoordination game −C. In Figures 5(i) and 5(ii), we draw the resulting population game F_{−C}(x) = −Cx = −x and its concave potential function f_{−C}(x) = −(1/2) x′Cx = −(1/2)((x₁)² + (x₂)² + (x₃)²). Both pictures reveal that under any evolutionary dynamic satisfying condition (PC), all solution trajectories converge to the unique Nash equilibrium x* = (1/3, 1/3, 1/3).

11 More precisely, we draw the vector field ΦF_C, where Φ = I − (1/3)11′ ∈ R^{3×3} is the orthogonal projection of R³ onto TX = {z ∈ R³: Σ_{i∈S} z_i = 0}, the tangent space of the simplex X. The projection Φ forces the components of ΦF_C(x) to sum to zero while preserving their differences, so that ΦF_C(x) preserves all information about incentives contained in the payoff vector F_C(x).
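The ascent of potential under a (PC) dynamic can be illustrated numerically. The sketch below assumes the BNN dynamic as one (PC) example and an explicit Euler discretization with step 0.01; neither choice comes from the paper:

```python
import numpy as np

def bnn_coord(x):
    """BNN mean dynamic for the coordination game F_C(x) = Cx = x (C = I)."""
    pi = x.copy()
    excess = np.maximum(pi - x @ pi, 0.0)
    return excess - x * excess.sum()

f = lambda x: 0.5 * (x ** 2).sum()      # potential f_C(x) = (1/2) x'Cx

x = np.array([0.25, 0.35, 0.40])        # arbitrary initial state
values = [f(x)]
for _ in range(2000):
    x = x + 0.01 * bnn_coord(x)         # Euler step
    values.append(f(x))

# Potential never decreases along the discretized trajectory ...
assert all(b >= a - 1e-12 for a, b in zip(values, values[1:]))
# ... and the solution heads to a vertex of X (a strict equilibrium).
assert x.argmax() == 2
```

Because f_C is quadratic, each Euler step changes f by h·F(x)′V(x) + (h²/2)|V(x)|², both terms nonnegative under (PC), so the discretized potential is monotone as well.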

Figure 5: An anticoordination game. (i) The potential function. (ii) The projected payoff vector field.

4.2.2 The Hypnodisk Game

We now use the coordination game F_C and the anticoordination game F_{−C} to construct our replacement for bad RPS. While F_C and F_{−C} are potential games with linear payoffs, our new game will have neither of these properties. The construction is easiest to describe in geometric terms. Begin with the coordination game F_C(x) = Cx pictured in Figure 4(ii). Then draw two circles centered at state x* = (1/3, 1/3, 1/3) with radii 0 < r < R < 1/√6, as shown in Figure 6(i); the second inequality ensures that both circles are contained in the simplex. Finally, twist the portion of the vector field lying outside of the inner circle in a clockwise direction, excluding larger and larger circles as the twisting proceeds, so that the outer circle is reached when the total twist is 180°. The resulting vector field is pictured in Figure 6(ii). It is described analytically by

H(x) = cos(θ(x)) [ x₁ − 1/3 ; x₂ − 1/3 ; x₃ − 1/3 ] + (1/√3) sin(θ(x)) [ x₂ − x₃ ; x₃ − x₁ ; x₁ − x₂ ] + (1/3) [ 1 ; 1 ; 1 ],

where θ(x) equals 0 when |x − x*| ≤ r, equals π when |x − x*| ≥ R, and varies linearly in |x − x*| in between. We call the game H the hypnodisk game.
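The boundary behavior of H can be checked directly. The following sketch implements the rotation formula as reconstructed above (the exact rotation convention and the radii are assumptions for illustration) and verifies that H agrees with the coordination game inside the inner circle and, after projection, with the anticoordination game outside the outer circle:

```python
import numpy as np

r, R = 0.10, 0.25                        # assumed radii, 0 < r < R < 1/sqrt(6)
center = np.full(3, 1/3)

def theta(x):
    d = np.linalg.norm(x - center)
    return np.pi * np.clip((d - r) / (R - r), 0.0, 1.0)

def H(x):
    v = x - center
    rot = np.array([x[1] - x[2], x[2] - x[0], x[0] - x[1]])  # assumed direction
    t = theta(x)
    return np.cos(t) * v + np.sin(t) / np.sqrt(3) * rot + center

Phi = np.eye(3) - np.ones((3, 3)) / 3    # projection onto the tangent space TX

x_in = np.array([0.36, 0.33, 0.31])      # a state inside the inner circle
assert np.allclose(H(x_in), x_in)        # H coincides with F_C(x) = x there

x_out = np.array([0.7, 0.2, 0.1])        # a state outside the outer circle
assert np.allclose(Phi @ H(x_out), Phi @ (-x_out))   # incentives of F_{-C}
```

Outside the outer circle H(x) = −x + (2/3)1 rather than −x itself, but the two differ by a constant multiple of 1, which the projection Φ annihilates, so the incentives are those of the anticoordination game.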

Figure 6: Construction of the hypnodisk game. (i) Projected payoff vector field for the coordination game. (ii) Projected payoff vector field for the hypnodisk game.

What does this construction accomplish? Inside the inner circle, H is identical to the coordination game F_C. Thus, solutions to dynamics satisfying (PC) starting at states in the inner circle besides x* must leave the inner circle. At states outside the outer circle, the drawing of H is identical to the drawing of the anticoordination game F_{−C}. 12 Therefore, solutions to dynamics satisfying (PC) that begin outside the outer circle must enter the outer circle. Finally, at each state x in the annulus bounded by the two circles, H(x) is not a componentwise constant vector. Therefore, states in the annulus are not Nash equilibria, and so are not rest points of dynamics satisfying (PC). We assemble these observations in the following lemma.

Lemma 4.2. Suppose that V is an evolutionary dynamic that satisfies conditions (C) and (PC), and let H be the hypnodisk game. Then every solution to V^H other than the stationary solution at x* enters the annulus with radii r and R and never leaves.

In fact, since there are no rest points in the annulus, the Poincaré-Bendixson Theorem implies that every nonstationary solution to V^H converges to a limit cycle.

4.2.3 The Twin

Now, let F be the four-strategy game obtained from H by adding a twin: F_i(x₁, x₂, x₃, x₄) = H_i(x₁, x₂, x₃ + x₄) for i ∈ {1, 2, 3}, and F₄(x) = F₃(x). The set of Nash equilibria of F is the line segment NE = {x ∈ X: x₁ = x₂ = x₃ + x₄ = 1/3}. Let

I = {x ∈ X: (x₁ − 1/3)² + (x₂ − 1/3)² + (x₃ + x₄ − 1/3)² ≤ r²} and
O = {x ∈ X: (x₁ − 1/3)² + (x₂ − 1/3)² + (x₃ + x₄ − 1/3)² ≤ R²}

be concentric cylindrical regions in X surrounding NE, as pictured in Figure 7. By construction, we have that

F(x) = Cx =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 1 ]
[ 0 0 1 1 ] x

12 At states x outside the outer circle, H(x) = −x + (2/3)1 ≠ −x = F_{−C}(x). But since ΦH(x) = −x + (1/3)1 = ΦF_{−C}(x) at these states, the pictures of H and F_{−C}, and hence the incentives in the two games, are the same.

Figure 7: Regions O, I, and D = O \ I.

at all x ∈ I. Therefore, solutions to dynamics satisfying (PC) starting in I \ NE ascend the potential function f_C(x) = (1/2)((x₁)² + (x₂)² + (x₃ + x₄)²) until leaving the set I. At states outside the set O, we have that F(x) = −Cx, so solutions starting in X \ O ascend f_{−C}(x) = −f_C(x) until entering O. In summary:

Lemma 4.3. Suppose that V is an evolutionary dynamic that satisfies conditions (C) and (PC), and let F be the hypnodisk with a twin game. Then every solution to V^F other than the stationary solutions at states in NE enters region D = O \ I and never leaves.

4.2.4 The Feeble Twin

To prove Theorem 3.1, we argued in Lemma 3.3 that under any of the dynamics addressed by the theorem, nonstationary solution trajectories equalize the utilization levels of identical twin strategies. If we presently focus on dynamics that not only satisfy conditions (C), (PC), and (IN), but also treat different strategies symmetrically, we can argue that in the hypnodisk with a twin game F, all nonstationary solutions of V^F converge not only to region D, but also to the plane P = {x ∈ X: x₃ = x₄}. Continuing with the argument from Section 3 then allows us to conclude that in F^d, the game obtained from F by turning strategy 4 into a feeble twin (that is, by reducing the payoff to strategy

Figure 8: The best response correspondence of the hypnodisk game.

4 uniformly by d > 0), the fraction x₄ playing the feeble twin exceeds 1/6 infinitely often.

Since we would like a result that imposes as little structure as possible on permissible evolutionary dynamics, Theorem 4.1 avoids the assumption that different strategies are treated symmetrically. Since this means that agents may well be biased against choosing the dominated strategy, we can no longer prove that the fraction playing it will repeatedly exceed 1/6. But we can still prove that the dominated strategy survives. To accomplish this, it is enough to show that in game F, most solutions of the dynamic V^F converge to a set on which x₄ is bounded away from 0. If we can do this, then repeating the continuity argument that concluded the proof of Theorem 3.1 shows that in the game F^d, the dominated strategy 4 survives.

A complete proof that most solutions of V^F converge to a set on which x₄ is bounded away from 0 is presented in Appendix C. We summarize the argument here. To begin, it can be shown that all solutions to V^F starting outside a small neighborhood of the segment of Nash equilibria NE converge to an attractor A, a compact set that is contained in region D and that is an invariant set of the dynamic V^F. Now suppose by way of contradiction that the attractor A intersects Z = {x ∈ X: x₄ = 0}, the face of X on which Twin is unused. The Lipschitz continuity of the dynamic V^F implies that backward solutions starting in Z cannot enter X \ Z. Since A is forward and backward invariant under V^F, the fact that A intersects Z implies the existence of a closed orbit γ ⊂ A ∩ Z that circumnavigates the disk I ∩ Z. Examining the best response

correspondence of the hypnodisk game (Figure 8), we find that such an orbit γ must pass through a region in which strategy 3 is a best response. But since the twin strategy 4 is also a best response in this region, innovation (IN) tells us that solutions passing through this region must reenter the interior of X, contradicting that the attractor A intersects the face Z.

5. Discussion

5.1 Constructing Games in which Dominated Strategies Survive

If an evolutionary dynamic satisfies monotonicity condition (PC), all of its rest points are Nash equilibria. It follows that dominated strategies can only survive on solution trajectories that do not converge to rest points. To construct games in which dominated strategies can survive, one should first look for games in which convergence rarely occurs.

The hypnodisk game, the starting point for the proof of the main theorem, is a population game with nonlinear payoff functions. Such games were uncommon in the early literature on evolution in games, which focused on random matching settings. But population games with nonlinear payoffs are more common now, in part because of their appearance in applications. For example, the standard model of driver behavior in a highway network is a congestion game with nonlinear payoff functions, as delays on each network link are increasing, convex functions of the number of drivers using the link. 13 For this reason, we do not view the use of a game with nonlinear payoffs as a shortcoming of our analysis. But despite this, it seems worth asking whether our results could be proved within the linear, random matching framework. In Section 3, where we considered dynamics with prespecified functional forms, we were able to prove survival results within the linear setting. More generally, if we fix an evolutionary dynamic before seeking a population game, finding a linear game that exhibits cycling seems a feasible task.
Still, a virtue of our analysis in Section 4 is that it avoids this case-by-case analysis: the hypnodisk game generates cycling under all of the relevant dynamics simultaneously, enabling us to prove survival of dominated strategies under all of these dynamics at once. Could we have done the same using linear payoffs? Consider the following game due

13 Congestion games with a continuum of agents are studied by Beckmann et al. (1956) and Sandholm (2001). For finite player congestion games, see Rosenthal (1973) and Monderer and Shapley (1996).

to Hofbauer and Swinkels (1996) (see also Hofbauer and Sigmund (1998, Sec. 8.6)):

F_ε(x) = A_ε x =
[  0   0  −1   ε ]
[  ε   0   0  −1 ]
[ −1   ε   0   0 ]
[  0  −1   ε   0 ] x.

When ε = 0, the game F₀ is a potential game with potential function f(x) = −(x₁x₃ + x₂x₄). It has two components of Nash equilibria: one is a singleton containing the completely mixed equilibrium x* = (1/4, 1/4, 1/4, 1/4); the other is the closed curve γ containing edges e₁e₂, e₂e₃, e₃e₄, and e₄e₁. The former component is a saddle point of f, and so is unstable under dynamics that satisfy (PC); the latter component is the maximizer set of f, and so attracts most solutions of these dynamics. If ε is positive but sufficiently small, Theorem A.1 implies that most solutions of dynamics satisfying (PC) lead to an attractor near γ. But once ε is positive, the unique Nash equilibrium of F_ε is the mixed equilibrium x*. Therefore, the attractor near γ is far from any Nash equilibrium. If we now introduce a feeble twin, we expect that this dominated strategy would survive in the resulting five-strategy game. But in this case, evolutionary dynamics run on a four-dimensional state space. Proving survival results when the dimension of the state space exceeds three is very difficult, even if we fix the dynamic under consideration in advance. This points to another advantage of the hypnodisk game: it allows us to work with dynamics on a three-dimensional state space, where the analysis is still tractable.

5.2 How Dominated Can Surviving Strategies Be?

Since the dynamics we consider are nonlinear, our proofs of survival of dominated strategies are topological in nature, and so do not quantify the level of domination that is consistent with a dominated strategy maintaining a significant presence in the population. We can provide a sense of this magnitude by way of numerical analysis.
Our analysis considers the behavior of the BNN and Smith dynamics in the following version of bad RPS with a feeble twin:

(7)  F^d(x) = Ã^d x =
[   0     −2     1    1  ]
[   1      0    −2   −2  ]
[  −2      1     0    0  ]
[ −2−d   1−d    −d   −d  ] x.
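A numerical experiment along these lines can be sketched as follows. This is an illustrative simulation, not the paper's own code: the domination level d = 0.1, the initial condition, and the Euler discretization are all assumptions, and it checks only that Twin keeps a nonnegligible presence under the Smith dynamic:

```python
import numpy as np

d = 0.1
A = np.array([[ 0.,    -2.,    1.,  1.],
              [ 1.,     0.,   -2., -2.],
              [-2.,     1.,    0.,  0.],
              [-2 - d,  1 - d, -d, -d]])   # game (7)

def smith(x):
    """Smith mean dynamic, rho_ij = [pi_j - pi_i]_+."""
    pi = A @ x
    rho = np.maximum(pi[None, :] - pi[:, None], 0.0)
    return x @ rho - x * rho.sum(axis=1)

x = np.array([0.4, 0.3, 0.2, 0.1])
h, twin_path = 0.01, []
for _ in range(30000):                    # Euler integration to time 300
    x = x + h * smith(x)
    twin_path.append(x[3])

assert min(x) >= 0 and abs(x.sum() - 1) < 1e-6   # trajectory stays in X
assert max(twin_path[-8000:]) > 0.05             # Twin is played recurrently
```

The final assertion checks that, late in the run, the weight on the strictly dominated strategy Twin repeatedly rises well above zero, in line with the qualitative message of this section.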

Figure 9(i) presents the maximum, time-average, and minimum weight on the dominated strategy in the limit cycle of the BNN dynamic, with these weights being presented as functions of the domination level d. The figure shows that until the dominated strategy Twin is eliminated, its presence declines at a roughly linear rate in d. Twin is played recurrently by at least 10% of the population when d ≤ .14, by at least 5% of the population when d ≤ .19, and by at least 1% of the population when d ≤ .22. Figure 9(ii) shows that under the Smith dynamic, the decay in the use of the dominated strategy is much more gradual. In this case, Twin is recurrently played by at least 10% of the population when d ≤ .31, by at least 5% of the population when d ≤ .47, and by at least 1% of the population when d ≤ .66. These values of d are surprisingly large relative to the base payoff values of 0, −2, and 1: even strategies that are dominated by a significant margin can be played in perpetuity under common evolutionary dynamics.

The reason for the difference between the two dynamics is easy to explain. As we saw in Section 3, the BNN dynamic describes the behavior of agents who compare a candidate strategy's payoff with the average payoff in the population. For its part, the Smith dynamic is based on comparisons between the candidate strategy's payoff and an agent's current payoff. The latter specification makes it relatively easy for agents obtaining a low payoff from Paper or Rock to switch to the dominated strategy Twin.

5.3 Exact and Hybrid Imitative Dynamics

An important class of dynamics excluded by our results are imitative dynamics, a class that includes the replicator dynamic as its best-known example. In general, imitative dynamics are derived from revision protocols of the form ρ_ij(π, x) = x_j r_ij(π, x). The x_j term reflects the fact that when an agent receives a revision opportunity, he selects an opponent at random, and then decides whether or not to imitate this opponent's strategy.
Substituting ρ into equation (1), we see that imitative dynamics take the simple form

(8)  ẋ_i = x_i Σ_{j∈S} x_j ( r_ji(F(x), x) − r_ij(F(x), x) ) ≡ x_i p_i(F(x), x).

In other words, each strategy's absolute growth rate ẋ_i is proportional to its level of utilization x_i.
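As a sketch of equation (8), the choice r_ij(π, x) = [π_j − π_i]_+ (pairwise proportional imitation, one standard example; the game and parameters below are assumed for illustration) reduces (8) to the replicator dynamic, under which a feeble twin in game (7) is driven to extinction:

```python
import numpy as np

d = 0.1
A = np.array([[ 0.,    -2.,    1.,  1.],
              [ 1.,     0.,   -2., -2.],
              [-2.,     1.,    0.,  0.],
              [-2 - d,  1 - d, -d, -d]])   # game (7)

def imitative(x):
    """Equation (8) with r_ij = [pi_j - pi_i]_+."""
    pi = A @ x
    r = np.maximum(pi[None, :] - pi[:, None], 0.0)   # r[i, j]
    return x * ((r.T - r) @ x)

def replicator(x):
    pi = A @ x
    return x * (pi - x @ pi)

x = np.array([0.4, 0.3, 0.2, 0.1])
assert np.allclose(imitative(x), replicator(x))      # (8) is the replicator here

for _ in range(40000):                               # Euler run to time 400
    x = x + 0.01 * replicator(x)
assert x[3] < 1e-3                                   # Twin is eliminated
```

This contrasts with the non-imitative dynamics studied above: since r_ji − r_ij = π_i − π_j, the bracketed term in (8) becomes the excess of strategy i's payoff over the population average, and the dominated twin's share decays relative to its twin at exponential rate d.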