Complexity in social dynamics: from the micro to the macro. Lecture 4. Franco Bagnoli. Namur, 7-18/4/2008



Outline
1. Evolutionary models
2. Fitness landscapes
3. Game theory
4. Iterated games (Prisoner's dilemma)
5. Finite populations

Evolutionary dynamics The word evolution is traditionally related to biological systems. Actually, it was introduced in the economic context (Darwin was inspired by Malthus). Evolutionary theories changed in the 70s, when game theory was introduced (e.g., by Maynard Smith). Game theory (von Neumann, Nash) was born in politics and economics. Iterated games (Axelrod) were a subject of sociology. Now evolutionary theories (memes) are coming back to psychology; and indeed, brains are shaped by evolution. Evolutionary computation (e.g., genetic algorithms) provides common tools for optimization problems.

Evolution Evolution is composed of three ingredients: reproduction, selection and mutations (or recombination). We can think of a finite population of N individuals. Each one carries genetic information (genotype g) that determines its actions (phenotype f). Selection may act on survival probability or on reproduction efficiency. It is a function of the phenotype and depends on the composition of the whole population. Individuals that survive or reproduce better tend to replace others with their copies. However, during reproduction or lifetime, mutations can appear. Moreover, sexual reproduction can mix genotypes. It is possible to have evolution without selection (neutral evolution).

Selection (fitness) Let us assume that the probability of survival A depends on the phenotype, A = A(f). In general, it depends also on the distribution of the other phenotypes in the network of contacts (hares do not like to stay near lynxes; the latter hold the opposite opinion). The probability of passing (and spreading) the genotype to the following generation depends also on the available space, which may be considered a global coupling.

Fitness is a relative concept: Red Queen "Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else if you run very fast for a long time, as we've been doing." "A slow sort of country!" said the Red Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!" One's performance A(g) must always be compared with the average one ⟨A⟩.

Implementation Fixed-size population, nonoverlapping generations, stirred (mean field). An individual is represented as a string of L bits g_1, ..., g_L. The environment E is an array of N individuals. For each generation, one has to fill up another environment E': choose an individual i at random; compute A = A(E(i), E) (it may depend on the other phenotypes); with probability proportional to A, copy it to E', with mutations; repeat until E' is full; then replace E with E'. A may be an arbitrary positive quantity; the survival probability is computed as A/⟨A⟩.
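The generation loop above can be sketched in a few lines of Python. This is a minimal illustration, not the original code: the bit-counting fitness function and all parameter values are made-up placeholders.

```python
import random

random.seed(1)           # reproducible run
L, N, MU = 8, 50, 0.01   # genome length, population size, per-bit mutation rate

def fitness(g):
    # toy fitness: number of 1-bits, shifted to stay positive
    return sum(g) + 1

def next_generation(E):
    """Fill a new environment E' by copying individuals with probability
    proportional to A (i.e. A/<A>), applying mutations along the way."""
    weights = [fitness(g) for g in E]
    new_E = []
    while len(new_E) < N:
        g = random.choices(E, weights=weights)[0]              # pick i prop. to A
        new_E.append([b ^ (random.random() < MU) for b in g])  # mutate bits
    return new_E

E = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(100):
    E = next_generation(E)
```

Since copying is fitness-proportional, the population mean fitness drifts toward the landscape maximum, as described in the following slides.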

Fitness (or adaptive) landscape If the fitness A = A(g) does not depend on the distribution of phenotypes, one can visualize A as a surface. Mutations correspond to small or large jumps in the genotypic space (g). Selection acts either on the probability of survival or on the efficiency of reproduction. If selection is (inversely) proportional to fitness, evolution is like a search for the global maximum of A. There is a strong correspondence between genotype and configuration, fitness and energy, mutations and temperature.

Evolution on a fitness landscape For vanishing mutations, the average fitness ⟨A⟩ is a nondecreasing function of time (Fisher's theorem), and the asymptotic population distribution is a delta peak at the global maximum of the fitness (master sequence). For finite mutations, the master sequence is surrounded by a cloud of mutants with lower fitness (quasispecies). Broader peaks may win over sharper, slightly higher ones. Coexistence is fragile; in any case, coexisting strains have the same average fitness (Gause's principle). The portions of the genome subject to higher selective pressure are less mutable than neutral ones.

Quasispecies In equilibrium (infinite population, static fitness landscape, no mutations) there is just one strain (Fisher's theorem). Mutations widen the distribution (quasispecies) and lower its average fitness. [Figure: population and static fitness at t = 2000, for mutation rates (a) mu = 0.01 and (b) mu = 0.31.]

Infinite population In the case of an infinite population, one can use the probability distribution p(g) = lim_{N→∞} (1/N) Σ_i [g_i = g]. The evolution equation is, for discrete time intervals,

p(g, t+1) = (A(g)/⟨A⟩) p(g, t) + mutations,

or, for continuous time,

∂p(g, t)/∂t = (A(g) − ⟨A⟩) p(g, t) + mutations.

Evolution is a kind of pattern formation in genotypic space.
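The discrete-time equation can be iterated numerically. In this sketch the Gaussian landscape and the nearest-neighbour mutation kernel on a periodic genotype lattice are assumptions chosen for illustration:

```python
import numpy as np

G = 64                        # genotypes on a periodic 1-D lattice
mu = 0.01                     # mutation rate towards each neighbour
x = np.linspace(-0.5, 0.5, G)
A = np.exp(-20 * x**2)        # assumed smooth static fitness landscape

p = np.full(G, 1.0 / G)       # uniform initial distribution
for _ in range(2000):
    p = A * p / (A * p).sum()                                     # selection: A(g) p(g) / <A>
    p = (1 - 2 * mu) * p + mu * (np.roll(p, 1) + np.roll(p, -1))  # mutation
```

The stationary distribution is a quasispecies cloud peaked at the fitness maximum; raising mu broadens the cloud, as in the figure of the previous slide.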

Coexistence Coexistence is possible, but it is very sensitive to the mutation rate. [Figure: population and static fitness for mu = 0.12 at (c) t = 500 and (d) t = 5000, and for mu = 0.14 at (e) t = 500 and (f) t = 5000.]

Genetic algorithms In order to maximize a function of discrete (Boolean) parameters, identify the parameters with a genetic code and the function with the fitness, then let a population evolve under selection and competition. If the only source of variability is mutation, this is essentially Monte Carlo sampling; by lowering the mutation rate (simulated annealing) one recovers the global maximum. One can also add recombination (sex) by exchanging pieces of genomes. It works on separable problems (but remember that sex was not invented to speed up evolution, rather to escape parasites and plagues).
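A bare-bones genetic algorithm along these lines, with one-point recombination added to mutation. The one-max fitness (count of 1-bits, a separable problem) and all parameter values are illustrative assumptions:

```python
import random

random.seed(2)
L, N, MU = 16, 60, 0.01  # bits, population size, per-bit mutation rate

def fitness(g):
    return sum(g)        # separable target: maximize the number of 1-bits

def step(pop):
    weights = [fitness(g) + 1 for g in pop]               # keep weights positive
    new = []
    for _ in range(N):
        a, b = random.choices(pop, weights=weights, k=2)  # two parents, prop. to fitness
        cut = random.randrange(1, L)                      # one-point crossover ("sex")
        child = a[:cut] + b[cut:]
        new.append([bit ^ (random.random() < MU) for bit in child])
    return new

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(80):
    pop = step(pop)
best = max(pop, key=fitness)
```

On this separable problem crossover helps, because good partial solutions on either side of the cut can be combined.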

Competition The fitness actually depends also on the population distribution:

∂p(g, t)/∂t = (A(g, p)/⟨A⟩) p(g) + mutations,

A(g, p) = exp( H_0(g) + Σ_{g'} H_1^(i)(g, g') p(g') + Σ_{g'} H_1^(e)(g, g') p(g') ).

One can include simple competition terms: intraspecies (H_1^(i)) and interspecies (H_1^(e)) contributions. Intraspecies competition broadens the distribution, favouring dispersion. Interspecies competition stabilizes coexistence. Predation induces competition between predators and between preys.

Competition Short-range competition stabilizes coexistence. [Figure: population, static fitness and dynamical fitness at times (a) t = 0, (b) t = 100, (c) t = 200 and (d) t = 500.]

Games Game theory has changed the way we look at evolution. We can think of a population of agents (automata) that interact (play the game). A strategy can be seen as a rule for an automaton, either deterministic (pure rules) or stochastic (mixed rules). The inputs to a strategy are the present and past states. The selection of the best strategy depends on the payoff (evolutionarily stable strategies). Individuals whose strategies give a higher payoff reproduce (and mutate).

Example: evolution of cooperation Interspecies interactions can be divided into predation (or parasitism), cooperation and competition. Predation may lead to interspecies competition (your enemy is my friend). Cooperation is strongly related to competition: there is no natural cooperation (genes are selfish). Nevertheless, cooperation is common: genes, insects, multicellular organisms, slime molds, even humans... So, what is the evolutionary basis of cooperation?

Payoff Suppose that there are only two states (cards 0 and 1, or cooperate (C) and defect (D)). Players play their cards at the same moment. The generic payoff table (for one of the participants) is:

payoff   C   D
  C      α   β
  D      δ   γ

Strategy A strategy can be given by a hard rule that states what to play next. Given a certain memory of past deals, a strategy is thus an automaton rule. Another possibility is to give probabilities (stochastic rule or mixed strategy). For memory equal to 1, a mixed strategy is just given by a probability p of playing C, and 1 − p of playing D.

Equilibrium Nash equilibrium: no player can increase his payoff by changing his strategy alone. There may be no Nash equilibrium for pure strategies, but there is always an equilibrium for mixed strategies (Nash's theorem). In a large population, an evolutionarily stable strategy (ESS) is a strategy that cannot be invaded by a single mutant. An unstable strategy may become an ESS, and even invade other strategies, in repeated games.

Prisoner's dilemma For instance, in the single-shot game with the payoff matrix

payoff   C   D
  C      3   0
  D      5   1

the Nash equilibrium corresponds to both players playing D (not collaborating), even though collaboration gives a higher payoff. In repeated games, it may be worth collaborating.
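With this matrix one can check mechanically that D is the best reply to either move, so (D, D) is the only pure-strategy Nash equilibrium even though mutual cooperation pays more. A small sketch using the numbers from the slide:

```python
# Rows: my move; columns: opponent's move (values from the slide).
payoff = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

def best_reply(opponent):
    return max('CD', key=lambda me: payoff[(me, opponent)])

print(best_reply('C'), best_reply('D'))  # → D D
```

Since each player's best reply is D regardless of the opponent, the dilemma: both end up with payoff 1 instead of the 3 each that mutual cooperation would give.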

Evolutionary games In evolutionary games, the payoff is related to fitness. One may think, for instance, that in a population two players are picked at random and play one or more deals (the number of deals, or the probability of ending the game, is an important parameter). By playing, individuals accumulate payoffs (which may be negative). After some tournament, couples of individuals (i and j) are chosen; one of them dies and the other duplicates, with mutations. The probability A(i) of surviving (fitness) is related to the payoffs E(i) and E(j) and to the temperature T:

A(i) = 1 / (1 + exp((E(j) − E(i))/T))
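This Fermi-like survival probability is easy to write down directly; a sketch, where T controls the selection intensity:

```python
import math

def survival(E_i, E_j, T):
    """Probability that individual i (payoff E_i) survives the pairwise
    comparison with j (payoff E_j) at temperature T."""
    return 1.0 / (1.0 + math.exp((E_j - E_i) / T))
```

At equal payoffs the outcome is a coin flip (probability 1/2); a large payoff advantage, or a small T, makes survival of the better scorer nearly certain, while a large T makes selection nearly neutral.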

Mean field analysis

payoff   C   D
  C      α   β
  D      δ   γ

Let us denote by p the fraction of the population playing C (and 1 − p the fraction playing D). If the payoff is related to fitness,

p' = [α p^2 + β p(1 − p)] / [α p^2 + β p(1 − p) + δ p(1 − p) + γ (1 − p)^2].

Neglecting fluctuations, the asymptotic state is given by the fixed points of the previous equation.
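Iterating this map shows the mean-field flow. With the Prisoner's-dilemma payoffs of the earlier slide (α = 3, β = 0, δ = 5, γ = 1, used here purely as a worked example), any initial p < 1 flows to the all-defectors fixed point p = 0:

```python
def step(p, alpha, beta, gamma, delta):
    """One mean-field update of the fraction p of cooperators."""
    num = alpha * p**2 + beta * p * (1 - p)
    den = num + delta * p * (1 - p) + gamma * (1 - p)**2
    return num / den

p = 0.9
for _ in range(200):
    p = step(p, alpha=3, beta=0, gamma=1, delta=5)
# p is now essentially 0: defection takes over
```

Both p = 0 and p = 1 are fixed points of the map, but for these payoffs only p = 0 is stable.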

Evolution on finite populations A society of cooperators has a higher fitness than a society of defectors, but it is in general susceptible to invasion by mutant defectors. Possible scenarios are:
(D) If the only stable point is p = 0, defectors always dominate.
(ESS) If there are two stable points, p = 0 and p = 1, but the latter has only a tiny basin, then cooperation is an evolutionarily stable strategy (ESS): a single mutant cannot invade, but multiple ones can, helped by fluctuations. Condition: α > β.
(RD) If the basin of p = 1 extends up to p_0 = 1/2, cooperation is risk dominant (RD) and is favoured by fluctuations. Condition: α + γ > β + δ.
(AD) If the basin extends up to p_0 = 2/3, cooperation is advantageous (AD); this can be shown to be related to stability with respect to the fixation of a single mutant (Kimura's theory). Condition: α + 2γ > β + 2δ.
(C) Finally, if the basin extends up to p_0 ≃ 1, cooperators always dominate (except for p_0 = 1).

Standard payoff

payoff    C    D
  C     b − c  −c
  D       b     0

Benefits (which may be long-term, like parental care) have to be greater than costs. However, defectors always win.

Kin selection Haldane once said: "I will jump into the river to save two brothers or eight cousins." If you cooperate with a relative that shares a fraction r of your genes, then your payoff (well, that of your genes) is augmented by a fraction r of the payoff of your opponent.

payoff        C            D
  C     (b − c)(1 + r)   br − c
  D         b − rc          0

For c/b < r, cooperation is ESS, RD and AD. Kin selection says that cooperation among relatives (cells in multicellular organisms, insects) derives from the selfishness of genes.

Direct reciprocity For a one-shot game the best strategy is to defect. But Axelrod discovered by computer experiments that for repeated games between two opponents it is best to cooperate and to forgive: tit-for-tat or similar strategies. In this case the parameter is the probability w of another encounter (the expected number of rounds is 1/(1 − w)).

payoff          C           D
  C     (b − c)/(1 − w)    −c
  D            b            0

ESS for c/b < w; RD for c/b < w/(2 − w); AD for c/b < w/(3 − 2w).
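The three thresholds can be checked with a small helper; the values c = 1, b = 5 and the choices of w below are made-up examples:

```python
def regimes(c, b, w):
    """Which direct-reciprocity conditions hold for cost c, benefit b
    and another-encounter probability w."""
    r = c / b
    return {'ESS': r < w,
            'RD':  r < w / (2 - w),
            'AD':  r < w / (3 - 2 * w)}

print(regimes(1, 5, 0.9))  # all three conditions hold
print(regimes(1, 5, 0.1))  # none holds: too few expected rounds
```

Note that for 0 < w ≤ 1 one has w/(3 − 2w) ≤ w/(2 − w) ≤ w, so AD implies RD, which implies ESS: the conditions are nested, with AD the most demanding.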

Reputation (indirect reciprocity) For humans, reputation is a valuable quantity. It is defined as the average cooperation-to-defection record, and it may be known with a probability q. If you know that the opponent is a defector, defect; otherwise cooperate.

payoff      C           D
  C       b − c     −c(1 − q)
  D     b(1 − q)        0

ESS for c/b < q; RD for c/b < q/(2 − q); AD for c/b < q/(3 − 2q).

Network reciprocity Human societies are structured networks. For a given connectivity k of a node, a cooperator pays a cost c and each of its neighbors receives a benefit b. With H = [(b − c)k − 2c] / [(k + 1)(k − 2)], the payoff table is

payoff     C       D
  C      b − c   H − c
  D      b − H     0

ESS, RD, AD for c/b < 1/k.

Group selection Group selection is based on the higher payoff of cooperation, but one has to find a mechanism that stabilizes it against defectors. The idea is that the society automatically splits into m groups of size n. Cooperators help only inside groups, and successful groups split more often.

payoff         C                 D
  C     (b − c)(n + m)    bm − c(m + n)
  D           bn                 0

ESS, RD, AD for c/b < m/(m + n).