
Game Theory
Giorgio Fagiolo
giorgio.fagiolo@univr.it
https://mail.sssup.it/ fagiolo/welcome.html
Academic Year 2005-2006
University of Verona

Summary
1. Why Game Theory?
2. Cooperative vs. Noncooperative Games
3. Description of a Game
4. Rationality and Information Structure
5. Simultaneous-Move (SM) vs. Dynamic Games
6. Analysis
7. Examples
8. Problems and Suggested Solutions

Analysis

Main Question: What outcome should we expect to observe in a game played by fully rational players with perfect recall and common knowledge about the structure of the game?

Answer: It depends on whether:
- agents know the other players' payoffs;
- rationality is common knowledge;
- players' conjectures about each other's play are required to be mutually correct.

Notation:
$s = \{s_i, s_{-i}\} = \{s_i, (s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_N)\}$
$S = S_1 \times S_2 \times \cdots \times S_N$
$S_{-i} = S_1 \times S_2 \times \cdots \times S_{i-1} \times S_{i+1} \times \cdots \times S_N$

Pure vs. Mixed Strategies

Level-1 Rationality

Assume H1 only: players are rational and know the structure of the game.

Definition 1 (Strictly dominant strategy). A strategy $s_i \in S_i$ is a strictly dominant strategy for player $i$ in game $\Gamma_N$ if for all $s_i' \neq s_i$ and all $s_{-i} \in S_{-i}$: $\pi_i(s_i, s_{-i}) > \pi_i(s_i', s_{-i})$.

Definition 2 (Strictly dominated strategies). A strategy $s_i \in S_i$ is strictly dominated (SD) for player $i$ in game $\Gamma_N$ if there exists another strategy $s_i' \in S_i$ such that for all $s_{-i} \in S_{-i}$: $\pi_i(s_i', s_{-i}) > \pi_i(s_i, s_{-i})$.

A rational player satisfying H1 will:
- play a strictly dominant strategy if one exists;
- not play a strictly dominated strategy.

Problem: the outcome of the game is far from being unique.
- It is rare that a strictly dominant strategy exists.
- Many strategies survive the deletion of strictly dominated strategies.

Level-1 Rationality: Examples

A strictly dominant strategy in the Prisoner's Dilemma (payoff pairs (π1, π2); Player 1 chooses rows, Player 2 chooses columns):

             DC           C
    DC   (-2, -2)    (-10, -1)
    C    (-1, -10)   (-5, -5)

Deletion of a strictly dominated strategy (D for Player 1):

             L            R
    U    (+1, -1)    (-1, +1)
    M    (-1, +1)    (+1, -1)
    D    (-2, +5)    (-3, +2)

No strict dominance in Matching Pennies:

             H            T
    H    (-1, +1)    (+1, -1)
    T    (+1, -1)    (-1, +1)
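To make the dominance check concrete, here is a minimal Python sketch (my addition, not part of the original slides; function and variable names are illustrative) that tests whether a pure strategy of the row player is strictly dominated by another of her pure strategies. The payoff numbers reproduce the row player's payoffs from the Prisoner's Dilemma table above.

```python
# Pure-strategy strict dominance check for the row player (illustrative sketch).
# payoffs[r][c] is the row player's payoff when she plays r and the opponent plays c.

def strictly_dominates(payoffs, r1, r2):
    """True if row strategy r1 strictly dominates row strategy r2."""
    return all(payoffs[r1][c] > payoffs[r2][c] for c in range(len(payoffs[r1])))

def strictly_dominated_rows(payoffs):
    """Indices of row strategies strictly dominated by some other row strategy."""
    n = len(payoffs)
    return [r for r in range(n)
            if any(strictly_dominates(payoffs, other, r) for other in range(n) if other != r)]

# Prisoner's Dilemma, row player's payoffs (rows: DC, C; columns: DC, C)
pd_row = [[-2, -10],
          [-1,  -5]]
print(strictly_dominated_rows(pd_row))  # [0] -> DC is strictly dominated by C
```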

Level-2 Rationality (1/3)

Assume that H1, H2 and H3 hold: players know the others' payoffs and know that all players are rational.
- Players know that others will not play strictly dominated strategies.
- Deletion of strictly dominated strategies can therefore be iterated.
- Unique predictions can sometimes be reached.

Example (payoff pairs (π1, π2)):

             DC           C
    DC   (0, -2)     (-10, -1)
    C    (-1, -10)   (-5, -5)

- C is no longer a dominant strategy for Player 1.
- C is still a dominant strategy for Player 2.
- Player 1 knows that Player 2 will not play DC and can eliminate it.
- Now Player 1 knows that Player 2 will play C, to which C is Player 1's best response.

Level-2 Rationality (2/3)

- Iterated deletion of strictly dominated strategies (IDSDS) does not depend on the order of deletion.
- When N = 2, the set of strategies that survive IDSDS is the prediction of the game.
- When N > 2, assumptions H1-H3 allow one to delete even more outcomes.

Definition 3 (Best Response). In game $\Gamma_N$, a strategy $s_i \in S_i$ is a best response for player $i$ to $s_{-i}$ if for all $s_i' \in S_i$: $\pi_i(s_i, s_{-i}) \geq \pi_i(s_i', s_{-i})$.

A strategy $s_i$ is never a best response if there is no $s_{-i}$ for which $s_i$ is a best response.

A strictly dominated strategy is never a best response, but there may exist strategies that are never a best response and yet are not strictly dominated. Iterated elimination of strategies that are never a best response leads to the set of rationalizable strategies, which is in general smaller than the set of strategies surviving IDSDS. Rationalizable strategies are those one can expect to occur in a game played by rational agents for whom H1-H3 hold true.
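As a concrete illustration of IDSDS (a sketch I am adding, not from the original slides; names such as `idsds` are my own), the following function repeatedly removes pure strategies of either player that are strictly dominated by another surviving pure strategy in a two-player game given as a pair of payoff matrices. Applied to the modified Prisoner's Dilemma above, only (C, C) survives.

```python
# Iterated deletion of strictly dominated strategies (IDSDS) for a 2-player game.
# A[r][c]: row player's payoff, B[r][c]: column player's payoff.

def idsds(A, B):
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        # Delete rows strictly dominated by another surviving row.
        for r in rows[:]:
            if any(all(A[r2][c] > A[r][c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Delete columns strictly dominated by another surviving column.
        for c in cols[:]:
            if any(all(B[r][c2] > B[r][c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Modified Prisoner's Dilemma from the previous slide (strategies: DC, C).
A = [[0, -10], [-1, -5]]   # Player 1's payoffs
B = [[-2, -1], [-10, -5]]  # Player 2's payoffs
print(idsds(A, B))  # ([1], [1]) -> only (C, C) survives
```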

Level-2 Rationality (3/3)

Example (payoff pairs (π1, π2); Player 1 chooses rows, Player 2 chooses columns):

            l         m         n
    U    (5, 3)    (0, 4)    (3, 5)
    M    (4, 0)    (5, 5)    (4, 0)
    D    (3, 5)    (0, 4)    (5, 3)

Best-response strategies:
- Player 1: U if $s_2 = l$; M if $s_2 = m$; D if $s_2 = n$.
- Player 2: n if $s_1 = U$; m if $s_1 = M$; l if $s_1 = D$.

Thus:
- No strategy is never a best response.
- Iterated elimination cannot be applied. (Hint: with 2 players, a strategy is never a best response if and only if it is strictly dominated.)
- All strategies can be rationalized under H1-H3 through a chain of justifications. Example for (U, l): J1 = {P1 justifies U by the belief that P2 will play l, which can be justified if P1 thinks that P2 believes that P1 plays D}. J2 = {P1 justifies J1 by thinking that P2 thinks that P1 believes that P2 plays n}... and so on!
- Beliefs can be mutually wrong! H1-H3 do not require them to be mutually consistent!

Level-3 Rationality (1/3)

Assume that H1-H3 hold and that players are mutually correct in their beliefs.

Definition 4 (Nash Equilibrium (NE)). A strategy profile $(s_1^*, \ldots, s_N^*)$ is a Nash equilibrium for the game $\Gamma_N$ if for every $i \in I$ and for all $s_i \in S_i$: $\pi_i(s_i^*, s_{-i}^*) \geq \pi_i(s_i, s_{-i}^*)$, i.e. if each player's strategy is a best response to the strategies actually played by the others.

Examples (payoff pairs (π1, π2)):

            l         m         n
    U    (5, 3)    (0, 4)    (3, 5)
    M    (4, 0)    (5, 5)    (4, 0)
    D    (3, 5)    (0, 4)    (5, 3)

             DC           C
    DC   (-2, -2)    (-10, -1)
    C    (-1, -10)   (-5, -5)

In the first game the Nash equilibrium is (M, m), with payoffs (5, 5); in the Prisoner's Dilemma it is (C, C), with payoffs (-5, -5).
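The definition suggests a direct brute-force check for pure-strategy Nash equilibria in a two-player game. The sketch below (my addition, with illustrative names) marks a cell as an equilibrium when neither player can gain by a unilateral deviation, and recovers the equilibrium (M, m) of the 3 x 3 game above.

```python
# Enumerate pure-strategy Nash equilibria of a 2-player game.
# A[r][c]: row player's payoff, B[r][c]: column player's payoff.

def pure_nash_equilibria(A, B):
    eqs = []
    for r in range(len(A)):
        for c in range(len(A[0])):
            row_best = all(A[r][c] >= A[r2][c] for r2 in range(len(A)))
            col_best = all(B[r][c] >= B[r][c2] for c2 in range(len(A[0])))
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

# 3x3 example above (rows U, M, D; columns l, m, n)
A = [[5, 0, 3], [4, 5, 4], [3, 0, 5]]  # Player 1
B = [[3, 4, 5], [0, 5, 0], [5, 4, 3]]  # Player 2
print(pure_nash_equilibria(A, B))  # [(1, 1)] -> (M, m)
```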

Level-3 Rationality (2/3)

Three crucial questions:
- Why should players play a NE?
- Does a NE always exist?
- When a NE exists, is it always unique?

Why should players' beliefs be mutually correct? It is not a consequence of rationality.
- NE as obvious ways to play the game
- NE as pre-play commitments
- NE as social conventions
No reasons to expect players to play a NE are given by the rules of the game. The theory needs to be complemented with something else...

Level-3 Rationality (3/3)

Does a NE always exist? No, even in the simplest games...

             H            T
    H    (-1, +1)    (+1, -1)
    T    (+1, -1)    (-1, +1)

Existence is guaranteed in the space of mixed strategies for games with finitely many pure strategies. Proof: see MWG, pp. 250-253 (hemicontinuous correspondences, Kakutani's fixed point theorem).

When a NE exists, is it always unique?

            A         B
    A    (1, 1)    (0, 0)
    B    (0, 0)    (2, 2)

- Uniqueness and efficiency are not guaranteed.
- When multiple NE arise, no selection principle is given.
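For intuition about the mixed-strategy existence result, here is a small sketch (added by me, not in the slides; names are illustrative) that computes the mixed equilibrium of a 2 x 2 game from the opponents' indifference conditions, assuming an interior (fully mixed) equilibrium exists. Applied to Matching Pennies it returns the familiar (1/2, 1/2).

```python
# Mixed-strategy NE of a 2x2 game via the indifference conditions,
# assuming an interior equilibrium exists (both players fully mix).
# A[r][c]: row player's payoff, B[r][c]: column player's payoff.

def interior_mixed_equilibrium(A, B):
    # q = prob. that the column player plays her first column,
    # chosen so that the row player is indifferent between rows:
    #   q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p = prob. that the row player plays her first row,
    # chosen so that the column player is indifferent between columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching Pennies
A = [[-1, 1], [1, -1]]
B = [[1, -1], [-1, 1]]
print(interior_mixed_equilibrium(A, B))  # (0.5, 0.5)
```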

Summary
1. Why Game Theory?
2. Cooperative vs. Noncooperative Games
3. Description of a Game
4. Rationality and Information Structure
5. Simultaneous-Move (SM) vs. Dynamic Games
6. Analysis
7. Examples
8. Problems and Suggested Solutions

Some Examples

Focus on 2-player, 2 × 2 symmetric games (π_i = π for all i ∈ I).
- Adding (heterogeneous) players and strategies only complicates the framework.
- Games become more difficult to solve.
- No new intuitions: the problems are always existence and uniqueness.

General payoff matrix (rows: own action, columns: opponent's action) and its normalized form:

            +1   -1                      +1   -1
    +1       a    b               +1      1    0
    -1       c    d               -1      α    β

where a ≥ d, a > b, and
α = (c − b)/(a − b),   β = (d − b)/(a − b),
so that α ∈ ℝ and β ≤ 1.

Case I: Prisoner's Dilemma Games

Call +1 = Cooperate and -1 = Defect. Using the general payoff matrix (a, b; c, d) and its normalized form (1, 0; α, β) introduced above:

PD: c > a > d > b, i.e. in normalized form 0 < β < 1 and α > 1.
- +1 is strictly dominated by -1.
- (-1, -1) is the unique (and inefficient) Nash equilibrium.

Case II: Coordination Games (CG)

Using the general payoff matrix (a, b; c, d) and its normalized form (1, 0; α, β):

CG: a > c and d > b, i.e. in normalized form 0 < β ≤ 1 and α < 1.
- There are two pure-strategy NE: (+1, +1) and (-1, -1).
- β < 1: (+1, +1) is Pareto-efficient, (-1, -1) is Pareto-inferior.
- β = 1: (+1, +1) and (-1, -1) are Pareto-equivalent.
- A pure-coordination game arises when α = 0.

Case IIa: CG with Strategic Complementarities

A 2-person, 2 × 2 symmetric game is a game with strategic complementarities if the expected payoff from playing +1 (resp. -1) is increasing in the probability that the opponent plays +1 (resp. -1).

Applications: technological adoption, technological spillovers, etc.

A 2-person, 2 × 2 symmetric game has strategic complementarities if it is a coordination game and, in addition:
SC: a > b and d > c.
SC games are therefore characterized by:
SC: -1 ≤ α ≤ 1, 0 ≤ β ≤ 1 and -1 ≤ α − β < 0.

Example (normalized payoffs):

            +1    -1
    +1     1.0   0.0
    -1     0.5   0.8
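To see why the SC condition reduces to a > b and d > c (a short derivation I am adding, not on the original slide), let p denote the probability that the opponent plays +1:

E[π(+1)] = p·a + (1 − p)·b, which is increasing in p if and only if a > b;
E[π(-1)] = p·c + (1 − p)·d, which is increasing in (1 − p) if and only if d > c.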

Case IIb: Stag-Hunt Coordination Games

A 2-person, 2 × 2 symmetric game is called a stag-hunt game if (i) (+1, +1) is Pareto dominant (a > d); and (ii) c > d.
In normalized form:
SH: -1 ≤ α ≤ 1, 0 ≤ β ≤ 1 and 0 ≤ α − β < 1.

Example (normalized payoffs):

            +1    -1
    +1     1.0   0.0
    -1     0.5   0.3

Applications: pre-play commitments. Suppose that player I commits in pre-play talk to play the efficient strategy +1. Would he be credible? No. Why? Because player I cannot credibly communicate this intention to player II, as it is always in player I's interest to convince II to play +1. Indeed, even if player I is cheating, he always gets a larger payoff when player II plays +1, since c > d. Hence, by convincing II to play +1, player I always gains, so the pre-play commitment is not credible. That is why pre-play communication does not ensure efficiency in a SH game. Conversely, in SC games the pre-play commitment is credible and can ensure efficiency.

Case III: Hawk-Dove Games

A 2-person, 2 × 2 symmetric game is called a hawk-dove game if:
HD: c > a > b > d, i.e. in normalized form α > 1 and β < 0.

Interpretation: strategy +1 is "dove" and strategy -1 is "hawk". There is a common resource of 2 units. If two +1 meet, they share it equally and get a payoff of 1 each. If a +1 and a -1 meet, then +1 gets 0 and -1 gets all 2 units. If two -1 meet, not only do they destroy the resource, but they each get a negative payoff of -1.

            +1   -1
    +1       1    0
    -1       2   -1

There are two pure-strategy NE: (+1, -1) and (-1, +1).

Case IV: Efficient Dominant Strategy Games

PD games have an inefficient dominant strategy. EDS games instead have an efficient dominant strategy:
EDS: a > c and b > d, i.e. in normalized form α < 1 and β < 0.

            +1   -1
    +1       1    0
    -1       α    β

- -1 is strictly dominated by +1.
- (+1, +1) is the unique (and efficient) NE.

Classification of 2-person 2 × 2 Symmetric Games
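The original slides present this classification as a diagram in the (α, β) plane, which is not reproduced here. As a substitute, the following sketch (my own, with illustrative names, assuming a > b so the normalization is well defined) normalizes a symmetric payoff matrix (a, b, c, d) and applies the conditions from the previous slides to label the game.

```python
# Classify a 2x2 symmetric game from its payoffs (a, b, c, d), where
# a = pi(+1,+1), b = pi(+1,-1), c = pi(-1,+1), d = pi(-1,-1) and a > b.

def classify(a, b, c, d):
    alpha = (c - b) / (a - b)
    beta = (d - b) / (a - b)
    if 0 < beta < 1 and alpha > 1:
        kind = "Prisoner's Dilemma"
    elif alpha > 1 and beta < 0:
        kind = "Hawk-Dove"
    elif alpha < 1 and beta < 0:
        kind = "Efficient Dominant Strategy"
    elif alpha < 1 and 0 < beta <= 1:
        kind = "Coordination Game"
        if alpha - beta < 0:
            kind += " (strategic complementarities)"
        elif 0 < alpha - beta < 1:
            kind += " (stag hunt)"
    else:
        kind = "borderline / unclassified"
    return alpha, beta, kind

print(classify(-2, -10, -1, -5))     # Prisoner's Dilemma
print(classify(1, 0, 2, -1))         # Hawk-Dove
print(classify(1.0, 0.0, 0.5, 0.8))  # Coordination (strategic complementarities)
```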

Summary
1. Why Game Theory?
2. Cooperative vs. Noncooperative Games
3. Description of a Game
4. Rationality and Information Structure
5. Simultaneous-Move (SM) vs. Dynamic Games
6. Analysis
7. Examples
8. Problems and Suggested Solutions

I. Incomplete Information (1/2)

What happens if information is incomplete (Nature moves first) and therefore imperfect (information sets are not singletons)?

Example: Harsanyi setup
1. Nature chooses a type for each player: $(\theta_1, \ldots, \theta_N) \in \Theta = \Theta_1 \times \cdots \times \Theta_N$.
2. The joint probability distribution $F(\theta_1, \ldots, \theta_N)$ is common knowledge.
3. Each player can only observe his own type $\theta_i \in \Theta_i$.
4. Payoffs to player $i$ are $\pi_i(s_i, s_{-i}; \theta_i)$.
5. A pure strategy for $i$ is a decision rule $s_i(\theta_i)$.
6. The set of pure strategies for player $i$ is the set $R_i$ of all possible decision rules (i.e. functions $s_i(\theta_i)$).
7. Player $i$'s expected payoff is $\tilde{\pi}_i(s_1(\cdot), \ldots, s_N(\cdot)) = E_\theta[\pi_i(s_1(\theta_1), \ldots, s_N(\theta_N); \theta_i)]$.

I. Incomplete Information (2/2)

Bayesian Nash Equilibrium

Definition 5. A Bayesian Nash Equilibrium (BNE) for the game $[I, \{S_i\}, \{\pi_i(\cdot)\}, \Theta, F(\cdot)]$ is a profile of decision rules $(s_1(\cdot), \ldots, s_N(\cdot))$ that is a NE for the game $[I, \{R_i\}, \{\tilde{\pi}_i(\cdot)\}]$.

- The amount of information and the computational abilities required are huge!
- In a BNE each player must play a best response to the conditional distribution of his opponents' strategies for each type $\theta_i$ he can have.
- Each player is effectively split into a (possibly infinite) number of identities, each associated with an element of $\Theta_i$.
- It is as if the game were populated by a (possibly infinite) number of players.
- Only very simple games can be handled. Multiple players? Multiple Nature moves?
- Commitment to full rationality and complete-graph interactions becomes too stringent.
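To make Definition 5 concrete, here is a brute-force sketch (my own toy example, not from the slides; the game, payoffs and names are illustrative) for a game in which only player 1 has private information, with two equally likely types. It enumerates player 1's decision rules, computes the ex-ante expected payoffs $\tilde{\pi}_i$, and checks the mutual best-response condition of the BNE.

```python
# Brute-force Bayesian Nash equilibrium for a toy 2-player game in which only
# player 1 has private information (two equally likely types).
from itertools import product

types = ["A", "B"]                  # player 1's possible types
prior = {"A": 0.5, "B": 0.5}
acts1, acts2 = ["U", "D"], ["L", "R"]

def u1(a1, a2, t):
    # Type A prefers U, type B prefers D, regardless of player 2's action.
    return 2 if (t == "A" and a1 == "U") or (t == "B" and a1 == "D") else 0

def u2(a1, a2, t):
    # Player 2 wants to match player 1's action: L against U, R against D.
    return {("U", "L"): 3, ("U", "R"): 0, ("D", "L"): 0, ("D", "R"): 2}[(a1, a2)]

def expected(rule1, a2, u):
    # Ex-ante expected payoff, averaging over player 1's types (Harsanyi setup).
    return sum(prior[t] * u(rule1[t], a2, t) for t in types)

rules1 = [dict(zip(types, r)) for r in product(acts1, repeat=len(types))]
bne = [(r1, a2) for r1 in rules1 for a2 in acts2
       if all(expected(r1, a2, u1) >= expected(r, a2, u1) for r in rules1)
       and all(expected(r1, a2, u2) >= expected(r1, a, u2) for a in acts2)]
print(bne)  # [({'A': 'U', 'B': 'D'}, 'L')]
```

Even in this tiny example player 1 is split into two "identities" (one per type), which is exactly why the approach scales so badly.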

II. Equilibrium Selection (1/3)

The theory is completely silent on:
- why players should play a NE;
- what happens when a NE does not exist;
- what happens when more than one NE exists (e.g., efficiency issues).

Complementing NE theory with equilibrium refinements:
- Additional criteria that (may) help in solving the equilibrium selection issue.
- Similar to tâtonnement in general equilibrium theory.
- So many refinement theories exist that the issue has shifted from equilibrium selection to the selection of equilibrium refinements...
- Each refinement theory can be justified in an ad hoc way.

Example: Trembling-Hand Perfection (MWG, Section 8.F)
- Consider the mixed extension of the game, $\Gamma_N = [I, \{\Delta(S_i)\}, \{\pi_i\}]$.
- The mixed-strategy set $\Delta(S_i)$ means that players choose a probability distribution over $S_i$, i.e. play a strategy $\sigma_i$ in the K-dimensional simplex.
- A pure strategy is a vertex of the simplex.
- The boundary of the simplex corresponds to playing some pure strategy with zero probability.

II. Equilibrium Selection (2/3)

Example (cont'd)

Problem: some pure-strategy Nash equilibria are the result of an "excess of rationality": they would not arise if players took into account the possibility of small deviations from equilibrium play. Typical case: NE involving weakly dominated strategies (row player's payoffs in a symmetric game):

            A    B
    A       4    3
    B       0    3

Question: what if we force players to play every pure strategy with a small but positive probability (to make "mistakes")? Players must then choose a strategy in the interior of the simplex.

More formally: define ε_i(s_i) > 0 for all s_i ∈ S_i and i ∈ I, and allow players to choose only mixed strategies such that the probability that player i plays the pure strategy s_i is at least ε_i(s_i). Define a perturbed game $\Gamma_{N,\epsilon}$ as the original game $\Gamma_N$ under a particular choice of lower bounds ε_i(s_i) for all s_i ∈ S_i and i ∈ I. A mixed-strategy NE σ of the original game is a Trembling-Hand Perfect (THP) NE if there exists a sequence of perturbed games converging to the original game (as the mistake sizes go to zero) whose associated NE stay arbitrarily close to σ.

II. Equilibrium Selection (3/3)

- A THP NE always exists if the game has a finite number of pure strategies.
- Main result: if σ is a THP Nash equilibrium, then it does not involve playing weakly dominated strategies.
- If we decide not to accept equilibria that involve weakly dominated strategies, and we are dealing with games with a finite number of pure strategies, we are sure that at least one THP NE exists.

Problems:
- Some important games have an infinite (continuous) set of pure strategies, e.g. Bertrand oligopoly games.
- It is difficult to find THP NE when the game becomes more complicated (multiple players, incomplete information, etc.).
- Again: commitment to full rationality and complete-graph interactions becomes too stringent.
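Connecting to the main result above, a quick way to screen candidate equilibria is to list which pure strategies are weakly dominated. The sketch below (my addition, with illustrative names) does this for the row player and flags strategy B in the small example from the previous slide.

```python
# List weakly dominated pure strategies of the row player.
# payoffs[r][c] is the row player's payoff for row r against column c.

def weakly_dominates(payoffs, r1, r2):
    """r1 weakly dominates r2: never worse, strictly better against some column."""
    cols = range(len(payoffs[0]))
    return (all(payoffs[r1][c] >= payoffs[r2][c] for c in cols)
            and any(payoffs[r1][c] > payoffs[r2][c] for c in cols))

def weakly_dominated_rows(payoffs):
    n = len(payoffs)
    return [r for r in range(n)
            if any(weakly_dominates(payoffs, other, r) for other in range(n) if other != r)]

# Row player's payoffs from the previous slide (rows/columns: A, B)
payoffs = [[4, 3],
           [0, 3]]
print(weakly_dominated_rows(payoffs))  # [1] -> strategy B is weakly dominated by A
```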

Concluding Remarks (1/2)

- The theory of simultaneous-move games is very useful for answering very (too?) simple questions ("low fat modeling").
- The restrictions on analytical treatment imposed by rationality requirements and by the interaction structure (everyone interacts with everyone else) often become too stringent.
- Games very easily become intractable and/or generate vacuous implications when:
  - there are many heterogeneous players;
  - information is incomplete and/or imperfect;
  - the interaction structure is not a complete graph;
  - moves are not simultaneous: dynamic games and "anything can happen" kinds of results.
- We need to go beyond the full-rationality paradigm, the representative-individual philosophy and these peculiar interaction structures.
- Relevant literature: evolutionary game theory and beyond.

Concluding Remarks (2/2)

An alternative class of models:
- Agents: $i \in I = \{1, 2, \ldots, N\}$
- Actions: $a_i \in A_i = \{a_{i1}, \ldots, a_{iK}\}$
- Time: $t = 0, 1, 2, \ldots$
- Dynamics:
  - At each $t$, (some or all) agents play a game $\Gamma$ with the players in $V_i$.
  - The interaction sets $V_i \subseteq I$ define the interaction structure.
  - Agents are boundedly rational and adaptive: they form expectations by observing the actions played in the past by their opponents.
  - Time-$t$ payoffs to agent $i$ are given by some function $w_t(a; a_j^{t-1}, j \in V_i)$, where $a \in A_i$.
  - Players update their current action at each $t$, e.g. by choosing a myopic best response to the observed configuration: $a_i^t \in \arg\max_{a \in A_i} w_t(a; a_j^{t-1}, j \in V_i)$.
- We look for absorbing states or statistical equilibria of the (Markov) process governing the evolution of the action configuration $a^t = (a_1^t, \ldots, a_N^t)$ (or of some statistics thereof) as $t \to \infty$ (a simulation sketch is given below).
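As a minimal illustration of this class of models (my own sketch, not from the slides), the following code simulates myopic best-response dynamics for the 2 × 2 coordination game A/B used earlier, on a ring interaction structure where each agent observes only its two neighbours' previous actions. The ring topology, the parameter values and all names are illustrative choices; the loop stops early if an absorbing state is reached.

```python
# Myopic best-response dynamics on a ring for the coordination game
# with payoffs (A,A)=1, (B,B)=2, miscoordination=0.
import random

N, T = 20, 50
payoff = {("A", "A"): 1, ("B", "B"): 2, ("A", "B"): 0, ("B", "A"): 0}

def neighbours(i):
    return [(i - 1) % N, (i + 1) % N]   # ring interaction structure V_i

def best_response(i, state):
    # Myopic BR: maximize total payoff against neighbours' actions at t-1.
    scores = {a: sum(payoff[(a, state[j])] for j in neighbours(i)) for a in ("A", "B")}
    return max(scores, key=scores.get)

random.seed(0)
state = [random.choice(["A", "B"]) for _ in range(N)]
for t in range(T):
    new_state = [best_response(i, state) for i in range(N)]  # simultaneous updating
    if new_state == state:   # absorbing state: nobody wants to switch
        break
    state = new_state
print("".join(state))
```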