Industrial Organization Lecture 3: Game Theory


Industrial Organization Lecture 3: Game Theory
Nicolas Schutz (slides 1-43)

Introduction

Why game theory? In the introductory lecture, we defined Industrial Organization as the economics of imperfect competition. Imperfect competition usually involves strategic interactions: a firm's action (price choice, quantity produced, R&D, advertising expenditures, ...) has strong effects on its rivals' profits.

Can we predict anything about an industry in which firms interact strategically? Under perfect competition, which assumes away strategic interactions, we have a very useful equilibrium concept: the Walrasian equilibrium. Can we find a solution concept for settings with strategic interactions? Game theory, which can be defined as the study of multiperson decision problems, was designed to provide such solution concepts.

Introduction

Roadmap:
- Definition of a game: normal form.
- A first solution concept: equilibrium in dominant strategies.
- A less stringent solution concept: Nash equilibrium.
- Examples and interpretation.
- Extensive form games.
- Criticism of the Nash equilibrium concept in multistage games.
- The subgame-perfect equilibrium for multistage games.

Normal Form Game

Definition. A normal form game is described by:
1. A set of players, I = {1, 2, ..., N} (N ≥ 1).
2. For each player i, an action set A_i. Let a = (a_1, a_2, ..., a_N) ∈ Π_{j=1}^N A_j be a list of the actions chosen by each player. We say that a is an outcome of the game, or an action profile.
3. For each player i, a payoff function π_i : a ∈ Π_{j=1}^N A_j ↦ π_i(a) ∈ [0, ∞).

Normal Form Game

Example: price competition with differentiated products.
- Two players: firm 1 and firm 2.
- Action set of firm i: p_i ∈ [0, ∞) (the set of non-negative prices).
- Example of an outcome / action profile: p_1 = 3, p_2 = 4.
- Payoff function: π_i(p_i, p_j) = (p_i − c_i) q_i(p_i, p_j), where q_i(p_i, p_j) denotes firm i's demand at prices p_i and p_j.
- Usual assumption: ∂q_i/∂p_i < 0 and ∂q_i/∂p_j > 0.

We will analyze this model more thoroughly in Lecture 7.

Normal Form Game

Example (cont'd): the (well-known) prisoners' dilemma.
- Two players: prisoner 1 and prisoner 2. Both are suspected of having (jointly) committed a crime.
- Actions of prisoner i: either Defect (D), i.e., testify against prisoner j, or Cooperate (C), i.e., remain silent. So A_1 = A_2 = {C, D}.
- Examples of action profiles: (C, D), (D, D), etc.
- Payoff function (utility function) of player 1:
  u_1(a_1, a_2) = −1 if a_1 = a_2 = C; 0 if a_1 = D and a_2 = C; −10 if a_1 = C and a_2 = D; −6 if a_1 = a_2 = D,
  plus a symmetric payoff function for player 2.

Normal Form Game

Matrix representation of 2-player finite games. The prisoners' dilemma game can be conveniently summarized as follows:

         C           D
  C   (−1, −1)   (−10, 0)
  D   (0, −10)   (−6, −6)

Conventions:
- Row player is player 1; column player is player 2.
- The first number in parentheses is the row player's payoff; the second one is the column player's payoff.

Remark: the prisoners' dilemma game will have many applications in IO. Example: price competition with homogeneous products.

Equilibrium in Dominant Strategies

Now that we have defined a game, we would like to make predictions about what players will actually do. Put differently, we need a solution concept (or an equilibrium concept).

Start with a few definitions. Let a ∈ Π_{j=1}^N A_j. For i ∈ {1, ..., N}, denote a_{−i} = (a_1, a_2, ..., a_{i−1}, a_{i+1}, ..., a_N). Now, we can rewrite outcome a as a = (a_i, a_{−i}).

In normal form games, a strategy for player i can be defined easily: it is just an action a_i ∈ A_i. Things will get more complicated when we consider extensive form games.

Definition. ã_i ∈ A_i is a dominant strategy for player i if:
  ∀ a_{−i} ∈ Π_{j≠i} A_j, ∀ a_i ∈ A_i,  π_i(ã_i, a_{−i}) ≥ π_i(a_i, a_{−i}).

Equilibrium in Dominant Strategies

Example: remember the prisoners' dilemma:

         C           D
  C   (−1, −1)   (−10, 0)
  D   (0, −10)   (−6, −6)

Whatever player 2 does, player 1 is always better off defecting. D is therefore a dominant strategy for player 1. By the same token, D is also a dominant strategy for player 2.

Now, we can define our first equilibrium concept.

Definition. A strategy profile ã = (ã_1, ã_2, ..., ã_N) ∈ Π_{j=1}^N A_j is an equilibrium in dominant strategies if ã_i is a dominant strategy for each player i. Formally,
  ∀ i ∈ {1, 2, ..., N}, ∀ a_{−i} ∈ Π_{j≠i} A_j, ∀ a_i ∈ A_i,  π_i(ã_i, a_{−i}) ≥ π_i(a_i, a_{−i}).

Equilibrium in Dominant Strategies

Example: consider once again the prisoners' dilemma:

         C           D
  C   (−1, −1)   (−10, 0)
  D   (0, −10)   (−6, −6)

We already know that D is a dominant strategy for both players. C is definitely NOT a dominant strategy. Conclusion: the only equilibrium in dominant strategies is (D, D).
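The dominance check is mechanical enough to automate. Here is a minimal sketch (not from the slides; the payoff table encoding and function names are my own) that verifies by brute force that D is the unique dominant strategy for both prisoners:

```python
# Hypothetical sketch: brute-force dominance check for the prisoners' dilemma.
# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2).
payoffs = {
    ("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
    ("D", "C"): (0, -10), ("D", "D"): (-6, -6),
}
actions = ["C", "D"]

def is_dominant(player, a_tilde):
    """a_tilde is dominant for `player` (0 or 1) if it does at least as well
    as every other own action, against every opponent action."""
    for a_opp in actions:
        for a in actions:
            if player == 0:
                if payoffs[(a_tilde, a_opp)][0] < payoffs[(a, a_opp)][0]:
                    return False
            else:
                if payoffs[(a_opp, a_tilde)][1] < payoffs[(a_opp, a)][1]:
                    return False
    return True

dominant = {p: [a for a in actions if is_dominant(p, a)] for p in (0, 1)}
print(dominant)  # D is the unique dominant strategy for both players
```

The same function, run on the Battle of the Sexes payoffs, would return an empty list for both players, which is the non-existence problem discussed below.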

Equilibrium in Dominant Strategies

So our solution concept predicts that both players will betray. Several remarks:
- There is no reason to expect decentralized decision-making to lead to a Pareto optimum here. (−6, −6) is Pareto-dominated by (−1, −1). Actually, from a utilitarian perspective, (−6, −6) is the worst outcome.
- If players could somehow communicate and commit not to defect before the game starts, they would definitely do it.
- Some people (including some textbook writers, gasp) say that each player chooses to defect because he is afraid that the other player will defect as well. Do NOT say that. This is wrong: D is optimal whatever the other player does.
- In Lecture 6, we will see that, if players play the prisoners' dilemma repeatedly, then they may be able to sustain cooperation in equilibrium.

Equilibrium in Dominant Strategies

This equilibrium concept is rather simple and intuitive: in the prisoners' dilemma, it would be stupid to play C. But a dominant strategy equilibrium often fails to exist. An example (among many others): the Battle of the Sexes game:

            Opera     Football
  Opera    (2, 1)     (0, 0)
  Football (0, 0)     (1, 2)

Consider the Row player:
- If Column plays Opera, then Row is (strictly) better off playing Opera as well.
- If Column plays Football, then Row is (strictly) better off playing Football as well.

Neither Row nor Column has a dominant strategy. There is no dominant strategy equilibrium in the Battle of the Sexes game.

Nash Equilibrium

So our solution concept does not have any predictive power in some games. This is embarrassing. To go beyond this, we need to weaken the restrictions imposed by our equilibrium concept. This is what John Nash did in 1951:

Definition. A strategy profile ã = (ã_1, ã_2, ..., ã_N) ∈ Π_{j=1}^N A_j is a Nash equilibrium if
  ∀ i ∈ {1, 2, ..., N}, ∀ a_i ∈ A_i,  π_i(ã_i, ã_{−i}) ≥ π_i(a_i, ã_{−i}).

In words, ã is a Nash equilibrium if no player has an incentive to deviate from this strategy profile.

Nash Equilibrium

How do you calculate the Nash equilibria of a game? There are two main methods. If the game is rather simple (say, a 2 × 2 game or, more generally, a game with few players and actions), then you can just check all the possibilities. Again, consider the prisoners' dilemma:

         C           D
  C   (−1, −1)   (−10, 0)
  D   (0, −10)   (−6, −6)

- (C, C) is not a Nash equilibrium because player 1 (or 2) wants to defect.
- (D, C) is not a Nash equilibrium because player 2 wants to defect.
- (C, D) is not a Nash equilibrium because player 1 wants to defect.
- (D, D) is a Nash equilibrium because nobody wants to start cooperating.
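The "check all possibilities" method can be written out directly for any finite two-player game. A hypothetical sketch (encoding and names are my own), applied to the prisoners' dilemma:

```python
# Hypothetical sketch: exhaustive Nash-equilibrium search in a finite game.
payoffs = {
    ("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
    ("D", "C"): (0, -10), ("D", "D"): (-6, -6),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    # Nash: no unilateral deviation improves a player's payoff.
    best1 = all(payoffs[(a1, a2)][0] >= payoffs[(d, a2)][0] for d in actions)
    best2 = all(payoffs[(a1, a2)][1] >= payoffs[(a1, d)][1] for d in actions)
    return best1 and best2

nash = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(nash)  # [('D', 'D')]
```

The search confirms the slide's case-by-case argument: only (D, D) survives.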

Nash Equilibrium

If the game is more complicated, then you should rather conduct a best-response analysis. To do so, let i ∈ {1, ..., N} and a_{−i} ∈ Π_{j≠i} A_j. Define
  BR_i(a_{−i}) = arg max_{a_i} π_i(a_i, a_{−i}).

BR_i : Π_{j≠i} A_j → P(A_i) is player i's best-response correspondence (NB: P(A_i) is the set of all subsets of A_i). BR_i(.) is a correspondence (and not a function), because it is not necessarily single-valued: several values of a_i may solve the maximization problem max_{a_i} π_i(a_i, a_{−i}). (Don't worry too much about that; it won't really matter in this course.)

Now, we can rewrite the definition of a Nash equilibrium as follows: a strategy profile ã = (ã_1, ã_2, ..., ã_N) ∈ Π_{j=1}^N A_j is a Nash equilibrium if
  ∀ i, ã_i ∈ BR_i(ã_{−i}).

In words, each player chooses a strategy which is a best response to the other players' (equilibrium) strategies.

Nash Equilibrium

To conduct a best-response analysis and solve for the Nash equilibria of a game, just do the following: for each player i, compute the best-response correspondence BR_i(.). Then, look for the intersection of these best responses. This gives you all the Nash equilibria.

Illustration: consider the price competition with differentiated products model, and let's make a couple of additional assumptions:
- c_1 = c_2 = 0.
- q_i = 1 − 2p_i + p_j.

So the profit function of firm i ∈ {1, 2} is just π_i(p_1, p_2) = p_i(1 − 2p_i + p_j). How do you compute BR_i(p_j)? By definition, BR_i(p_j) = arg max_{p_i} {p_i(1 − 2p_i + p_j)}. Notice that the objective function is concave (∂²π_i/∂p_i² < 0), so we can use the first-order condition:
  1 − 4p_i + p_j = 0.
Solving for p_i, we get p_i = BR_i(p_j) = (1 + p_j)/4.

Nash Equilibrium

In this particular example, the best-response correspondence is single-valued. (p̂_1, p̂_2) is a Nash equilibrium if and only if:
- p̂_1 ∈ BR_1(p̂_2), and
- p̂_2 ∈ BR_2(p̂_1).

Therefore, we have to solve the following system of equations (to get the intersection of best responses):
  p̂_1 = (1 + p̂_2)/4,
  p̂_2 = (1 + p̂_1)/4.

We get p̂_1 = p̂_2 = 1/3. Therefore, (1/3, 1/3) is the only Nash equilibrium of this price competition game.
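Instead of solving the system by hand, one can iterate the best responses until prices stop moving; a fixed point of the system is exactly a Nash equilibrium. A minimal sketch under the slide's assumptions (c_i = 0, q_i = 1 − 2p_i + p_j; the iteration scheme is my own illustration, not the slides' method):

```python
# Hypothetical sketch: best-response iteration for the differentiated-products
# price game. BR_i(p_j) = (1 + p_j)/4 was derived from the first-order condition.

def br(p_other):
    return (1 + p_other) / 4

p1, p2 = 0.0, 0.0
for _ in range(100):             # the map is a contraction, so this converges fast
    p1, p2 = br(p2), br(p1)      # both firms best-respond simultaneously

print(p1, p2)  # both converge to the Nash equilibrium 1/3
```

Because each best response has slope 1/4 < 1, the iteration contracts toward the unique intersection (1/3, 1/3), matching the algebra above.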

Nash Equilibrium

Two comments:
1. An equilibrium in dominant strategies is always a Nash equilibrium. Why? A dominant strategy is a best response to any strategy profile. However, a Nash equilibrium is NOT always an equilibrium in dominant strategies.
2. When looking for a Nash equilibrium, only consider unilateral deviations from the strategy profile.

Nash Equilibrium

How should we think about the Nash equilibrium concept? What are the foundations for this concept?

Introspective foundation: I am rational and I know the other players are rational. The other players know I'm rational, and know that I know they're rational, and so on. Since I'm rational, I will do my best to maximize my utility given other players' actions. I know other players will just do the same. At the end of the day, we should be playing a strategy profile such that nobody wants to deviate.

This is not always convincing:
- Sometimes, Nash equilibria are very complicated to calculate, even for a well-trained game theorist.
- What if there are several Nash equilibria? How can people coordinate on one of them?

Nash Equilibrium

Example of multiple Nash equilibria: consider the Battle of the Sexes game:

            Opera     Football
  Opera    (2, 1)     (0, 0)
  Football (0, 0)     (1, 2)

Two Nash equilibria: (Opera, Opera) and (Football, Football). The introspection story does not tell us how players can coordinate on one of these equilibria.

More on multiple equilibria: in some games, one of the Nash equilibria stands out, maybe because it is Pareto-dominant, or because it acts as a focal point.

Nash Equilibrium

The learning / evolution foundation: under some mild conditions, it can be shown that evolutionary / learning processes usually converge to a Nash equilibrium. So even if people are not that rational, at some point they should end up playing a Nash equilibrium. History matters: depending on where you start (i.e., which actions you play initially), you may converge to different Nash equilibria.

Nash Equilibrium

Do Nash equilibria always exist? No. Example: Rock, Paper, Scissors:

             Rock      Paper     Scissors
  Rock      (0, 0)    (−1, 1)    (1, −1)
  Paper     (1, −1)    (0, 0)    (−1, 1)
  Scissors  (−1, 1)   (1, −1)     (0, 0)

There is no Nash equilibrium (in pure strategies). However, it is possible to extend the Nash equilibrium concept to mixed strategies (in which players can randomize over the set of actions). With mixed strategies, a Nash equilibrium exists under very weak conditions. We won't use mixed strategies in this course.
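The non-existence claim is easy to verify exhaustively. A hypothetical sketch (encoding is my own) that searches all nine pure profiles of Rock-Paper-Scissors and finds none is an equilibrium:

```python
# Hypothetical sketch: verifying Rock-Paper-Scissors has no pure-strategy
# Nash equilibrium by exhaustive search over the 3 x 3 profiles.
actions = ["R", "P", "S"]
beats = {("R", "S"), ("P", "R"), ("S", "P")}   # (winner, loser) pairs

def u1(a1, a2):
    # Zero-sum payoff to player 1; player 2 gets -u1.
    if a1 == a2:
        return 0
    return 1 if (a1, a2) in beats else -1

def is_nash(a1, a2):
    best1 = all(u1(a1, a2) >= u1(d, a2) for d in actions)
    best2 = all(-u1(a1, a2) >= -u1(a1, d) for d in actions)
    return best1 and best2

nash = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(nash)  # [] -- no pure-strategy Nash equilibrium
```

Against any fixed action, the opponent always has a winning deviation, which is exactly why the search comes back empty.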

Nash Equilibrium

Just for fun: the Split or Steal game:

           Split         Steal
  Split  (1/2, 1/2)     (0, 1)
  Steal    (1, 0)       (0, 0)

Extensive Form Games

Before I give you the formal definition, let's start with an example: the terrorist-pilot game. A terrorist on board orders the pilot to fly to Cuba; the pilot chooses between CUBA and NYC, and the terrorist, having observed the destination, chooses whether to bomb (B) or not bomb (NB) the plane. We will analyze a similar game in Lecture 10, when we consider entry deterrence strategies.

Extensive Form Games

Another example: the Stackelberg price competition game. The timing is the following:
1. Firm 1 (the Stackelberg leader) sets its price p_1.
2. Firm 2 observes price p_1, and sets price p_2.

Once prices have been set, demands are realized, and firms earn π_i(p_1, p_2) = (p_i − c_i) q_i(p_1, p_2).

So, an intuitive definition: an extensive form game is a game that can be represented by a game tree.

Extensive Form Games

A more formal definition:

Definition. An extensive form game is:
1. A game tree containing a starting node, other decision nodes, terminal nodes, and branches linking nodes.
2. A list of players {1, 2, ..., N}.
3. For each decision node, the name of the player(s) entitled to choose an action.
4. For each player i, a specification of i's action set at each node where player i is entitled to choose an action.
5. A specification of the payoff to each player at each terminal node.

Extensive Form Games

Now, we can define a strategy in an extensive form game:

Definition. A strategy for player i is a complete plan of actions: one action for each decision node where the player is entitled to choose an action.

Remark: in normal form games, a strategy was just an action. Here, you need a complete plan of actions.

Examples:
- The pilot-terrorist game. A strategy for player 1: NYC. A strategy for player 2: s_2(CUBA) = NB, s_2(NYC) = B (in short: (NB, B)).
- The Stackelberg game. A strategy for firm 1: s_1 = 2. A strategy for firm 2: a function s_2 : s_1 ∈ [0, ∞) ↦ s_2(s_1) ∈ [0, ∞).

Extensive Form Games

It is very convenient to represent multistage games with their extensive form. However, these games also have a normal form. This will be helpful to extend the definition of Nash equilibrium to such games.

Examples:

The pilot-terrorist game:

          (B,B)     (B,NB)    (NB,B)    (NB,NB)
  CUBA   (−1,−1)   (−1,−1)    (1,1)     (1,1)
  NYC    (−1,−1)    (2,0)    (−1,−1)    (2,0)

The Stackelberg game:
- Players: {1, 2}.
- Strategies: firm 1 chooses p_1 ∈ [0, ∞); firm 2 chooses a function p_2 : p_1 ∈ [0, ∞) ↦ p_2(p_1) ∈ [0, ∞).
- Payoffs: π_1(p_1, p_2(.)) = (p_1 − c_1) q_1(p_1, p_2(p_1)); π_2(p_1, p_2(.)) = (p_2(p_1) − c_2) q_2(p_1, p_2(p_1)).

Extensive Form Games: Nash Equilibrium

Now, we can just extend our definition of a Nash equilibrium to these somewhat more complicated games. In words, a profile of strategies is a Nash equilibrium iff no player has an incentive to deviate from its strategy.

To begin with, let's look for Nash equilibria in the pilot-terrorist game:

          (B,B)     (B,NB)    (NB,B)    (NB,NB)
  CUBA   (−1,−1)   (−1,−1)    (1,1)     (1,1)
  NYC    (−1,−1)    (2,0)    (−1,−1)    (2,0)

There are three Nash equilibria:
- (NYC, (B,NB)).
- (NYC, (NB,NB)).
- (CUBA, (NB,B)).

One of these equilibria looks very suspicious.

Extensive Form Games: Nash Equilibrium

Consider the (CUBA, (NB,B)) equilibrium. A rational pilot should be able to make the following reasoning: "If I fly to NYC anyway, then it is in the terrorist's interest to choose NB instead of B." But, since (CUBA, (NB,B)) is a Nash equilibrium, this intuitive reasoning is not captured by the Nash equilibrium concept.

Another way to put this: sometimes, some Nash equilibria can be sustained thanks to non-credible threats. For multistage games, we need an equilibrium concept which rules out those non-credible threats.

Extensive Form Games: Nash Equilibrium

Forget about the Nash equilibrium concept (for this slide only), and let's try to solve the pilot-terrorist game in a more intuitive way. What should the pilot think?
- "If I fly to Cuba, then the terrorist can either bomb the plane and get payoff −1, or not bomb and get 1. I know the terrorist is rational, so he will definitely not bomb the plane. Therefore, in this branch of the game tree, I expect to get payoff 1."
- "If I fly to NYC, then the terrorist can either bomb the plane and get payoff −1, or not bomb and get 0. Again, the terrorist is rational, so he will definitely not bomb the plane. Therefore, in this branch of the game tree, I expect to get payoff 2."
- "With this in mind, I choose between going to Cuba and getting 1, and going to NYC and getting 2."

At the end of the day, the pilot chooses NYC, and the bomb does not explode. Final payoffs are therefore (2, 0).
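The pilot's reasoning can be mechanized on the game tree. A minimal sketch (the tree encoding is my own; payoffs are taken from the normal form on the earlier slide, listed as (pilot, terrorist)):

```python
# Hypothetical sketch: solving the pilot-terrorist game by working backward
# from the terrorist's decision nodes to the pilot's initial choice.
tree = {
    "CUBA": {"B": (-1, -1), "NB": (1, 1)},
    "NYC":  {"B": (-1, -1), "NB": (2, 0)},
}

# Step 1: at each of his decision nodes, the terrorist picks his best action.
terrorist_plan = {
    dest: max(moves, key=lambda m: moves[m][1])
    for dest, moves in tree.items()
}

# Step 2: anticipating this plan, the pilot picks his preferred destination.
pilot_choice = max(tree, key=lambda d: tree[d][terrorist_plan[d]][0])

outcome = tree[pilot_choice][terrorist_plan[pilot_choice]]
print(pilot_choice, terrorist_plan, outcome)  # NYC, NB at both nodes, (2, 0)
```

Note that the solver returns the terrorist's full plan, (NB, NB), not just his action on the path of play; that distinction is exactly what makes a strategy a complete plan of actions.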

Extensive Form Games: Backward Induction

This kind of intuitive reasoning is called backward induction:
- Start by solving for optimal decisions at the terminal decision nodes, and derive the implied payoffs.
- Go one step back and, again, solve for optimal decisions, anticipating that players will behave optimally in subsequent nodes. Derive the implied payoffs.
- Iterate until you reach the initial node.

This thought process is not contained in the Nash equilibrium concept.

Backward induction in the Stackelberg game (with linear demands): consider firm 1: "If I set p_1 ≥ 0, what will firm 2 do? Firm 2 is rational, so it should maximize its profit, i.e., firm 2 should solve max_{p_2} p_2(1 − 2p_2 + p_1). We've already solved this problem: firm 2 should play p_2 = BR_2(p_1) = (1 + p_1)/4."

Conclusion: firm 2 has only one strategy consistent with backward induction:
  p_2 : p_1 ∈ [0, ∞) ↦ (1 + p_1)/4.

Extensive Form Games: Backward Induction

Being rational, firm 1 should anticipate this behavior: "If I play p_1, then I know firm 2 will play (1 + p_1)/4. So I anticipate payoff
  p_1 (1 − 2p_1 + (1 + p_1)/4) = (p_1/4)(5 − 7p_1)."
Maximizing this payoff function with respect to p_1, we get p_1 = 5/14.

Conclusion: the only strategies consistent with backward induction are:
- Firm 1: p_1 = 5/14.
- Firm 2: p_2 : p_1 ∈ [0, ∞) ↦ (1 + p_1)/4.

This generates the following outcome: p_1 = 5/14 and p_2 = (1 + 5/14)/4 = 19/56.

Notice that this profile of strategies is also a Nash equilibrium, since:
- If firm 1 plays 5/14, then firm 2 wants to play BR_2(5/14) = (1 + 5/14)/4. So strategy p_2 : p_1 ∈ [0, ∞) ↦ (1 + p_1)/4 is a best response to strategy p_1 = 5/14.
- Conversely, if firm 2 plays p_2 : p_1 ∈ [0, ∞) ↦ (1 + p_1)/4, then we have just shown that firm 1 wants to play p_1 = 5/14.

Extensive Form Games: Backward Induction

Let us build a weird Nash equilibrium in the Stackelberg game. Denote by p_2^m firm 2's monopoly price, and consider the following strategy profile:
- p_1 = +∞ (in words, firm 1 exits the market).
- p_2(p_1) = 0 if p_1 ≠ +∞, and p_2(p_1) = p_2^m if p_1 = +∞.

Again, this Nash equilibrium relies on a non-credible threat: "I'll flood the market with my products if you don't exit."

Extensive Form Games: Backward Induction

Bottom line:
- Nash generates weird equilibria in some simple multistage games.
- Backward induction seems to eliminate these equilibria.
- A backward induction equilibrium is always a Nash equilibrium; a Nash equilibrium may not be consistent with backward induction. So backward induction allows us to refine the Nash equilibrium concept.

Notice however that we have introduced backward induction within a somewhat restrictive class of games. Now, we would like to extend this concept to games in which:
- players may take actions simultaneously at some decision nodes,
- there may be an infinite number of periods.

Subgame-Perfect Equilibrium

Definition. A subgame is a decision node from the original game, along with the decision nodes and terminal nodes directly following this node. A subgame is called a proper subgame if it differs from the original game.

Example 1: there are exactly two proper subgames in the pilot-terrorist game (one starting after CUBA, one starting after NYC).

Subgame-Perfect Equilibrium

Example 2: there is an infinity of proper subgames (indexed by p_1 ≥ 0) in the Stackelberg game. Let p_1 ≥ 0:
- Two players: firms 1 and 2.
- Firm 1 has no action left to take; firm 2 chooses p_2 ∈ [0, ∞).
- Payoffs: π_i = (p_i − c_i) q_i(p_1, p_2).

Now, we can define our equilibrium concept:

Definition. A profile of strategies is a subgame-perfect equilibrium (SPE) if it induces a Nash equilibrium in every subgame of the original game.

Remarks:
- Every SPE is a Nash equilibrium: SPE is a refinement of Nash.
- In the simple games analyzed so far, SPE is just backward induction.

Subgame-Perfect Equilibrium

Remarks (cont'd):
- (NYC, (NB, NB)) is the only SPE of the pilot-terrorist game.
- (p_1 = 5/14, p_2(p_1) = (1 + p_1)/4) is the only SPE of the Stackelberg game.
- We will often say things like "(p_1 = 5/14, p_2 = 19/56) is the only SPE of the Stackelberg game". Keep in mind that this is not rigorous (an SPE is a strategy profile, and a strategy is a complete plan of actions).

Subgame-Perfect Equilibrium

Assume that the game has a finite number of periods. Then, even if some players move simultaneously at some nodes, we can still use backward induction to solve the game:
- The trick is to start with the smallest subgames and to solve for a Nash equilibrium in these subgames.
- Then, in the extensive form, replace these smallest subgames by the players' payoffs at the Nash equilibrium you've just calculated.
- Iterate until there are no subgames left.

Subgame-Perfect Equilibrium

Let us work out a simple example to see how this works. Example: the entry game.

Subgame-Perfect Equilibrium

In subgame II, there is a unique Nash equilibrium: (S, S). This equilibrium generates payoffs (2, 2). We can plug this into the extensive form. We can conclude that the only SPE of our entry game is (E, S), (E, S). Are there other Nash equilibria?

Subgame-Perfect Equilibrium

Remarks:
- Some games may have several Nash equilibria (or SPEs) in some of their subgames. In this case, solving for SPE using backward induction becomes trickier. We won't deal with such games in this course.
- Caution: if the game has an infinite number of periods, then you cannot use backward induction to find its SPEs. We won't talk about such games until Lecture 6.

Summary

Normal form game: players, actions, payoffs.
- Solution concept: Nash equilibrium.
- How to look for Nash equilibria? Trial and error (for simple games); best-response analysis (for more complex ones).

Extensive form game: a game tree.
- Solution concept: subgame-perfect equilibrium. Allows us to get rid of Nash equilibria which rely on non-credible threats.
- How to look for SPEs? Finite number of periods: backward induction. Otherwise, wait until Lecture 6.