Lecture Notes on Game Theory

Alfredo Di Tillio

PhD Microeconomics III, Spring 2009

January 9, 2008

Contents

1. Normal Form Games
   1.1. Basic Notations and Definitions
   1.2. Dominance and Best Response
   1.3. Iterated Dominance and Rationalizability
        1.3.1. Iterated Dominance
        1.3.2. Correlated Rationalizability
        1.3.3. An Equivalence Result
        1.3.4. Independent Rationalizability
   1.4. Examples
        1.4.1. Independent vs. Correlated Rationalizability
        1.4.2. Rationalizability in the Cournot Game

2. Equilibrium in Normal Form Games
   2.1. Nash Equilibrium
        2.1.1. Existence of Nash Equilibria
   2.2. Strictly Competitive Games
   2.3. Normal Form Refinements
        2.3.1. Trembling-Hand Perfect Equilibrium

        2.3.2. Proper Equilibrium
   2.4. Examples
        2.4.1. Trembling-Hand Perfection and Weakly Dominated Strategies
        2.4.2. Trembling-Hand Perfection and Properness

3. Extensive Form Games: Basics
   3.1. Formal Definition
   3.2. Normal Form Representation of an Extensive Form Game
        3.2.1. Mixed and Behavior Strategies: Kuhn's Theorem
        3.2.2. Continuation Strategies, Continuation Outcomes, and Continuation Payoffs
   3.3. Subgame Perfect Equilibrium
   3.4. Examples
        3.4.1. Finding Subgame Perfect Equilibria of Perfect Information Games
        3.4.2. Trembling Hand Perfect and Subgame Perfect

4. Extensive Form Games: Further Topics
   4.1. Sequential Equilibrium
   4.2. Alternating Offer Bargaining
        4.2.1. Finite Horizon
        4.2.2. Infinite Horizon
   4.3. Examples
        4.3.1. Inconsistent Assessments and One-Shot Deviations
        4.3.2. More on Out of Equilibrium Beliefs

5. Repeated Games
   5.1. Discounting
   5.2. Two Folk Theorems
   5.3. Examples
        5.3.1. Nash and Subgame Perfect Equilibria
        5.3.2. Repeated Prisoners' Dilemma

6. Games with Incomplete Information
   6.1. Bayesian Games

1. Normal Form Games

1.1. Basic Notations and Definitions

A game in normal form is a list ⟨N, (S_i, u_i)_{i∈N}⟩ comprising the following objects:

- a set of players N;
- for each player i, a set of strategies S_i (we define S := ×_{i∈N} S_i);
- for each player i, a payoff function u_i : S → R.

A game in normal form is finite if the set of players is finite and the set of strategies of each player is finite. For any finite set X, we write Δ(X) to denote the set of all probability distributions over X, that is, the set of all functions p : X → [0, 1] such that Σ_{x∈X} p(x) = 1.

Let ⟨N, (S_i, u_i)_{i∈N}⟩ be a finite game in normal form. A mixed strategy of player i in this game is an element of Σ_i := Δ(S_i). Assume without loss of generality (just re-labeling players) that N = {1, ..., I}, where I = |N|. The set Σ_1 × ··· × Σ_I will be abbreviated as Σ. Moreover, we will write S_{-i} and Σ_{-i} as abbreviations for S_1 × ··· × S_{i-1} × S_{i+1} × ··· × S_I and Σ_1 × ··· × Σ_{i-1} × Σ_{i+1} × ··· × Σ_I, respectively.

Remark 1. Note that Σ_{-i} is not the same as Δ(S_{-i}). An element of the latter set is a belief of player i over his opponents' strategies, i.e. a probability distribution over the set of all profiles of i's opponents' strategies, S_{-i}. An element of Σ_{-i} is instead a profile of mixed strategies, one for each of i's opponents.

Given any s_i ∈ S_i and s_{-i} = (s_1, ..., s_{i-1}, s_{i+1}, ..., s_I) ∈ S_{-i}, we will write (s_i, s_{-i}) as an abbreviation for (s_1, ..., s_{i-1}, s_i, s_{i+1}, ..., s_I). For all i, j ∈ N, all σ_i ∈ Σ_i, and all s_{-i} ∈ S_{-i}, we define

  u_j(σ_i, s_{-i}) := Σ_{s_i∈S_i} σ_i(s_i) u_j(s_i, s_{-i}).

Then, for all μ_i ∈ Δ(S_{-i}), we define

  u_j(σ_i, μ_i) := Σ_{s_{-i}∈S_{-i}} μ_i(s_{-i}) u_j(σ_i, s_{-i}).
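The two displayed formulas translate directly into code. The following is a minimal sketch in Python, written for the two-player case so that an opponent profile s_{-i} is just a single strategy of the opponent; the tiny payoff table is made up purely to show the calling convention.

```python
def u_mixed_vs_pure(u, sigma_i, s_minus_i):
    """u_j(sigma_i, s_{-i}): player i mixes according to sigma_i while the
    opponents' pure profile s_{-i} is held fixed."""
    return sum(p * u[(s_i, s_minus_i)] for s_i, p in sigma_i.items())

def u_mixed_vs_belief(u, sigma_i, mu_i):
    """u_j(sigma_i, mu_i): the opponents' profile is drawn from the belief mu_i."""
    return sum(q * u_mixed_vs_pure(u, sigma_i, s_minus_i)
               for s_minus_i, q in mu_i.items())

# Hypothetical 2x2 example: player 1's payoff table, a mixed strategy of
# player 1, and a belief of player 1 about player 2.
u1 = {('T', 'L'): 2, ('T', 'R'): 0, ('B', 'L'): 0, ('B', 'R'): 1}
sigma1 = {'T': 0.5, 'B': 0.5}
mu1 = {'L': 0.25, 'R': 0.75}
print(u_mixed_vs_belief(u1, sigma1, mu1))   # 0.25*1 + 0.75*0.5 = 0.625
```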

One particular case is when μ_i satisfies independence, that is, when there exist mixed strategies σ_{-i} = (σ_1, ..., σ_{i-1}, σ_{i+1}, ..., σ_I) ∈ Σ_{-i} such that

  μ_i(s_1, ..., s_{i-1}, s_{i+1}, ..., s_I) = σ_1(s_1) ··· σ_{i-1}(s_{i-1}) σ_{i+1}(s_{i+1}) ··· σ_I(s_I)

for all (s_1, ..., s_{i-1}, s_{i+1}, ..., s_I) ∈ S_{-i}. In this case, we define

  u_j(σ_1, ..., σ_{i-1}, σ_i, σ_{i+1}, ..., σ_I) := u_j(σ_i, μ_i),

which we may further abbreviate as u_j(σ_i, σ_{-i}). When μ_i can be written this way, we interpret this as saying that i believes his opponents choose their strategies independently.

1.2. Dominance and Best Response

A basic assumption of game theory is that each player maximizes his payoff, given his beliefs about what other players do. The outstanding questions are then: (a) What do we mean exactly by beliefs? (b) Are some beliefs more reasonable than others?

In some cases, a strategy of i does strictly better than another, regardless of the opponents' choices, hence regardless of how we answer these questions.

Definition 1 (Strict Dominance). A strategy s_i ∈ S_i is strictly dominated if there exists σ_i ∈ Σ_i such that u_i(σ_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}. In this case, we say σ_i strictly dominates s_i.

Note that, in some cases, a strategy is strictly dominated by a mixed strategy without being strictly dominated by a pure strategy (i.e. a degenerate mixed strategy).

Definition 2 (Weak Dominance). A strategy s_i ∈ S_i is weakly dominated if there exists σ_i ∈ Σ_i such that u_i(σ_i, s_{-i}) ≥ u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}, with strict inequality for some s_{-i}.

Definition 3 (Best Response). Let μ_i ∈ Δ(S_{-i}). A strategy s_i ∈ S_i is a best response to μ_i if u_i(s_i, μ_i) ≥ u_i(s'_i, μ_i) for all s'_i ∈ S_i. Given σ_{-i} = (σ_1, ..., σ_{i-1}, σ_{i+1}, ..., σ_I) ∈ Σ_{-i}, we say that s_i ∈ S_i is a best response to σ_{-i} if it is a best response to the product belief σ_1 × ··· × σ_{i-1} × σ_{i+1} × ··· × σ_I.

If our answer to question (a) above is that a player holds probabilistic beliefs over the opponents' choices, then the rationality requirement that every player i maximizes his expected payoff given his belief about S_{-i} can be stated as follows: every player i must choose a strategy s_i that is a best response to his belief μ_i.

Definition 4 (Never Best Response). A strategy s_i ∈ S_i is a never best response (NBR) if there exists no μ_i ∈ Δ(S_{-i}) such that s_i is a best response to μ_i.

We are still left with question (b) unanswered. There are two basic ways to approach the problem of finding reasonable beliefs. They are both based on interactive reasoning, i.e. each player's reasoning about other players' choices and about other players' beliefs, and they are indeed equivalent in some cases.

1.3. Iterated Dominance and Rationalizability

1.3.1. Iterated Dominance

The basic idea is: a rational player i cannot choose a strictly dominated strategy, nor can he believe player j ≠ i would, nor can he believe player j can believe player k ≠ j would, and so on. At every step of the reasoning, further restrictions are imposed on what i's choice can be.

How do we formalize this idea? Starting from the game Γ, define a sequence of games Γ^1_D, Γ^2_D, ... as follows: Γ^1_D = Γ, and then recursively Γ^{n+1}_D is the game obtained from Γ^n_D by deleting all strictly dominated strategies of all players. Clearly, Γ^{n+1}_D = Γ^n_D for n large enough. Let us denote by Γ^∞_D this game we have converged to. The strategies in Γ that are also strategies in Γ^∞_D are said to survive iterated elimination of strictly dominated strategies. If the strategy set of every player in game Γ^∞_D is a singleton, then the game is said to be dominance solvable.

Note that, in the process of elimination, we are not really assuming that player i has a well defined probabilistic belief μ_i over S_{-i}. All we are requiring is that he considers some of the opponents' strategies impossible. However, we do require that, when arguing that a certain strategy of j will not be chosen, this is because there is a mixed strategy of j that does better. The interpretation of the latter assumption may be, in principle, problematic.
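As a rough computational illustration of the sequence Γ^1_D, Γ^2_D, ..., the sketch below (Python, two players) repeatedly deletes strategies that are strictly dominated by a pure strategy. This is a simplification: as noted above, a strategy may be strictly dominated only by a mixed strategy, and a full implementation would test dominance with a linear program (a sketch of that test appears after the example following Proposition 1 below). The prisoners' dilemma payoffs used at the end are a standard illustration, not a game taken from these notes.

```python
def iterated_pure_dominance(u1, u2, S1, S2):
    """Iterated elimination of strategies strictly dominated by a *pure*
    strategy, in a two-player finite game.  u1, u2 map profiles (s1, s2) to
    payoffs; S1, S2 are the players' strategy lists.  Returns the survivors."""
    S1, S2 = list(S1), list(S2)
    changed = True
    while changed:
        changed = False
        dom1 = [s for s in S1 if any(all(u1[(t, s2)] > u1[(s, s2)] for s2 in S2)
                                     for t in S1 if t != s)]
        dom2 = [s for s in S2 if any(all(u2[(s1, t)] > u2[(s1, s)] for s1 in S1)
                                     for t in S2 if t != s)]
        if dom1 or dom2:
            S1 = [s for s in S1 if s not in dom1]
            S2 = [s for s in S2 if s not in dom2]
            changed = True
    return S1, S2

# Prisoners' dilemma: defecting (D) strictly dominates cooperating (C).
u1 = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 4, ('D', 'D'): 1}
u2 = {(a, b): u1[(b, a)] for a in 'CD' for b in 'CD'}
print(iterated_pure_dominance(u1, u2, 'CD', 'CD'))   # (['D'], ['D'])
```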

1.3.2. Correlated Rationalizability

The basic idea is: a rational player i is Bayesian, i.e. he must have a well defined probabilistic belief μ_i over S_{-i}, thus he cannot choose a NBR strategy, nor can he believe player j ≠ i would, nor can he believe player j can believe player k ≠ j would, and so on. At every step of the reasoning, further restrictions are imposed on what μ_i can be, and thus on what i's choice can be.

How do we formalize this idea? Starting from the game Γ, define a sequence of games Γ^1_R, Γ^2_R, ... as follows: Γ^1_R = Γ, and then recursively Γ^{n+1}_R is the game obtained from Γ^n_R by deleting all NBR strategies of all players. Clearly, Γ^{n+1}_R = Γ^n_R for n large enough. Let us denote by Γ^∞_R this game we have converged to. The strategies in the game Γ^∞_R are the correlated-rationalizable strategies of Γ.

Note that, in the process of elimination, we are not assuming players randomize. However, we are assuming that each player has a well defined probabilistic belief over the opponents' choices.

An equivalent procedure, indeed more in the spirit of the notion of rationalizability as it is intended in the literature, is to eliminate beliefs, and hence eliminate strategies, at each round. In the first round, for each player i we define

  S^1_i = { s_i ∈ S_i : s_i is a best response to some μ_i ∈ Δ(S_{-i}) }.

Having defined S^n_i for every player i and some n, we write S^n_{-i} as an abbreviation for S^n_1 × ··· × S^n_{i-1} × S^n_{i+1} × ··· × S^n_I, and define recursively

  S^{n+1}_i := { s_i ∈ S_i : s_i is a best response to some μ_i ∈ Δ(S_{-i}) satisfying μ_i(S^n_{-i}) = 1 }.

It is clear that S^{n+1}_i ⊆ S^n_i ⊆ S_i for every player i and every n. It is also rather obvious that S^n_i contains exactly those strategies of Γ that are also strategies in Γ^n_R. Those strategies of player i that belong to S^n_i for every n are i's correlated-rationalizable strategies.

1.3.3. An Equivalence Result

Iterated elimination of strictly dominated strategies and correlated rationalizability are equivalent, as the following result shows.

Proposition 1. In a finite normal form game, a strategy is NBR if and only if it is strictly dominated. In particular, Γ^∞_D = Γ^∞_R.

The proof of the proposition is a simple application of the following well known result, a version of Farkas's Lemma.

Lemma 1 (Farkas's Lemma). Let A be an m × n matrix and let b be a 1 × n vector. Either there exists a 1 × m vector x ≧ 0 such that xA ≦ b, or there exists an n × 1 vector z ≧ 0 such that Az ≧ 0 and bz < 0, but not both. [Footnote 1: When dealing with two vectors v and w, the inequality v ≧ w means that every element of v is greater than or equal to the corresponding element of w. Similarly for the inequality v ≦ w. The inequality v > w instead means that every element of v is strictly greater than the corresponding element of w, and similarly for the inequality v < w.]

Proof. See, for instance, the Notes on Optimization on my webpage.

Proof of Proposition 1. Recall our definition of Γ^n_D and Γ^n_R. The second claim in the proposition easily follows from the first by induction on n, as Γ^n_D and Γ^n_R are, like Γ, finite games in normal form. To prove the first part, fix a strategy s_i ∈ S_i, and let A be the matrix whose rows correspond to the elements of S_i, whose columns correspond to the elements of S_{-i}, and whose entry corresponding to row s'_i and column s_{-i} is given by the difference u_i(s_i, s_{-i}) − u_i(s'_i, s_{-i}). Also, let b be a 1 × n vector with every entry equal to −1. Then s_i is NBR if and only if the second alternative in the lemma is false, hence if and only if the first alternative in the lemma is true. The latter is easily verified to be equivalent to s_i being strictly dominated; indeed, s_i is strictly dominated if and only if there exists σ_i such that σ_i A < 0, which is equivalent to xA ≦ b for x = aσ_i and a > 0 large enough. [Footnote 2: Make sure you know how to give a formal proof of the latter claim.]

To see Proposition 1 in action, consider the following game:

        L      R
  U   4, 0   3, 0
  M   9, 0   0, 0
  D   0, 0   9, 0

Player 1, the row player, will play the role of player i in the proposition, and strategy U will play the role of strategy s_i. To construct the matrix A used in the proof of the proposition, we first consider only the matrix of player 1's payoffs:

  U   4   3
  M   9   0
  D   0   9

The matrix A is then obtained as follows: for every s_1 ∈ {U, M, D} and s_2 ∈ {L, R}, the entry

corresponding to s_1 and s_2 is the difference u_1(U, s_2) − u_1(s_1, s_2). The matrix A is thus

  A =  [  0    0 ]
       [ −5    3 ]
       [  4   −6 ]

Now U is strictly dominated by the mixed strategy that chooses M or D with equal probability; indeed, pre-multiplying A by the row vector σ = [0, 1/2, 1/2] we get

  σA = [−0.5, −1.5].

Thus, defining x = aσ for a large enough (any a ≥ 2 will do) we get

  xA = aσA = [−0.5a, −1.5a] ≦ [−1, −1].

This means the first alternative in Farkas's Lemma is true, hence that the second is false. To verify that there is indeed no belief μ ∈ Δ({L, R}) against which U is a best response, note that this would require

  4μ(L) + 3μ(R) ≥ 9μ(L) + 0μ(R)   and   4μ(L) + 3μ(R) ≥ 0μ(L) + 9μ(R),

which (seeing μ = [μ(L), μ(R)] as a column vector) are together equivalent to Aμ ≧ 0 and imply the contradiction that μ(R) ≥ (5/3)μ(L) and μ(R) ≤ (4/6)μ(L).
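The Farkas-type argument above is also the basis of a practical test: strategy s_i is strictly dominated (equivalently, by Proposition 1, it is NBR) exactly when a linear program that searches for a dominating mixture achieves a positive worst-case margin. The sketch below assumes scipy is available and re-checks the 3×2 game just discussed.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(payoffs, row):
    """Test whether pure strategy `row` of the row player is strictly dominated
    by some mixed strategy, given the row player's payoff matrix `payoffs`
    (rows = own strategies, columns = opponent profiles).  Returns a dominating
    mixture over rows, or None."""
    A = np.asarray(payoffs, dtype=float)
    m, n = A.shape
    # Maximize eps subject to  sum_k sigma_k A[k, c] >= A[row, c] + eps  for
    # every column c, with sigma a probability vector.  Dominated iff eps > 0.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # linprog minimizes, so -eps
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # -sigma@A[:,c] + eps <= -A[row,c]
    b_ub = -A[row, :]
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m] if res.success and -res.fun > 1e-9 else None

A = [[4, 3], [9, 0], [0, 9]]                       # rows U, M, D; columns L, R
print(strictly_dominated(A, 0))   # a mixture of M and D that dominates U
print(strictly_dominated(A, 1))   # None: M is a best response to "L for sure"
```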

1.3.4. Independent Rationalizability

Definition 5 (Independent Never Best Response). A strategy s_i ∈ S_i is an independent never best response (INBR) if there exists no σ_{-i} ∈ Σ_{-i} such that s_i is a best response to σ_{-i}.

This is closer to the original version of rationalizability introduced in Bernheim's and Pearce's papers (Econometrica 52(4), 1984). The basic idea is: a rational player i is Bayesian and believes his opponents' choices are uncorrelated; thus, he must have a well defined probabilistic belief μ_i over S_{-i} satisfying independence, and he cannot choose an INBR strategy, nor can he believe player j ≠ i would, nor can he believe player j can believe player k ≠ j would, and so on. At every step of the reasoning, further restrictions are imposed on what μ_i can be, and thus on what i's choice can be. These restrictions are stronger than in the correlated case. Thus, iterated elimination of INBR strategies will deliver, in general, smaller sets of rationalizable strategies.

1.4. Examples

1.4.1. Independent vs. Correlated Rationalizability

Independent and correlated rationalizability obviously coincide in two-player games; in these games, a strategy is NBR if and only if it is INBR. In three-player games, the latter is no longer true. Consider the following three-player game, where both player 1's and player 2's payoffs are constant (player 1 chooses the row, a or b, and player 2 chooses the column, L or R), and player 3 chooses A, B, C, or D. Strategy D is INBR. Indeed, take any σ_1 ∈ Σ_1 and σ_2 ∈ Σ_2.

  A:      L          R            B:      L          R
    a  0, 0, 0    0, 0, 0           a  0, 0, 0    0, 0, 4
    b  0, 0, 50   0, 0, 0           b  0, 0, 0    0, 0, 4

  C:      L          R            D:      L          R
    a  0, 0, 4    0, 0, 4           a  0, 0, 3    0, 0, 3
    b  0, 0, 0    0, 0, 0           b  0, 0, 0    0, 0, 3

If σ_2(R) > 3/4, then B does better than D; if σ_1(a) > 3/4, then C does better than D; finally, if σ_2(R) ≤ 3/4 and σ_1(a) ≤ 3/4, then σ_2(L) ≥ 1/4 and σ_1(b) ≥ 1/4, hence σ_1(b)σ_2(L) ≥ 1/16, hence A gives at least 50/16 > 3 and therefore does strictly better than D. However, D is not NBR, because it is a best response to the belief μ_3 such that μ_3(a, L) = μ_3(a, R) = μ_3(b, R) = 1/3.

Remark 2. Note that D is neither strictly nor weakly dominated by any mixed strategy of player 3. Let us prove this. Since D is not NBR, we know from Proposition 1 that it cannot be strictly dominated. To see that it is not weakly dominated, either, suppose by contradiction that it is weakly

dominated by some σ_3 ∈ Σ_3. Then we must have u_3(a, L, σ_3) ≥ u_3(a, L, D), i.e. 4σ_3(C) + 3σ_3(D) ≥ 3, which implies σ_3(C) ≥ (3/4)[1 − σ_3(D)]. Moreover, we must have u_3(b, R, σ_3) ≥ u_3(b, R, D), i.e. 4σ_3(B) + 3σ_3(D) ≥ 3, which implies σ_3(B) ≥ (3/4)[1 − σ_3(D)]. Thus, we must have σ_3(C) + σ_3(B) ≥ (6/4)[1 − σ_3(D)] = (6/4)[σ_3(A) + σ_3(B) + σ_3(C)], which is only possible if σ_3(A) = σ_3(B) = σ_3(C) = 0, i.e. if σ_3(D) = 1, contradicting the initial supposition that σ_3 weakly dominates D (since that supposition obviously implies σ_3(D) < 1).

1.4.2. Rationalizability in the Cournot Game

The Cournot game is defined as follows: N = {1, 2}, S_i = R_+, and

  u_1(q_1, q_2) = q_1(P − q_1 − q_2),   u_2(q_1, q_2) = q_2(P − q_1 − q_2)

for all (q_1, q_2) ∈ S := S_1 × S_2, where P > 0 is a parameter. This game is not finite, so checking whether a certain strategy of a firm is NBR requires looking at all probability distributions over the infinite set of possible quantities chosen by the other firm. This is, in principle, a hard task. However, a few simple observations will show that the problem is, in fact, rather easy. This is an example where rationalizability works wonderfully: it pins down a unique strategy for each player.

Let Γ denote the Cournot game. Let Γ^1 denote the Cournot game after deletion of all strategies that are NBR in Γ. What are such NBR strategies? Let μ be a belief of firm 1 over S_2 = R_+, i.e. a probability measure over the set of player 2's strategies in Γ. A strategy in S_1 = R_+ is a best response to μ for player 1 if and only if it solves the following: [Footnote 3: Here and later on in this example, the integral sign denotes the Lebesgue integral; see the Notes on Probability on my webpage to find out more on this (and also to find a rigorous definition of probability measure).]

  max_{q_1 ∈ R_+}  ∫_{R_+} q_1(P − q_1 − q_2) dμ(q_2).

By linearity of the integral, and using the fact that ∫_{R_+} dμ = 1, the latter problem is equivalent to

  max_{q_1 ∈ R_+}  −q_1² + q_1 ( P − ∫_{R_+} q_2 dμ(q_2) ).

In order to apply the Kuhn-Tucker theorem, we rewrite the latter as

  max_{q_1}  −q_1² + q_1 ( P − ∫_{R_+} q_2 dμ(q_2) )   subject to  q_1 ≥ 0,

and we conclude, by the Kuhn-Tucker theorem, [Footnote 4: See my Notes on Optimization if you want to check that in this problem the Kuhn-Tucker conditions are indeed necessary and sufficient.] that a strategy q_1 is not NBR in Γ if and only if there exists μ ∈ Δ(R_+) such that either ∫_{R_+} q_2 dμ(q_2) > P and q_1 = 0, or ∫_{R_+} q_2 dμ(q_2) ≤ P and

  q_1 = [ P − ∫_{R_+} q_2 dμ(q_2) ] / 2.

As μ varies in Δ(R_+), the expectation ∫_{R_+} q_2 dμ(q_2) varies between 0 and +∞, hence the set of NBR strategies of player 1 in game Γ is (P/2, +∞). By symmetry, this is also the set of NBR strategies of player 2 in this game. Thus, the strategy set of each player in game Γ^1 is [0, P/2]. A strategy q_1 is not NBR in this game if and only if it solves

  max_{q_1 ∈ [0, P/2]}  −q_1² + q_1 ( P − ∫_{[0, P/2]} q_2 dμ(q_2) )

for some μ ∈ Δ([0, P/2]). Again applying Kuhn-Tucker (this time with the two constraints q_1 ≥ 0 and q_1 ≤ P/2) we conclude that a strategy q_1 is not NBR in Γ^1 if and only if q_1 = [P − ∫_{[0, P/2]} q_2 dμ(q_2)]/2 for some such μ. As μ varies in Δ([0, P/2]), the expectation ∫_{[0, P/2]} q_2 dμ(q_2) varies between 0 and P/2, hence the set of NBR strategies of player 1 in game Γ^1 is [0, P/4). By symmetry, this is also the set of NBR strategies of player 2 in this game. Now let Γ^2 be the game resulting from Γ^1 after deletion of NBR strategies. Thus, the strategy set of each player in game Γ^2 is [P/4, P/2]. A strategy q_1 is not NBR in this game if it solves

  max_{q_1 ∈ [P/4, P/2]}  −q_1² + q_1 ( P − ∫_{[P/4, P/2]} q_2 dμ(q_2) )

12 for some.œp=4; P=/, that is, again by Kuhn-Tucker, if and only if q D P R ŒP=4;P= q d.q / for some such. As varies in.œp=4; P=/, the expectation R ŒP=4;P= q d.q / varies between P=4 and P=, hence the set of NBR strategies of player in game is.3p=8; P=. By symmetry, this is also the set of NBR strategies of player in this game. In game 3, the strategy set of each player is thus ŒP=4; 3P=8. Etc. etc. etc. Continuing in this fashion, we see that n is obtained from n by removal of the (upper or lower) half of both players strategy sets in n. This process converges to the unique pair q D q D P=3.

2. Equilibrium in Normal Form Games

2.1. Nash Equilibrium

In general, correlated (or even independent) rationalizability does not give sharp predictions on how rational players should play and how much they should expect to get. On the other hand, all one needs in order to get the rationalizable outcomes is the assumption of rationality and common knowledge of rationality. In some cases, a legitimate question is whether there is a strategy profile such that, once each player knows every other player is following the strategies specified in the profile, then he finds it optimal to do so himself. [Footnote 5: Here and in what follows, given any s ∈ S and i ∈ N, we write s_i and s_{-i} to denote the elements of S_i and S_{-i} corresponding to s. Thus, by this definition, s = (s_i, s_{-i}) for every s ∈ S and every i ∈ N.]

Definition 6. Let ⟨N, (S_i, u_i)_{i∈N}⟩ be a normal form game. A Nash equilibrium of this game is a strategy profile s ∈ S such that u_i(s) ≥ u_i(s'_i, s_{-i}) for every player i ∈ N and every s'_i ∈ S_i.

It is quite easy to see that a Nash equilibrium can only prescribe (independent) rationalizable strategies. If s is a Nash equilibrium, then s_i survives the first round of elimination of INBR strategies (being a best response to the belief that his opponents will play s_{-i} with probability one); since this is true for every player, it remains true at all subsequent rounds as well. Moreover, since a NBR strategy is also an INBR strategy, the strategies in a Nash equilibrium are also correlated-rationalizable; equivalently, they survive iterated elimination of strictly dominated strategies.

2.1.1. Existence of Nash Equilibria

A Nash equilibrium may not exist, e.g. in the game matching pennies:

        H        T
  H   1, −1   −1, 1
  T   −1, 1   1, −1

But if we allow mixed strategies, then it always exists (in finite games).

Definition 7. The mixed extension of a finite game ⟨N, (S_i, u_i)_{i∈N}⟩ is the game where the set of players is again N, the set of strategies of player i is Σ_i = Δ(S_i), and the payoff to player i from

the strategy profile σ is u_i(σ). The following is the result originally proved by Nash in 1951.

Theorem 1. The mixed extension of any finite game has a Nash equilibrium.

The proof of the theorem uses the following result.

Lemma 2 (Brouwer's Fixed Point Theorem). Let C be a compact convex subset of a Euclidean space. Let f : C → C be a continuous function. Then f has a fixed point, i.e. there is a point x ∈ C for which f(x) = x.

Proof. See, for instance, the book Fixed Point Theorems with Applications to Economics and Game Theory by Kim Border.

Proof of Theorem 1. Let ⟨N, (S_i, u_i)_{i∈N}⟩ be a finite normal form game. Suppose without loss of generality that N = {1, ..., I}, where I = |N|, and S_i = {1, ..., m_i}, where m_i = |S_i|, for every i = 1, ..., I. Then, for every player i and every σ_i ∈ Σ_i, write σ_i^n for the probability assigned by σ_i to the nth strategy of player i, where n = 1, ..., m_i. Obviously, Σ_i is compact and convex for each player i, hence so is Σ. For every player i and every n = 1, ..., m_i define a function g_i^n : Σ → R as follows:

  g_i^n(σ) = max{ 0, u_i(n, σ_{-i}) − u_i(σ_i, σ_{-i}) }   for all σ = (σ_i, σ_{-i}) ∈ Σ.

In other words, g_i^n(σ) is the gain accruing to player i as a consequence of moving from σ_i to his nth pure strategy, given that other players are choosing σ_{-i}, or it is zero if the latter is actually a loss. Define the function f : Σ → Σ as follows: for every σ = (σ_i, σ_{-i}) ∈ Σ, the profile of mixed strategies f(σ) = (f_i(σ))_{i=1}^I is such that, for every player i, the probability assigned by f_i(σ) to the nth strategy of player i is

  f_i^n(σ) = [ σ_i^n + g_i^n(σ) ] / [ 1 + Σ_{m=1}^{m_i} g_i^m(σ) ].

Note that, since f_i^n(σ) ≥ 0 for every n, and moreover f_i^1(σ) + ··· + f_i^{m_i}(σ) = 1, we indeed have f_i(σ) ∈ Σ_i and hence f(σ) ∈ Σ. Moreover, the function f is continuous; since g_i^n is the maximum of polynomial functions, it is continuous, so f_i^n is a ratio of continuous functions and

thus also continuous. By Brouwer's fixed point theorem, there exists σ = (σ_i, σ_{-i}) ∈ Σ such that f(σ) = σ. We now show that σ is a Nash equilibrium. For every player i and every n = 1, ..., m_i we have f_i^n(σ) = σ_i^n, that is,

  [ σ_i^n + g_i^n(σ) ] / [ 1 + Σ_{m=1}^{m_i} g_i^m(σ) ] = σ_i^n,

and hence

  g_i^n(σ) = σ_i^n Σ_{m=1}^{m_i} g_i^m(σ).

Moreover, for every player i there must exist n such that σ_i^n > 0 and g_i^n(σ) = 0. If this were not true, then, for all m for which σ_i^m > 0, we would have g_i^m(σ) > 0, i.e. u_i(m, σ_{-i}) > u_i(σ), hence

  Σ_{m=1}^{m_i} σ_i^m u_i(m, σ_{-i}) > u_i(σ),

which is a contradiction. Thus,

  Σ_{m=1}^{m_i} g_i^m(σ) = 0   for all i,

and therefore, since g_i^m(σ) ≥ 0 for all i and all m, also

  g_i^m(σ) = 0   for all i and all m.

This means that u_i(σ) ≥ u_i(m, σ_{-i}) for all i and all m ∈ S_i, hence u_i(σ) ≥ u_i(σ'_i, σ_{-i}) for all i and all σ'_i ∈ Σ_i.

2.2. Strictly Competitive Games

In general, the notion of Nash equilibrium does not give sharp predictions about the behavior of rational players, though certainly sharper than those implied by (either form of) rationalizability. There is a class of games, however, where the implications of the equilibrium hypothesis are indeed rather precise. This is the class of games whose analysis (pioneered by von Neumann and Morgenstern) gave birth to game theory.

Definition 8. A game ⟨N, (S_i, u_i)_{i∈N}⟩ is strictly competitive if N = {1, 2} and if, moreover, for every s, s' ∈ S one has u_1(s) > u_1(s') if and only if u_2(s) < u_2(s').
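For the zero-sum games studied next, the quantity max min (the value of the game, by Theorem 2 below) can be computed by linear programming: choose a mixed strategy x for player 1 that maximizes the worst-case payoff v. A minimal sketch, assuming scipy is available and using matching pennies from 2.1.1 as the example:

```python
import numpy as np
from scipy.optimize import linprog

def maximin(A):
    """max_x min_y x@A@y for a zero-sum game with payoff matrix A (player 1's
    payoffs; rows are player 1's pure strategies).  Returns (value, strategy)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables x_1, ..., x_m, v.  Maximize v s.t. x@A[:, j] >= v for every j.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # linprog minimizes
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - x@A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

value, x = maximin([[1, -1], [-1, 1]])             # matching pennies
print(value, x)                                    # 0.0, [0.5, 0.5]
```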

In the remainder of this section, we will just assume that a strictly competitive game is in fact a zero-sum game, that is, we will assume that u_1(s) + u_2(s) = 0 for every s ∈ S. The following is one of the most important and classical results about this class of games; it was proved by von Neumann in 1928. [Footnote 6: The statement of Theorem 2 refers to Nash equilibria, and our proof (taken from Myerson's textbook) uses Nash's theorem, that is, Theorem 1 in these notes, which only appeared in 1951. The theorem originally proved by von Neumann, of course, does not refer to Nash equilibria, nor does its proof refer to Nash's theorem; indeed, von Neumann takes the equalities in (1) to mean, by themselves, that (σ_1, σ_2) is a solution to the game; see Remark 4 below.]

Theorem 2 (The Minimax Theorem). Let ⟨{1, 2}, {Σ_1, Σ_2}, {u_1, u_2}⟩ be the mixed extension of a finite zero-sum game. A profile (σ_1, σ_2) is a Nash equilibrium if and only if

  σ_1 ∈ arg max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2)   and   σ_2 ∈ arg min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2).   (1)

Moreover, if (σ_1, σ_2) is a Nash equilibrium, then

  u_1(σ_1, σ_2) = min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2) = max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2).   (2)

Remark 3. Note that, since an equilibrium of the mixed extension of a finite game always exists, the theorem does imply that v := min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2) = max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2). This number v is called the value of the game.

Proof of Theorem 2. Pick any (σ_1, σ_2) ∈ Σ_1 × Σ_2. Obviously, we have

  u_1(σ_1, σ_2) ≥ min_{σ'_2} u_1(σ_1, σ'_2)   and   u_1(σ_1, σ_2) ≤ max_{σ'_1} u_1(σ'_1, σ_2).

Suppose (σ_1, σ_2) is an equilibrium. Then, using the inequalities above,

  min_{σ'_2} u_1(σ_1, σ'_2) = u_1(σ_1, σ_2) = max_{σ'_1} u_1(σ'_1, σ_2) ≥ min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2) ≥ max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2),

and moreover

  max_{σ'_1} u_1(σ'_1, σ_2) = u_1(σ_1, σ_2) = min_{σ'_2} u_1(σ_1, σ'_2) ≤ max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2) ≤ min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2).

The latter four inequalities clearly imply (1). Moreover, they imply

  min_{σ'_2} u_1(σ_1, σ'_2) = max_{σ'_1} min_{σ'_2} u_1(σ'_1, σ'_2)   and   max_{σ'_1} u_1(σ'_1, σ_2) = min_{σ'_2} max_{σ'_1} u_1(σ'_1, σ'_2),

which is the same as (2). Conversely, suppose (1) holds. By Theorem 1, an equilibrium must exist. Thus, by the first part of this proof, the second equality in (2) holds. Thus,

  u_1(σ_1, σ_2) ≤ max_{σ'_1} u_1(σ'_1, σ_2) = min_{σ'_2} u_1(σ_1, σ'_2) ≤ u_1(σ_1, σ_2),

where the equality holds by (1) and the second equality in (2); this shows that (σ_1, σ_2) is a Nash equilibrium.

Remark 4. The Minimax Theorem states that all Nash equilibria of a finite zero-sum game give player 1 (and hence also player 2) the same payoff, namely, the value of the game. Note that the value is the payoff that player 1 can guarantee himself (max min), but also the payoff that player 2 can force player 1 down to (min max). Thus, in a sense, a rational player in a zero-sum game does not need to guess what his opponent will do in order to play an optimal strategy. This is why, even if the notion of Nash equilibrium did not exist at the time, the conclusion of von Neumann was that the equalities in (1) by themselves constitute an acceptable characterization of a solution to a zero-sum game.

Another implication of the Minimax Theorem is that the equilibria of a finite zero-sum game are interchangeable. If (σ_1, σ_2) and (σ̃_1, σ̃_2) are both equilibria of the game, then (σ_1, σ̃_2) and (σ̃_1, σ_2) are also equilibria of the game (why?). Once again, this shows that, in a zero-sum game, a rational player 1 (resp. player 2) does not need to guess what exact strategy player 2 (resp. player 1

) will choose; all he needs to know is that player 2 (resp. player 1) will choose an optimal strategy, i.e. a strategy σ_2 (resp. σ_1) that satisfies (1).

2.3. Normal Form Refinements

Are there solution concepts that deliver more accurate predictions than Nash equilibrium? The answer is yes, and we will briefly review two of them. Both are based on the idea that, among all Nash equilibria of (the mixed extension of) a game, only those that are robust to small perturbations in the players' choices are reasonable.

2.3.1. Trembling-Hand Perfect Equilibrium

The first refinement we review is due to Selten (International Journal of Game Theory 4, 1975). Let ⟨N, (Σ_i, u_i)_{i∈N}⟩ be the mixed extension of a finite normal form game. A completely mixed strategy profile of this game is a strategy profile σ such that σ_i(s_i) > 0 for every player i ∈ N and every s_i ∈ S_i.

Definition 9. Let ⟨N, (Σ_i, u_i)_{i∈N}⟩ be the mixed extension of a finite normal form game. A mixed strategy profile σ is a trembling-hand perfect equilibrium (of the original finite normal form game) if there exists a sequence σ^1, σ^2, ... of completely mixed strategy profiles converging to σ and such that, for every n = 1, 2, ... and every player i ∈ N, the strategy σ_i is a best response to σ^n_{-i}.

It is quite easy to see that a trembling-hand perfect equilibrium is also a mixed strategy Nash equilibrium. (Would you be able to prove this formally?) Thus, trembling-hand perfection is indeed a refinement of Nash.

2.3.2. Proper Equilibrium

An even more demanding solution concept has been introduced by Myerson (International Journal of Game Theory 7, 1978).

Definition 10. Let ⟨N, (Σ_i, u_i)_{i∈N}⟩ be the mixed extension of a finite normal form game. A mixed strategy profile σ is a proper equilibrium (of the original finite normal form game) if there exist a sequence of positive numbers ε^1, ε^2, ... and a sequence of completely mixed strategy

profiles σ^1, σ^2, ... converging to σ such that lim_{n→∞} ε^n = 0 and, for all n = 1, 2, ..., all i ∈ N and all s_i, s'_i ∈ S_i,

  u_i(s'_i, σ^n_{-i}) < u_i(s_i, σ^n_{-i})   ⟹   σ^n_i(s'_i) ≤ ε^n σ^n_i(s_i).

Similarly to a trembling-hand perfect equilibrium, a proper equilibrium must be robust to some small perturbation in the players' behavior. But, in addition, this perturbation must be rational, in the sense that strategies that do not do well along the sequence must be played with much smaller probability than those which do better, i.e. players are more likely to tremble on strategies that are not too bad for them. One can show that a proper equilibrium must be trembling-hand perfect. (Giving a formal proof is a little hard, but definitely doable.) Thus, the following result establishes existence of trembling-hand perfect equilibria as well.

Proposition 2. Every finite normal form game has a proper equilibrium.

The proof uses the following result.

Lemma 3 (Kakutani's Fixed Point Theorem). Let C be a compact, convex subset of a Euclidean space. Let Φ : C ⇉ C be a correspondence having closed graph and such that Φ(x) is nonempty and convex for every x ∈ C. Then Φ has a fixed point, i.e. there exists x ∈ C for which x ∈ Φ(x).

Proof. See, for instance, Corollary 15.3 in Border's book.

Proof of Proposition 2. Choose 0 < ε < 1 and, for every i ∈ N, define

  R^ε_i = { σ_i ∈ Δ(S_i) : σ_i(s_i) ≥ ε^{|S_i|}/|S_i| for all s_i ∈ S_i }.

Clearly, R^ε_i is compact and convex for every i ∈ N, hence so is R^ε := ×_{i∈N} R^ε_i. Now define Φ^ε : R^ε ⇉ R^ε as follows: for every σ ∈ R^ε, Φ^ε(σ) = (Φ^ε_i(σ))_{i∈N}, where

  Φ^ε_i(σ) = { σ'_i ∈ R^ε_i : for all s_i, s'_i ∈ S_i, u_i(s'_i, σ_{-i}) < u_i(s_i, σ_{-i}) ⟹ σ'_i(s'_i) ≤ ε σ'_i(s_i) }

for every i ∈ N. This correspondence has a closed graph and is convex valued. (Can you show

why this is true?) Moreover, it is nonempty valued, i.e. Φ^ε(σ) ≠ ∅ for all σ ∈ R^ε. To show this, take any σ ∈ R^ε and define, for every i ∈ N and every s_i ∈ S_i,

  k(s_i, σ) = | { s'_i ∈ S_i : u_i(s_i, σ_{-i}) < u_i(s'_i, σ_{-i}) } |

and

  σ̃_i(s_i) = ε^{k(s_i, σ)} / Σ_{s'_i∈S_i} ε^{k(s'_i, σ)}.

Clearly, if s_i, s'_i ∈ S_i and u_i(s_i, σ_{-i}) < u_i(s'_i, σ_{-i}), then σ̃_i(s_i) ≤ ε σ̃_i(s'_i). Thus, σ̃ ∈ Φ^ε(σ). We conclude by Kakutani's theorem that Φ^ε has a fixed point. (Why does this imply existence of a proper equilibrium? Hint: Every bounded sequence in a Euclidean space has a convergent subsequence; moreover, ×_{i∈N} Δ(S_i) is closed...)

2.4. Examples

2.4.1. Trembling-Hand Perfection and Weakly Dominated Strategies

A trembling-hand perfect equilibrium cannot prescribe that a weakly dominated strategy be played with positive probability. Consider the following example:

        L      R
  U   2, 0   1, 1
  D   1, 1   1, 0

This game has one pure strategy Nash equilibrium, (U, R), and a continuum of mixed strategy equilibria (player 2 chooses R, player 1 chooses U with probability p, where 1/2 ≤ p < 1). However, none of these mixed equilibria is trembling-hand perfect. Since a proper equilibrium exists, we conclude that the unique proper equilibrium (and also the unique trembling-hand perfect equilibrium) is (U, R). To see why the mixed equilibria are not trembling-hand perfect, let σ^n = (σ^n_1, σ^n_2) be any sequence of completely mixed profiles, and let σ = (σ_1, σ_2) be such that 1/2 ≤ σ_1(U) < 1 and

21 .R/ D. Since.D/ > 0 and n.l/ > 0 for every n D ; ; : : :, we have u. ; n / D.U / n.l/ C n.r/ C.D/ < n.l/ C n.r/ D u.u; n /: Thus, is not a best response to n for any n. Therefore, is not trembling-hand perfect. To verify that.u; R/ is a trembling-hand perfect equilibrium, consider the sequence of completely mixed strategy profiles ; ; : : : such that, for every n D ; ; : : :, n.u / D.=/n and n.r/ D.=/n Clearly, n converges to.u; R/. Moreover, U and R are best responses to n and n, respectively, for every n. Thus,.U; R/ is trembling-hand perfect..4.. Trembling-Hand Perfection and Properness The following example shows that a trembling-hand perfect equilibrium need not be proper. L M R U ; 0; 0 9; 9 M 0; 0 0; 0 7; 7 D 9; 9 7; 7 7; 7 The strategy profile.m; M/ is trembling-hand perfect. (You should have no problem in proving the latter.) However, it is not proper. Indeed, take any sequence of positive numbers " ; " ; : : : converging to 0 and any sequence of strategy profiles ; ; : : : converging to.m; M/ such that FF u i.s 0 i ; n i / < u i.s i ; n i / ) n i.s0 i / " n n i.s i/ for all n D ; ; : : :, all i D ;, and all s i ; s 0 i S i. Since n converges to.m; M/, we have u.d; n/ < u.u; n/ and u.r; n/ < u.l; n / for all n large enough. (Why? ) Thus, F n.d/ " n n.u / and n.r/ " n n.l/ (3)

for all n large enough. Now, against σ^n_2, the strategies U and M give respectively u_1(U, σ^n_2) = σ^n_2(L) − 9σ^n_2(R) and u_1(M, σ^n_2) = −7σ^n_2(R). By (3), then,

  u_1(U, σ^n_2) − u_1(M, σ^n_2) = σ^n_2(L) − 2σ^n_2(R) ≥ σ^n_2(L) − 2ε^n σ^n_2(L) = (1 − 2ε^n) σ^n_2(L).

Since σ^n_2 is completely mixed, σ^n_2(L) > 0 for every n. Moreover, since ε^n converges to zero, 1 − 2ε^n > 0 for all n large enough. We conclude that u_1(U, σ^n_2) − u_1(M, σ^n_2) > 0 for all n large enough, hence that σ^n_1(M) ≤ ε^n σ^n_1(U) for all n large enough, hence (why?) that σ^n_1(M) converges to zero. This contradicts the initial supposition that σ^n converges to (M, M).
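The contrast between the two notions in this example can also be seen numerically. The sketch below evaluates player 1's payoffs in the 2.4.2 game against two kinds of trembles by player 2: a symmetric tremble, under which M is a best response (the kind of sequence that supports (M, M) as trembling-hand perfect), and an ε-proper tremble in which R is an order of magnitude less likely than L, under which U strictly beats M (which is exactly what breaks properness).

```python
import numpy as np

# Player 1's payoffs in the 2.4.2 game (rows U, M, D; columns L, M, R).
A = np.array([[ 1,  0, -9],
              [ 0,  0, -7],
              [-9, -7, -7]], dtype=float)

def payoffs_vs(sigma2):
    """Player 1's expected payoff from U, M, D against the mixed strategy sigma2."""
    return A @ np.asarray(sigma2, dtype=float)

eps = 0.01
# Symmetric tremble around M: L and R equally likely.  M does best.
print(payoffs_vs([eps, 1 - 2 * eps, eps]))
# "Rational" tremble as properness requires: sigma2(R) <= eps * sigma2(L).
# Now U does strictly better than M.
print(payoffs_vs([eps, 1 - eps - eps**2, eps**2]))
```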

3. Extensive Form Games: Basics

A game in normal form specifies what the players' strategies are, and what each player's payoff is, as a function of the possible profiles of strategies chosen by all players. A game in extensive form is a more detailed description of a strategic situation, and allows us to model dynamic choice.

3.1. Formal Definition

An extensive form specifies the physical order of play (who moves when, and what choices are available), the information available to a player when making a choice, and the payoff to each player as a function of the moves selected by all players.

Definition 11. A finite extensive form game is a list ⟨N, X, p, A, α, H, ι, π_0, (u_i)_{i∈N}⟩ comprising the following objects:

- a finite set of players N = {0, 1, ..., I}; player 0 is called Nature;

- a finite set of nodes X and a function p : X → X ∪ {∅}, indicating immediate precedence; we define p^1 = p and recursively p^{n+1} = p ∘ p^n for all n; if x, x' ∈ X and x = p^n(x') for some n, then we say that x is a predecessor of x' and that x' is a successor of x; we require p to satisfy the following properties: there exists a unique initial node, that is, a unique x_0 ∈ X such that p(x_0) = ∅; for every x ∈ X, the set of predecessors of x and the set of successors of x are disjoint, in other words, there exist no n and x' ∈ X for which x' = p^n(x'); the nodes in the set Z := {x ∈ X : p^{-1}(x) = ∅} are called terminal nodes;

- a finite set of possible actions A and a function α : X \ {x_0} → A specifying, for each non-initial node, the action that leads to that node; it is assumed that α(x') ≠ α(x'') for every x ∈ X and every two distinct x', x'' ∈ p^{-1}(x); the set of actions available at any x ∈ X is defined as A(x) := {a ∈ A : there exists x' ∈ p^{-1}(x) such that α(x') = a};

- a partition H of X \ Z into information sets, and a function ι : H → N specifying who moves at each information set, such that A(x) = A(x') for every h ∈ H and every x, x' ∈ h; this means that the actions available at node x in information set h are the same as those

available at any other node in h, hence we can write A(h) to denote the set of such actions; we define H_i := ι^{-1}(i) for each player i = 0, 1, ..., I; for every x ∈ X \ Z we write H(x) to denote the member of H containing x; we assume that, if Nature is an active player, then it moves only at the beginning of the game (i.e. either H_0 = ∅ or H_0 = {H(x_0)}) and, in the latter case, it does so by choosing randomly an element of A(x_0) according to the probability distribution π_0 ∈ Δ(A(x_0));

- for each player i = 1, ..., I, a payoff function u_i : Z → R.

Here is an example of an extensive form game with I = 2 and no moves by Nature. Player 1 chooses L or R and then player 2, without knowing what the choice of player 1 has been, chooses A or B. The fact that 2 does not know whether 1 chose L or R is reflected by the fact that the two successors of the initial node are in the same information set, denoted by a dotted line. Note that at the two nodes player 2 has the same actions available.

  Figure 1: An extensive form game.

Here is another example: Player 1 chooses L or R and then player 2, having observed the choice of player 1, chooses A or B; if he chooses B, then it is 1's turn again, and he has to choose between l and r.

  Figure 2: An extensive form game where player 1 moves twice.

A common property that we will assume throughout is perfect recall. Roughly, this means that no player ever forgets what he once knew. Formally, the property can be stated as follows: for every

player i and every three nonterminal nodes x, y, w satisfying H(x) ∈ H_i, H(y) = H(w) ∈ H_i, and x = p^n(y) for some n, there must exist a nonterminal node x' such that H(x') = H(x), x' = p^m(w) for some m, and α(p^{n−1}(y)) = α(p^{m−1}(w)).

3.2. Normal Form Representation of an Extensive Form Game

A strategy for player i = 1, ..., I is a function s_i : H_i → A such that s_i(h) ∈ A(h) for all h ∈ H_i. Thus, the set of all strategies of player i is the cartesian product

  S_i = ×_{h∈H_i} A(h).

If Nature is not active, a pure strategy profile induces a unique terminal node. To see this, suppose H_0 = ∅ and take any s ∈ S. Since actions leading to distinct immediate successors of a nonterminal node are labeled differently by α, there exists a unique x^1 ∈ p^{-1}(x_0) such that α(x^1) = s_{ι(H(x_0))}(H(x_0)), and then a unique x^2 ∈ p^{-1}(x^1) such that α(x^2) = s_{ι(H(x^1))}(H(x^1)), and so on. Thus we have a sequence x_0, x^1, ..., x^n such that x^0 = x_0, x^k = p(x^{k+1}) for every k = 0, ..., n − 1, and x^n ∈ Z. Each of the nodes x_0, ..., x^n is said to be reached under s. The terminal node x^n is the outcome induced by s and is denoted by ζ(s). The function ζ : S → Z thus constructed is the outcome function of the game, and the associated game in normal form is then ⟨N, (S_i, U_i)⟩, where U_i : S → R is defined as U_i(s) := u_i(ζ(s)) for all s ∈ S.

Remark 5. If Nature is active, a profile of pure strategies s induces a probability distribution over the terminal nodes. Every move a ∈ A(x_0) by Nature results in a node x^a_1 ∈ p^{-1}(x_0), and given this x^a_1 there exists a unique x^a_2 ∈ p^{-1}(x^a_1) such that α(x^a_2) = s_{ι(H(x^a_1))}(H(x^a_1)), then a unique x^a_3 ∈ p^{-1}(x^a_2) such that α(x^a_3) = s_{ι(H(x^a_2))}(H(x^a_2)), and so on. Thus each a ∈ A(x_0), together with s, uniquely identifies a sequence x_0, x^a_1, ..., x^a_{n(a)} such that x_0 = p(x^a_1) and x^a_k = p(x^a_{k+1}) for every k = 1, ..., n(a) − 1, and x^a_{n(a)} ∈ Z. Each of the nodes in the set

  {x_0} ∪ { x ∈ X : x = x^a_k for some a ∈ A(x_0) such that π_0(a) > 0 and some k ≤ n(a) }

is said to be reached under s. The random outcome induced by s is then the probability distribution ζ(s) ∈ Δ(Z) such that ζ(s)(x^a_{n(a)}) = π_0(a) for every a ∈ A(x_0). The function ζ : S → Δ(Z) thus constructed is the outcome function of the game, and the associated game in normal form is

⟨N, (S_i, U_i)⟩, where U_i : S → R is defined as U_i(s) := Σ_{z∈Z} ζ(s)(z) u_i(z) for all s ∈ S.

3.2.1. Mixed and Behavior Strategies: Kuhn's Theorem

A mixed strategy for player i ∈ {1, ..., I} is a probability distribution σ_i ∈ Δ(S_i). Note that, if Nature is active, then the elements of A(x_0) can be seen as the pure strategies of Nature, and π_0 can be seen as an exogenously given mixed strategy of Nature. If Nature is not active, the random outcome induced by a mixed strategy profile σ is simply the probability distribution y(σ) ∈ Δ(Z) which assigns to terminal node z the probability

  y(σ)(z) = Σ_{s∈ζ^{-1}(z)} σ(s),   (4)

where ζ is the function constructed in 3.2 above, and σ(s) is the probability that s is chosen when the players randomize according to σ. Given any p ∈ [0, 1], we say a node x ∈ X is reached with probability p under σ if, denoting by S(x) the set of strategy profiles under which x is reached, we have Σ_{s∈S(x)} σ(s) = p. (If p = 0, we may just say that x is not reached under σ.) The expected payoff of player i under σ, which we denote by U_i(σ) with slight abuse of notation, is

  U_i(σ) := Σ_{z∈Z} u_i(z) y(σ)(z).   (5)

Two mixed strategies σ_i and σ'_i of player i ∈ {1, ..., I} are said to be realization equivalent if y(σ_i, σ_{-i}) = y(σ'_i, σ_{-i}) for all σ_{-i} ∈ Σ_{-i}.

Remark 6. If Nature is active, the random outcome induced by a mixed strategy profile σ is the probability distribution y(σ) ∈ Δ(Z) which assigns to terminal node z the probability

  y(σ)(z) = Σ_{s∈S} σ(s) ζ(s)(z),

where ζ : S → Δ(Z) is the function constructed in Remark 5. As before, x_0 is reached with probability one, and each x^1 ∈ p^{-1}(x_0) is reached with probability π_0(α(x^1)), under σ. For any other node x ∈ X that is neither the initial node nor an immediate successor of it, given any p ∈ [0, 1] we say x is reached with probability p under σ if, letting x' be the unique predecessor of x in p^{-1}(x_0), and denoting by S(x) the set of strategy profiles under which x is reached in the

sense of Remark 5, we have π_0(α(x')) Σ_{s∈S(x)} σ(s) = p. (If p = 0, we may just say that x is not reached under σ.) Player i's expected payoff under σ is again defined as in (5), and realization equivalence is defined exactly as before.

Mixed strategies are often complicated to visualize, and it is more convenient to work with a simpler kind of randomization by player i, as in the following definition.

Definition 12. A behavior strategy for player i ∈ {1, ..., I} specifies a probability distribution b_i(h) ∈ Δ(A(h)) for each h ∈ H_i. The mixed representation of a behavior strategy b_i is the mixed strategy of i that assigns to each s_i ∈ S_i the probability

  Π_{h∈H_i} b_i(h)(s_i(h)).

The probabilities of reaching the various nodes in the game, and the induced distributions on terminal nodes and payoffs, corresponding to a behavior strategy profile b = (b_1, ..., b_I) are simply the ones given by the profile of mixed representations of b_1, ..., b_I. Again with slight abuse of notation, we denote the induced distribution on outcomes and player i's expected payoff by y(b) and U_i(b), respectively. Analogously, we say a behavior strategy b_i is realization equivalent to a mixed strategy σ_i ∈ Σ_i if the mixed representation of b_i is realization equivalent to σ_i.

As an example, consider the game in Figure 2 and the behavior strategy of player 1 that chooses L with probability 1/4 and l with probability 1/3. The mixed representation of this behavior strategy is the mixed strategy that puts probabilities 1/12, 1/6, 1/4, and 1/2 on Ll, Lr, Rl, and Rr, respectively.

Behavior strategies are much easier to visualize than mixed strategies. One must simply specify, for each information set h of player i, the probabilities with which i will choose the various actions available at h. Thus, the set of all behavior strategies of player i is the cartesian product

  B_i := ×_{h∈H_i} Δ(A(h)).

While behavior strategies are simpler objects than mixed strategies, they are also less general. Indeed, mixed strategies allow for correlation among different information sets of the same player,

whereas mixed representations of behavior strategies do not. For instance, in Figure 2 there is no behavior strategy for player 1 whose mixed representation is the mixed strategy that puts equal probability on Ll, Lr, and Rr. (Make sure you are able to prove this claim.) However, for the purposes of computing distributions on outcomes, and hence expected payoffs, this difference has no consequences; Kuhn's theorem is the formal statement of the latter claim.

Theorem 3 (Kuhn, 1953). In a finite extensive form game with perfect recall, every mixed strategy has a realization equivalent behavior strategy.

Proof. A formal proof can be found in various textbooks (see, for instance, Theorem 4.1 in Myerson's book).

For any σ_i ∈ Σ_i, the realization equivalent behavior strategy b_i whose existence is established in the theorem can be described as follows. Say an information set h ∈ H_i is reachable under a mixed strategy σ'_i if there exists σ_{-i} such that some node in h (and hence all nodes in h, by perfect recall) is reached with positive probability under (σ'_i, σ_{-i}). For every information set h ∈ H_i and every a ∈ A(h), let S_i(h) be the set of all s_i ∈ S_i such that h is reachable under s_i, let S_i(a) be the set of all s_i ∈ S_i such that s_i(h) = a, and let S_i(h, a) be the set of all s_i ∈ S_i(h) such that s_i(h) = a. Then the probability assigned by b_i to action a is

  b_i(h)(a) = Σ_{s_i∈S_i(h,a)} σ_i(s_i) / Σ_{s_i∈S_i(h)} σ_i(s_i)    if h is reachable under σ_i,
  b_i(h)(a) = Σ_{s_i∈S_i(a)} σ_i(s_i)                                otherwise.

In other words, the probability with which b_i chooses a at h is either the conditional probability of choosing a under σ_i, given that h is reachable, or, if h is not reachable under σ_i, simply the unconditional probability of choosing a; in the latter case the definition is in fact arbitrary. As an illustration, you should verify that, in the game of Figure 2, the mixed strategy that puts equal probability on Ll, Lr, and Rr is realization equivalent to the behavior strategy that chooses L with probability 2/3 and l with probability zero.
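The construction in the formula above is easy to carry out for player 1 in Figure 2. Since the exact tree of Figure 2 is only described informally here, the sketch below assumes, consistently with the verification just asked for, that player 1's second decision node comes after his own choice of R followed by player 2's choice of B, so that player 1's pure strategies are Ll, Lr, Rl, Rr.

```python
def behavior_from_mixed(sigma):
    """Kuhn construction for player 1: sigma maps pure strategies, written as
    pairs like ('L', 'l') for Ll, to probabilities."""
    # Root information set: always reachable, so simply marginalize.
    p_L = sum(p for (a, _), p in sigma.items() if a == "L")
    # Second information set: reachable under player 1's own strategy only if
    # it chooses R at the root, so condition on that set of strategies.
    reach = {s: p for s, p in sigma.items() if s[0] == "R"}
    total = sum(reach.values())
    if total > 0:
        p_l = sum(p for (_, b), p in reach.items() if b == "l") / total
    else:
        # Unreachable under sigma: the definition is arbitrary; here we use
        # the unconditional probability of choosing l, as in the formula above.
        p_l = sum(p for (_, b), p in sigma.items() if b == "l")
    return {"b1(root)(L)": p_L, "b1(second)(l)": p_l}

sigma = {("L", "l"): 1/3, ("L", "r"): 1/3, ("R", "l"): 0.0, ("R", "r"): 1/3}
print(behavior_from_mixed(sigma))   # L with probability 2/3, l with probability 0
```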

3.2.2. Continuation Strategies, Continuation Outcomes, and Continuation Payoffs

Take any nonterminal node x ∈ X \ (Z ∪ {x_0}) and write H^x to denote the set of information sets containing either x or some successor of x, that is,

  H^x = {H(x)} ∪ { h ∈ H : h = H(x') for some n and some x' ∈ X such that x = p^n(x') }.

The continuation strategy induced by a strategy profile s = (s_1, ..., s_I) from node x is the profile of functions s^x_1, ..., s^x_I such that s^x_i is the restriction of s_i to H_i ∩ H^x for every player i. (If x is the initial node of a subgame, see Definition 13 below, then these functions will be well defined strategies for the subgame.) Since actions leading to distinct immediate successors of a nonterminal node are labeled differently by α, there exists a unique x^1 ∈ p^{-1}(x) such that α(x^1) = s_{ι(H(x))}(H(x)), a unique x^2 ∈ p^{-1}(x^1) such that α(x^2) = s_{ι(H(x^1))}(H(x^1)), and so on. Thus we have a sequence x, x^1, ..., x^n such that x = p(x^1), x^k = p(x^{k+1}) for every k = 1, ..., n − 1, and x^n ∈ Z. The terminal node x^n is the continuation outcome associated to x and s and is denoted by ζ(s|x). The continuation payoff to any player i associated to x and s is the payoff to player i from the continuation outcome, that is, U_i(s|x) := u_i(ζ(s|x)).

It is not obvious how to define a continuation strategy associated to x and a mixed strategy profile σ, especially if x is not reached under σ. However, an extension of our definitions of continuation strategy, outcomes, and payoffs to behavior strategies does make sense. Indeed, contrary to a mixed strategy profile, a behavior strategy profile (even one under which x is not reached) tells us explicitly what player ι(H(x)) would do at node x. Just like we did for pure strategies, we define the continuation behavior strategy induced by a behavior strategy profile b = (b_1, ..., b_I) from a nonterminal node x as the profile of functions b^x_1, ..., b^x_I such that b^x_i is the restriction of b_i to H_i ∩ H^x for every player i. (The above comment about subgames also applies here.) The random continuation outcome, i.e. the element of Δ(Z), associated to x and b will be denoted by y(b|x), and the associated expected continuation payoff to any player i will be U_i(b|x) := Σ_{z∈Z} y(b|x)(z) u_i(z). The probability distribution y(b|x) is determined in the obvious way, as follows. Let Z^x denote the set of terminal nodes that are successors of x. For every z ∈ Z^x there exists a unique sequence x^z_1, ..., x^z_{n(z)} such that x = p(x^z_1), x^z_k = p(x^z_{k+1}) for every k = 1, ..., n(z) − 1, and x^z_{n(z)} = z. Then, for every terminal node z ∈ Z, we set

y(b|x)(z) = 0 if z ∉ Z^x and, if z ∈ Z^x,

  y(b|x)(z) = b_{ι(H(x))}(H(x))(α(x^z_1)) · Π_{k=1}^{n(z)−1} b_{ι(H(x^z_k))}(H(x^z_k))(α(x^z_{k+1})).

3.3. Subgame Perfect Equilibrium

The notion of Nash equilibrium very often leads to unreasonable predictions in extensive form games. Consider the following extensive form game and its associated game in normal form: player 1 chooses L or R; player 2 observes this choice and then chooses A or B following L, and C or D following R, so that in the normal form player 2's strategies are AC, AD, BC, and BD.

  [Figure: the extensive form game just described and its normal form representation, with rows L, R and columns AC, AD, BC, BD.]

The profile (L, AC) is a Nash equilibrium, yet it is unreasonable. Indeed, player 2 is supposed to choose C if player 1 chooses R (which he does not), whereas D would give more. Similarly, (R, BD) is a Nash equilibrium, but it is unreasonable because it requires player 2 to play B after L, whereas A would give more.

The idea of subgame perfect equilibrium (Selten, 1965) is that behavior in parts of the game that can be regarded as games in themselves should agree with Nash equilibria of them.

Definition 13. Let ⟨N, X, p, A, α, H, ι, π_0, (u_i)_{i∈N}⟩ be a game in extensive form. A subgame of this game is any extensive form game ⟨N, X̂, p̂, A, α̂, Ĥ, ι̂, π_0, (û_i)_{i∈N}⟩ such that:

- X̂ ⊆ X and p̂ : X̂ → X̂ ∪ {∅} is the restriction of p to X̂; the set of terminal nodes of the subgame is Ẑ := {x̂ ∈ X̂ : p̂^{-1}(x̂) = ∅}; the initial node is denoted x̂_0; it is assumed that X̂ contains all successors of x̂_0, i.e. for every node x ∈ X satisfying x ∈ X̂ we must have p^{-1}(x) ⊆ X̂; note that this implies Ẑ ⊆ Z;

- α̂ : X̂ \ {x̂_0} → A is the restriction of α to X̂ \ {x̂_0};

- Ĥ ⊆ H, and ι̂ : Ĥ → N is the restriction of ι to Ĥ; note that Ĥ ⊆ H implies that if a node of the game is also a node of the subgame, then all nodes in the same information set also belong to the subgame.


More information

Fixed Point Theorems

Fixed Point Theorems Fixed Point Theorems Definition: Let X be a set and let f : X X be a function that maps X into itself. (Such a function is often called an operator, a transformation, or a transform on X, and the notation

More information

Extensive Form Games I

Extensive Form Games I Extensive Form Games I Definition of Extensive Form Game a finite game tree X with nodes x X nodes are partially ordered and have a single root (minimal element) terminal nodes are z Z (maximal elements)

More information

Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes

Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes These notes form a brief summary of what has been covered during the lectures. All the definitions must be memorized and understood. Statements

More information

Weak Dominance and Never Best Responses

Weak Dominance and Never Best Responses Chapter 4 Weak Dominance and Never Best Responses Let us return now to our analysis of an arbitrary strategic game G := (S 1,...,S n, p 1,...,p n ). Let s i, s i be strategies of player i. We say that

More information

Lecture December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about

Lecture December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about 0368.4170: Cryptography and Game Theory Ran Canetti and Alon Rosen Lecture 7 02 December 2009 Fall 2009 Scribe: R. Ring In this lecture we will talk about Two-Player zero-sum games (min-max theorem) Mixed

More information

Industrial Organization Lecture 3: Game Theory

Industrial Organization Lecture 3: Game Theory Industrial Organization Lecture 3: Game Theory Nicolas Schutz Nicolas Schutz Game Theory 1 / 43 Introduction Why game theory? In the introductory lecture, we defined Industrial Organization as the economics

More information

Higher Order Beliefs in Dynamic Environments

Higher Order Beliefs in Dynamic Environments University of Pennsylvania Department of Economics June 22, 2008 Introduction: Higher Order Beliefs Global Games (Carlsson and Van Damme, 1993): A B A 0, 0 0, θ 2 B θ 2, 0 θ, θ Dominance Regions: A if

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Computing Minmax; Dominance

Computing Minmax; Dominance Computing Minmax; Dominance CPSC 532A Lecture 5 Computing Minmax; Dominance CPSC 532A Lecture 5, Slide 1 Lecture Overview 1 Recap 2 Linear Programming 3 Computational Problems Involving Maxmin 4 Domination

More information

Lecture Notes on Bargaining

Lecture Notes on Bargaining Lecture Notes on Bargaining Levent Koçkesen 1 Axiomatic Bargaining and Nash Solution 1.1 Preliminaries The axiomatic theory of bargaining originated in a fundamental paper by Nash (1950, Econometrica).

More information

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication)

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Nikolaus Robalino and Arthur Robson Appendix B: Proof of Theorem 2 This appendix contains the proof of Theorem

More information

Conservative Belief and Rationality

Conservative Belief and Rationality Conservative Belief and Rationality Joseph Y. Halpern and Rafael Pass Department of Computer Science Cornell University Ithaca, NY, 14853, U.S.A. e-mail: halpern@cs.cornell.edu, rafael@cs.cornell.edu January

More information

Preference, Choice and Utility

Preference, Choice and Utility Preference, Choice and Utility Eric Pacuit January 2, 205 Relations Suppose that X is a non-empty set. The set X X is the cross-product of X with itself. That is, it is the set of all pairs of elements

More information

REPEATED GAMES. Jörgen Weibull. April 13, 2010

REPEATED GAMES. Jörgen Weibull. April 13, 2010 REPEATED GAMES Jörgen Weibull April 13, 2010 Q1: Can repetition induce cooperation? Peace and war Oligopolistic collusion Cooperation in the tragedy of the commons Q2: Can a game be repeated? Game protocols

More information

0 Sets and Induction. Sets

0 Sets and Induction. Sets 0 Sets and Induction Sets A set is an unordered collection of objects, called elements or members of the set. A set is said to contain its elements. We write a A to denote that a is an element of the set

More information

Conjectural Variations in Aggregative Games: An Evolutionary Perspective

Conjectural Variations in Aggregative Games: An Evolutionary Perspective Conjectural Variations in Aggregative Games: An Evolutionary Perspective Alex Possajennikov University of Nottingham January 2012 Abstract Suppose that in aggregative games, in which a player s payoff

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

MATH FINAL EXAM REVIEW HINTS

MATH FINAL EXAM REVIEW HINTS MATH 109 - FINAL EXAM REVIEW HINTS Answer: Answer: 1. Cardinality (1) Let a < b be two real numbers and define f : (0, 1) (a, b) by f(t) = (1 t)a + tb. (a) Prove that f is a bijection. (b) Prove that any

More information

arxiv: v4 [cs.gt] 24 Jan 2018

arxiv: v4 [cs.gt] 24 Jan 2018 Periodic Strategies. A New Solution Concept and an Algorithm for Non-Trivial Strategic Form Games arxiv:1307.2035v4 [cs.gt] 24 Jan 2018 V.K. Oikonomou 1, J. Jost 1,2 1 Max Planck Institute for Mathematics

More information

Extensive Form Games with Perfect Information

Extensive Form Games with Perfect Information Extensive Form Games with Perfect Information Levent Koçkesen 1 Extensive Form Games The strategies in strategic form games are speci ed so that each player chooses an action (or a mixture of actions)

More information

An axiomatization of minimal curb sets. 1. Introduction. Mark Voorneveld,,1, Willemien Kets, and Henk Norde

An axiomatization of minimal curb sets. 1. Introduction. Mark Voorneveld,,1, Willemien Kets, and Henk Norde An axiomatization of minimal curb sets Mark Voorneveld,,1, Willemien Kets, and Henk Norde Department of Econometrics and Operations Research, Tilburg University, The Netherlands Department of Economics,

More information

Mathematical Preliminaries for Microeconomics: Exercises

Mathematical Preliminaries for Microeconomics: Exercises Mathematical Preliminaries for Microeconomics: Exercises Igor Letina 1 Universität Zürich Fall 2013 1 Based on exercises by Dennis Gärtner, Andreas Hefti and Nick Netzer. How to prove A B Direct proof

More information

Interval values for strategic games in which players cooperate

Interval values for strategic games in which players cooperate Interval values for strategic games in which players cooperate Luisa Carpente 1 Balbina Casas-Méndez 2 Ignacio García-Jurado 2 Anne van den Nouweland 3 September 22, 2005 Abstract In this paper we propose

More information

WHEN ORDER MATTERS FOR ITERATED STRICT DOMINANCE *

WHEN ORDER MATTERS FOR ITERATED STRICT DOMINANCE * WHEN ORDER MATTERS FOR ITERATED STRICT DOMINANCE * Martin Dufwenberg ** & Mark Stegeman *** February 1999 Abstract: We demonstrate that iterated elimination of strictly dominated strategies is an order

More information

1 Games in Normal Form (Strategic Form)

1 Games in Normal Form (Strategic Form) Games in Normal Form (Strategic Form) A Game in Normal (strategic) Form consists of three components. A set of players. For each player, a set of strategies (called actions in textbook). The interpretation

More information

NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA

NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA NASH IMPLEMENTATION USING SIMPLE MECHANISMS WITHOUT UNDESIRABLE MIXED-STRATEGY EQUILIBRIA MARIA GOLTSMAN Abstract. This note shows that, in separable environments, any monotonic social choice function

More information

University of Warwick, Department of Economics Spring Final Exam. Answer TWO questions. All questions carry equal weight. Time allowed 2 hours.

University of Warwick, Department of Economics Spring Final Exam. Answer TWO questions. All questions carry equal weight. Time allowed 2 hours. University of Warwick, Department of Economics Spring 2012 EC941: Game Theory Prof. Francesco Squintani Final Exam Answer TWO questions. All questions carry equal weight. Time allowed 2 hours. 1. Consider

More information

NTU IO (I) : Classnote 03 Meng-Yu Liang March, 2009

NTU IO (I) : Classnote 03 Meng-Yu Liang March, 2009 NTU IO (I) : Classnote 03 Meng-Yu Liang March, 2009 Kohlberg and Mertens (Econometrica 1986) We will use the term (game) tree for the extensive form of a game with perfect recall (i.e., where every player

More information

Selfishness vs Altruism vs Balance

Selfishness vs Altruism vs Balance Selfishness vs Altruism vs Balance Pradeep Dubey and Yair Tauman 18 April 2017 Abstract We give examples of strategic interaction which are beneficial for players who follow a "middle path" of balance

More information

Refined best-response correspondence and dynamics

Refined best-response correspondence and dynamics Refined best-response correspondence and dynamics Dieter Balkenborg, Josef Hofbauer, and Christoph Kuzmics February 18, 2007 Abstract We study a natural (and, in a well-defined sense, minimal) refinement

More information

1 Extensive Form Games

1 Extensive Form Games 1 Extensive Form Games De nition 1 A nite extensive form game is am object K = fn; (T ) ; P; A; H; u; g where: N = f0; 1; :::; ng is the set of agents (player 0 is nature ) (T ) is the game tree P is the

More information

Non-reactive strategies in decision-form games

Non-reactive strategies in decision-form games MPRA Munich Personal RePEc Archive Non-reactive strategies in decision-form games David Carfì and Angela Ricciardello University of Messina, University of California at Riverside 2009 Online at http://mpra.ub.uni-muenchen.de/29262/

More information

Economics 201B Economic Theory (Spring 2017) Bargaining. Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7).

Economics 201B Economic Theory (Spring 2017) Bargaining. Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7). Economics 201B Economic Theory (Spring 2017) Bargaining Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7). The axiomatic approach (OR 15) Nash s (1950) work is the starting point

More information

1 Lattices and Tarski s Theorem

1 Lattices and Tarski s Theorem MS&E 336 Lecture 8: Supermodular games Ramesh Johari April 30, 2007 In this lecture, we develop the theory of supermodular games; key references are the papers of Topkis [7], Vives [8], and Milgrom and

More information

Algorithmic Game Theory and Applications. Lecture 4: 2-player zero-sum games, and the Minimax Theorem

Algorithmic Game Theory and Applications. Lecture 4: 2-player zero-sum games, and the Minimax Theorem Algorithmic Game Theory and Applications Lecture 4: 2-player zero-sum games, and the Minimax Theorem Kousha Etessami 2-person zero-sum games A finite 2-person zero-sum (2p-zs) strategic game Γ, is a strategic

More information

Common Knowledge of Rationality is Self-Contradictory. Herbert Gintis

Common Knowledge of Rationality is Self-Contradictory. Herbert Gintis Common Knowledge of Rationality is Self-Contradictory Herbert Gintis February 25, 2012 Abstract The conditions under which rational agents play a Nash equilibrium are extremely demanding and often implausible.

More information

The Folk Theorem for Finitely Repeated Games with Mixed Strategies

The Folk Theorem for Finitely Repeated Games with Mixed Strategies The Folk Theorem for Finitely Repeated Games with Mixed Strategies Olivier Gossner February 1994 Revised Version Abstract This paper proves a Folk Theorem for finitely repeated games with mixed strategies.

More information

On Acyclicity of Games with Cycles

On Acyclicity of Games with Cycles On Acyclicity of Games with Cycles Daniel Andersson, Vladimir Gurvich, and Thomas Dueholm Hansen Dept. of Computer Science, Aarhus University, {koda,tdh}@cs.au.dk RUTCOR, Rutgers University, gurvich@rutcor.rutgers.edu

More information

Computational Game Theory Spring Semester, 2005/6. Lecturer: Yishay Mansour Scribe: Ilan Cohen, Natan Rubin, Ophir Bleiberg*

Computational Game Theory Spring Semester, 2005/6. Lecturer: Yishay Mansour Scribe: Ilan Cohen, Natan Rubin, Ophir Bleiberg* Computational Game Theory Spring Semester, 2005/6 Lecture 5: 2-Player Zero Sum Games Lecturer: Yishay Mansour Scribe: Ilan Cohen, Natan Rubin, Ophir Bleiberg* 1 5.1 2-Player Zero Sum Games In this lecture

More information

Positive Political Theory II David Austen-Smith & Je rey S. Banks

Positive Political Theory II David Austen-Smith & Je rey S. Banks Positive Political Theory II David Austen-Smith & Je rey S. Banks Egregious Errata Positive Political Theory II (University of Michigan Press, 2005) regrettably contains a variety of obscurities and errors,

More information

Repeated Downsian Electoral Competition

Repeated Downsian Electoral Competition Repeated Downsian Electoral Competition John Duggan Department of Political Science and Department of Economics University of Rochester Mark Fey Department of Political Science University of Rochester

More information

Axiomatic set theory. Chapter Why axiomatic set theory?

Axiomatic set theory. Chapter Why axiomatic set theory? Chapter 1 Axiomatic set theory 1.1 Why axiomatic set theory? Essentially all mathematical theories deal with sets in one way or another. In most cases, however, the use of set theory is limited to its

More information

Game Theory and Algorithms Lecture 7: PPAD and Fixed-Point Theorems

Game Theory and Algorithms Lecture 7: PPAD and Fixed-Point Theorems Game Theory and Algorithms Lecture 7: PPAD and Fixed-Point Theorems March 17, 2011 Summary: The ultimate goal of this lecture is to finally prove Nash s theorem. First, we introduce and prove Sperner s

More information

Economics 201A Economic Theory (Fall 2009) Extensive Games with Perfect and Imperfect Information

Economics 201A Economic Theory (Fall 2009) Extensive Games with Perfect and Imperfect Information Economics 201A Economic Theory (Fall 2009) Extensive Games with Perfect and Imperfect Information Topics: perfect information (OR 6.1), subgame perfection (OR 6.2), forward induction (OR 6.6), imperfect

More information

Ex Post Cheap Talk : Value of Information and Value of Signals

Ex Post Cheap Talk : Value of Information and Value of Signals Ex Post Cheap Talk : Value of Information and Value of Signals Liping Tang Carnegie Mellon University, Pittsburgh PA 15213, USA Abstract. Crawford and Sobel s Cheap Talk model [1] describes an information

More information

Coalitional Strategic Games

Coalitional Strategic Games Coalitional Strategic Games Kazuhiro Hara New York University November 28, 2016 Job Market Paper The latest version is available here. Abstract In pursuit of games played by groups of individuals (each

More information

Algorithms for cautious reasoning in games

Algorithms for cautious reasoning in games Algorithms for cautious reasoning in games Geir B. Asheim a Andrés Perea b October 16, 2017 Abstract We provide comparable algorithms for the Dekel-Fudenberg procedure, iterated admissibility, proper rationalizability

More information

Computing Minmax; Dominance

Computing Minmax; Dominance Computing Minmax; Dominance CPSC 532A Lecture 5 Computing Minmax; Dominance CPSC 532A Lecture 5, Slide 1 Lecture Overview 1 Recap 2 Linear Programming 3 Computational Problems Involving Maxmin 4 Domination

More information

Supermodular Games. Ichiro Obara. February 6, 2012 UCLA. Obara (UCLA) Supermodular Games February 6, / 21

Supermodular Games. Ichiro Obara. February 6, 2012 UCLA. Obara (UCLA) Supermodular Games February 6, / 21 Supermodular Games Ichiro Obara UCLA February 6, 2012 Obara (UCLA) Supermodular Games February 6, 2012 1 / 21 We study a class of strategic games called supermodular game, which is useful in many applications

More information

MS&E 246: Lecture 12 Static games of incomplete information. Ramesh Johari

MS&E 246: Lecture 12 Static games of incomplete information. Ramesh Johari MS&E 246: Lecture 12 Static games of incomplete information Ramesh Johari Incomplete information Complete information means the entire structure of the game is common knowledge Incomplete information means

More information

ENDOGENOUS REPUTATION IN REPEATED GAMES

ENDOGENOUS REPUTATION IN REPEATED GAMES ENDOGENOUS REPUTATION IN REPEATED GAMES PRISCILLA T. Y. MAN Abstract. Reputation is often modelled by a small but positive prior probability that a player is a behavioral type in repeated games. This paper

More information

EconS Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE)

EconS Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE) EconS 3 - Advanced Microeconomics II Handout on Subgame Perfect Equilibrium (SPNE). Based on MWG 9.B.3 Consider the three-player nite game of perfect information depicted in gure. L R Player 3 l r a b

More information

CRITICAL TYPES. 1. Introduction

CRITICAL TYPES. 1. Introduction CRITICAL TYPES JEFFREY C. ELY AND MARCIN PESKI Abstract. Economic models employ assumptions about agents infinite hierarchies of belief. We might hope to achieve reasonable approximations by specifying

More information

Microeconomics for Business Practice Session 3 - Solutions

Microeconomics for Business Practice Session 3 - Solutions Microeconomics for Business Practice Session - Solutions Instructor: Eloisa Campioni TA: Ugo Zannini University of Rome Tor Vergata April 8, 016 Exercise 1 Show that there are no mixed-strategy Nash equilibria

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

Title: The Castle on the Hill. Author: David K. Levine. Department of Economics UCLA. Los Angeles, CA phone/fax

Title: The Castle on the Hill. Author: David K. Levine. Department of Economics UCLA. Los Angeles, CA phone/fax Title: The Castle on the Hill Author: David K. Levine Department of Economics UCLA Los Angeles, CA 90095 phone/fax 310-825-3810 email dlevine@ucla.edu Proposed Running Head: Castle on the Hill Forthcoming:

More information

Question 1. (p p) (x(p, w ) x(p, w)) 0. with strict inequality if x(p, w) x(p, w ).

Question 1. (p p) (x(p, w ) x(p, w)) 0. with strict inequality if x(p, w) x(p, w ). University of California, Davis Date: August 24, 2017 Department of Economics Time: 5 hours Microeconomics Reading Time: 20 minutes PRELIMINARY EXAMINATION FOR THE Ph.D. DEGREE Please answer any three

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Game Theory Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Bimatrix Games We are given two real m n matrices A = (a ij ), B = (b ij

More information

Correlated Equilibria of Classical Strategic Games with Quantum Signals

Correlated Equilibria of Classical Strategic Games with Quantum Signals Correlated Equilibria of Classical Strategic Games with Quantum Signals Pierfrancesco La Mura Leipzig Graduate School of Management plamura@hhl.de comments welcome September 4, 2003 Abstract Correlated

More information

MS&E 246: Lecture 4 Mixed strategies. Ramesh Johari January 18, 2007

MS&E 246: Lecture 4 Mixed strategies. Ramesh Johari January 18, 2007 MS&E 246: Lecture 4 Mixed strategies Ramesh Johari January 18, 2007 Outline Mixed strategies Mixed strategy Nash equilibrium Existence of Nash equilibrium Examples Discussion of Nash equilibrium Mixed

More information

Contracts under Asymmetric Information

Contracts under Asymmetric Information Contracts under Asymmetric Information 1 I Aristotle, economy (oiko and nemo) and the idea of exchange values, subsequently adapted by Ricardo and Marx. Classical economists. An economy consists of a set

More information

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations?

Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Are Obstinacy and Threat of Leaving the Bargaining Table Wise Tactics in Negotiations? Selçuk Özyurt Sabancı University Very early draft. Please do not circulate or cite. Abstract Tactics that bargainers

More information

Chapter 9. Mixed Extensions. 9.1 Mixed strategies

Chapter 9. Mixed Extensions. 9.1 Mixed strategies Chapter 9 Mixed Extensions We now study a special case of infinite strategic games that are obtained in a canonic way from the finite games, by allowing mixed strategies. Below [0, 1] stands for the real

More information

Basics of Game Theory

Basics of Game Theory Basics of Game Theory Giacomo Bacci and Luca Sanguinetti Department of Information Engineering University of Pisa, Pisa, Italy {giacomo.bacci,luca.sanguinetti}@iet.unipi.it April - May, 2010 G. Bacci and

More information

Seminaar Abstrakte Wiskunde Seminar in Abstract Mathematics Lecture notes in progress (27 March 2010)

Seminaar Abstrakte Wiskunde Seminar in Abstract Mathematics Lecture notes in progress (27 March 2010) http://math.sun.ac.za/amsc/sam Seminaar Abstrakte Wiskunde Seminar in Abstract Mathematics 2009-2010 Lecture notes in progress (27 March 2010) Contents 2009 Semester I: Elements 5 1. Cartesian product

More information

Equivalences of Extensive Forms with Perfect Recall

Equivalences of Extensive Forms with Perfect Recall Equivalences of Extensive Forms with Perfect Recall Carlos Alós-Ferrer and Klaus Ritzberger University of Cologne and Royal Holloway, University of London, 1 and VGSF 1 as of Aug. 1, 2016 1 Introduction

More information