Lecture 6: April 25, 2006


Computational Game Theory                                    Spring Semester, 2005/06

Lecture 6: April 25, 2006
Lecturer: Yishay Mansour
Scribe: Lior Gavish, Andrey Stolyarenko, Asaph Arnon
Partially based on a scribe by Nataly Sharkov and Itamar Nabriski from 2003/4

6.1 Outline

This lecture deals with the existence of Nash Equilibria in general (i.e. non-zero-sum) games. We start with a proof of the existence theorem. This proof uses a fixed-point theorem known as Brouwer's Lemma, which we shall prove in the following section. We then move to an algorithm for finding Nash Equilibria in two-player general-sum games.

6.2 Existence Theorem

Theorem 6.1 Every finite game has a (mixed-strategy) Nash Equilibrium.

This section outlines a proof of this theorem. We begin with a definition of the model, proceed with a statement of Brouwer's Lemma and conclude with the proof.

6.2.1 Model and Notations

Recall that a finite strategic game consists of the following:

- A finite set of players, namely $N = \{1, \dots, n\}$.
- For every player $i$, a set of actions $A_i = \{a_{i1}, \dots, a_{im}\}$.
- The set $A = \prod_{i=1}^{n} A_i$ of joint actions.
- For every player $i$, a utility function $u_i : A \to \mathbb{R}$.

A mixed strategy for player $i$ is a random variable over his actions. The set of such strategies is denoted $\Delta(A_i)$. Letting every player have his own mixed strategy (independent of the others) leads to the set of joint mixed strategies, denoted $\Delta(A) = \prod_{i=1}^{n} \Delta(A_i)$.
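To make the model concrete, here is a minimal sketch (not part of the original scribe notes) of a finite strategic game and of a joint mixed strategy as a product of independent per-player distributions; the matching-pennies payoffs and all names in it are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a finite strategic game and a joint mixed strategy.
import itertools
import random

actions = [["H", "T"], ["H", "T"]]           # A_i for each of the two players

# u[i][a] is player i's utility for the joint action a (a tuple with one action per player).
u = [dict(), dict()]
for a in itertools.product(*actions):
    win = 1 if a[0] == a[1] else -1          # matching-pennies payoffs, purely illustrative
    u[0][a] = win
    u[1][a] = -win

# A mixed strategy for player i is a distribution over A_i; a joint mixed strategy is
# one such distribution per player, and the players randomize independently.
p = [{"H": 0.5, "T": 0.5}, {"H": 0.3, "T": 0.7}]

def sample_joint_action(p, actions):
    return tuple(random.choices(actions[i], weights=[p[i][a] for a in actions[i]])[0]
                 for i in range(len(actions)))

a = sample_joint_action(p, actions)
print(a, u[0][a], u[1][a])
```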

Every joint mixed strategy $p \in \Delta(A)$ consists of $n$ vectors $p_1, \dots, p_n$, where $p_i$ defines the distribution played by player $i$. Taking the expectation over the given distribution, we define the utility for player $i$ by
$$u_i(p) = E_{a \sim p}[u_i(a)] = \sum_{a \in A} p(a)\, u_i(a) = \sum_{a \in A} \left( \prod_{j=1}^{n} p_j(a_j) \right) u_i(a).$$

We can now define a Nash Equilibrium (NE) as a joint strategy where no player profits from unilaterally changing his strategy:

Definition A joint mixed strategy $p^* \in \Delta(A)$ is a NE if for every player $1 \le i \le n$ it holds that
$$\forall q_i \in \Delta(A_i) \quad u_i(p^*) \ge u_i(p^*_{-i}, q_i),$$
or equivalently
$$\forall a_i \in A_i \quad u_i(p^*) \ge u_i(p^*_{-i}, a_i).$$

6.2.2 Brouwer's Lemma

The following lemma shall aid us in proving the existence theorem:

Lemma 6.2 Let $B$ be a compact (i.e. closed and bounded) set. Further assume that $B$ is convex. If $f : B \to B$ is a continuous map, then there must exist $x \in B$ such that $f(x) = x$.

We will sketch the proof later. First, let us explore some examples:

    B                              f(x)              Fixed points
    [0, 1]                         x^2               0, 1
    [0, 1]                         1 - x             1/2
    [0, 1]^2                       (x^2, y^2)        {0, 1} x {0, 1}
    unit ball (in polar coord.)    (r/2, 2θ)         (0, θ) for all θ

6.2.3 Proof of Existence of Nash Equilibrium

We now turn to the proof of the existence theorem. For $1 \le i \le n$, $j \in A_i$, $p \in \Delta(A)$ we define
$$g_{ij}(p) = \max\{u_i(p_{-i}, a_{ij}) - u_i(p),\, 0\}$$
to be the gain for player $i$ from switching to the deterministic action $a_{ij}$, when $p$ is the joint strategy (if this switch is indeed profitable). We can now define a continuous map between mixed strategies $y : \Delta(A) \to \Delta(A)$ by
$$y_{ij}(p) = \frac{p_{ij} + g_{ij}(p)}{1 + \sum_{j=1}^{m} g_{ij}(p)}.$$
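The gain functions $g_{ij}$ and the map $y$ are straightforward to evaluate for a small game. The sketch below is an illustration only (the notes contain no code): it assumes the game is given as NumPy payoff tensors and computes $u_i(p)$, $g_{ij}(p)$ and $y_{ij}(p)$ by enumerating joint actions.

```python
# Illustrative sketch: the expected utility u_i(p), the gains g_ij(p) and the map y_ij(p)
# used in the proof, for a small 2-player game given as payoff tensors (assumed data).
import itertools
import numpy as np

u = [np.array([[1.0, 0.0], [0.0, 1.0]]),      # player 1's payoff tensor (illustrative)
     np.array([[0.0, 1.0], [1.0, 0.0]])]      # player 2's payoff tensor (illustrative)

def expected_utility(i, p):
    """u_i(p) = sum over joint actions a of (prod_j p_j(a_j)) * u_i(a)."""
    total = 0.0
    for a in itertools.product(*[range(len(pi)) for pi in p]):
        prob = np.prod([p[j][a[j]] for j in range(len(p))])
        total += prob * u[i][a]
    return total

def deviation_utility(i, j, p):
    """u_i(p_{-i}, a_ij): player i deviates to the pure action j."""
    q = [np.array(pk, dtype=float) for pk in p]
    q[i] = np.zeros(len(p[i])); q[i][j] = 1.0
    return expected_utility(i, q)

def gain(i, j, p):
    return max(deviation_utility(i, j, p) - expected_utility(i, p), 0.0)

def nash_map(p):
    """One application of y : Delta(A) -> Delta(A)."""
    out = []
    for i, pi in enumerate(p):
        g = np.array([gain(i, j, p) for j in range(len(pi))])
        out.append((np.array(pi) + g) / (1.0 + g.sum()))
    return out

p = [np.array([0.7, 0.3]), np.array([0.5, 0.5])]
print(nash_map(p))
```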

Observe that:

- For every player $i$ and action $a_{ij}$, the mapping $y_{ij}(p)$ is continuous (w.r.t. $p$). This is due to the fact that $u_i(p)$ is obviously continuous, making $g_{ij}(p)$ and consequently $y_{ij}(p)$ continuous.
- For every player $i$, the vector $(y_{ij}(p))_{j=1}^{m}$ is a distribution, i.e. it is in $\Delta(A_i)$. This is due to the fact that the denominator of $y_{ij}(p)$ is a normalizing constant for any given $i$.

Therefore $y$ fulfills the conditions of Brouwer's Lemma. Using the lemma, we conclude that there is a fixed point $p$ of $y$. This point satisfies
$$p_{ij} = \frac{p_{ij} + g_{ij}(p)}{1 + \sum_{j=1}^{m} g_{ij}(p)}.$$
This is possible only in one of the following cases. Either $g_{ij}(p) = 0$ for every $i$ and $j$, in which case we have an equilibrium (since no one can profit from changing his strategy). Otherwise, assume there is a player $i$ s.t. $\sum_{j=1}^{m} g_{ij}(p) > 0$. Then,
$$p_{ij}\left(1 + \sum_{j=1}^{m} g_{ij}(p)\right) = p_{ij} + g_{ij}(p),$$
or
$$p_{ij} \sum_{j=1}^{m} g_{ij}(p) = g_{ij}(p).$$
This means that $g_{ij}(p) = 0$ iff $p_{ij} = 0$, and therefore $p_{ij} > 0 \Rightarrow g_{ij}(p) > 0$. However, this is impossible by the definition of $g_{ij}(p)$: recall that $u_i(p)$ is a mean with weights $p_{ij}$, so it cannot be that player $i$ profits from switching to every pure action in $p_i$'s support (with respect to that mean). We are therefore left with the former possibility, i.e. $g_{ij}(p) = 0$ for all $i$ and $j$, implying a NE. Q.E.D.

6.3 Brouwer's Lemma

Let us restate the lemma:

Lemma 6.3 (Brouwer) Let $f : B \to B$ be a continuous function from a non-empty, compact, convex set $B \subseteq \mathbb{R}^n$ to itself. Then there is $x^* \in B$ such that $x^* = f(x^*)$ (i.e. $x^*$ is a fixed point of $f$).

We shall first show that the conditions are necessary, and then outline a proof in 1D and in 2D. The proof of the general $n$-dimensional case is similar to the 2D case (and is omitted).

6.3.1 Necessity of Conditions

To demonstrate that the conditions are necessary, we show a few examples:

- When $B$ is not bounded: Consider $f(x) = x + 1$ for $x \in \mathbb{R}$. Then, there is obviously no fixed point.
- When $B$ is not closed: Consider $f(x) = x/2$ for $x \in (0, 1]$. Then, although $x = 0$ is a fixed point of the map, it is not in the domain.
- When $B$ is not convex: Consider a circle in 2D with a hole in its center (i.e. a ring). Let $f$ rotate the ring by some angle. Then, there is obviously no fixed point.

6.3.2 Proof of Brouwer's Lemma for 1D

Let $B = [0, 1]$ and $f : B \to B$ be a continuous function. We shall show that there exists a fixed point, i.e. there is an $x_0$ in $[0, 1]$ such that $f(x_0) = x_0$. There are two possibilities:

1. If $f(0) = 0$ or $f(1) = 1$ then we are done.
2. If $f(0) \ne 0$ and $f(1) \ne 1$, define $F(x) = f(x) - x$. In this case
$$F(0) = f(0) - 0 = f(0) > 0, \qquad F(1) = f(1) - 1 < 0.$$
Thus, we have $F : [0, 1] \to \mathbb{R}$, where $F(0) \cdot F(1) < 0$. Since $f(\cdot)$ is continuous, $F(\cdot)$ is continuous as well. By the Intermediate Value Theorem, there exists $x_0 \in [0, 1]$ such that $F(x_0) = 0$. By the definition of $F(\cdot)$, $0 = F(x_0) = f(x_0) - x_0$, and thus $f(x_0) = x_0$.

Figure 6.1: A one dimensional fixed point (left) and the function $F(\cdot)$ (right).
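The intermediate-value argument suggests a simple numerical procedure: bisect on $F(x) = f(x) - x$. The following sketch is an assumed illustration of that idea, not part of the notes; the example map $\cos(x)$ is chosen arbitrarily.

```python
# Illustrative sketch: approximating a 1D fixed point by bisection on F(x) = f(x) - x.
import math

def fixed_point_1d(f, tol=1e-10):
    if f(0.0) == 0.0:
        return 0.0
    if f(1.0) == 1.0:
        return 1.0
    lo, hi = 0.0, 1.0                  # F(lo) > 0 and F(hi) < 0 since f maps [0,1] into itself
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: f(x) = cos(x) maps [0,1] into itself; the fixed point is about 0.739.
print(fixed_point_1d(math.cos))
```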

6.3.3 Proof of Brouwer's Lemma for 2D

Let $B = [0, 1]^2$ and $f : B \to B$ be a continuous function $f(x, y) = (x', y')$. The function can be split into two components:
$$x' = f_1(x, y), \qquad y' = f_2(x, y).$$
From the one-dimensional case we know that for any given value of $y$ there is at least one value of $x$ such that $x_0 = f_1(x_0, y)$.

Let us (falsely) assume that there is always exactly one such value $x_0$ for any given $y$. The existence of an overall fixed point will then follow immediately. Let $F(y)$ denote the fixed value of $x$ for any given $y$. $F$ is a continuous function of $y$ (this can be easily proven from $f$'s continuity), so the composite function $f_2(F(y), y)$ is a continuous function of $y$, and hence we can invoke the one-dimensional case again to assert the existence of a value $y_0$ such that
$$y_0 = f_2(F(y_0), y_0).$$
It is now clear that $(F(y_0), y_0)$ is a fixed point. Analogously, we could let $G(x)$ denote the fixed value of $y$ for any given $x$. On a two-dimensional plot, the function $y = G(x)$ must pass continuously from the left edge to the right edge of the cell, and the function $x = F(y)$ must pass continuously from the top edge to the bottom edge. Obviously, they must intersect at some point, which is the fixed point, as shown in Figure 6.2.

However, the assumption that there is always exactly one value $x_0$ such that $x_0 = f_1(x_0, y)$ for any given $y$ is not true. The number of such fixed values may change as we vary $y$. Thus, it is not straightforward to define continuous functions $F$ and $G$ passing between opposite edge pairs. In general, for any given value of $y$, the map $x \mapsto f_1(x, y)$ could have multiple fixed points, as shown in Figure 6.3. Furthermore, as $y$ is varied, the curve of $f_1(x, y)$ can shift and change shape in such a way that some of the fixed points disappear and others appear elsewhere.

Notice that the single-valued functions $f_1$ and $f_2$ (when restricted to the interior of the interval) cross the diagonal (i.e. $y = x$) an odd number of times. If the curve is tangent to the diagonal, but does not cross it, we will count the fixed point as two fixed points with the same value. The edges of the domain can be treated separately (and we shall not discuss them here).

Figure 6.2: The fixed point functions $G(x)$ and $F(y)$ must intersect at some point.

Figure 6.3: The function $f_1(x, y)$ can have multiple fixed points.

A plot of the multi-valued function $F(y)$, mapping $y$ to the fixed points of $f_1(x, y)$, is shown for an illustrative case in Figure 6.4. We see that for each value of $y$ there is an odd number of fixed points of $f_1(x, y)$ (when double counting tangent fixed points). This implies that there is a continuous path extending all the way from the left edge to the right edge (the path AB in Figure 6.4): a closed loop contributes an even number of crossings at each $y$, and only paths that start at one edge and end at the other have odd parity. It follows that there must be curves that extend continuously from one edge to the other.¹ Similarly, the corresponding function $G(x)$ has a contiguous path from the top edge to the bottom edge. As a result, the intersection between the multi-valued functions $F$ and $G$ is guaranteed. This intersection is $f$'s fixed point.

¹ Notice that there might be a continuous segment of fixed points for a given $y$. We shall not discuss this case here.
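Under the (generally false) simplifying assumption that $f_1(\cdot, y)$ has a unique fixed point for every $y$, the two one-dimensional steps above can be carried out numerically. The sketch below is not from the notes; it illustrates only that special case, with an arbitrarily chosen contraction as the example map.

```python
# Illustrative sketch of the simplified 2D argument: assume f1(., y) has a unique fixed
# point F(y) for every y; find it by the 1D bisection argument, then apply the same 1D
# argument to y -> f2(F(y), y).
def fp1(g, tol=1e-9):
    """Fixed point of a continuous g : [0,1] -> [0,1], by bisection on g(x) - x."""
    lo, hi = 0.0, 1.0
    if g(lo) == lo:
        return lo
    if g(hi) == hi:
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) - mid > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def fixed_point_2d(f1, f2):
    F = lambda y: fp1(lambda x: f1(x, y))      # the (assumed unique) fixed x for each y
    y0 = fp1(lambda y: f2(F(y), y))            # the 1D argument applied a second time
    return F(y0), y0

# Example map on [0,1]^2 where the uniqueness assumption does hold (a contraction).
f1 = lambda x, y: 0.5 * x + 0.25 * y
f2 = lambda x, y: 0.25 * x + 0.5 * y + 0.1
print(fixed_point_2d(f1, f2))                  # approximately (0.133, 0.267)
```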

Figure 6.4: The fixed-point function $F$: the fixed points of $f_1(\cdot, y)$ as a function of $y$. AB is the continuous path from the left side to the right.

6.4 Computing Nash Equilibria

We shall first examine an example of a general game, and then describe how to compute a NE for a general 2-player game.

6.4.1 Simple Example

Consider the following general-sum game. Let $U$ and $V$ be the payoff matrices of Players 1 and 2, respectively. Player 1 has two actions while Player 2 has four actions. Hence, the payoff matrices are $2 \times 4$ (note that $V$ is slightly different from the one shown in class):
$$U = \begin{bmatrix} \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \end{bmatrix}, \qquad V = \begin{bmatrix} \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \end{bmatrix}.$$

We assume the strategy of Player 1 is $(1-p, p)$, i.e., the bottom action has probability $p$ while the upper has probability $1-p$. Player 2's strategy is a vector of length 4. The utility for Player 2 from playing action $j \in \{1, \dots, 4\}$ is $V_{1j}(1-p) + V_{2j}\, p$, as depicted in Figure 6.5. Player 2's best response (BR) as a function of $p$ is given by the upper envelope of the graph.

Figure 6.5: Player 2's utility as a function of $p$. The lines are derived from the column vectors of $V$.

We examine the following points as candidates for a Nash Equilibrium (a numerical sketch of the best-response envelope appears after this list):

- The extreme point $p = 0$ (Player 1 chooses the upper strategy). Player 2's BR is to play $j = 4$. Therefore, the payoff vector for Player 1 is $(3, 3)$, and thus he is indifferent, so this is an equilibrium.
- The extreme point $p = 1$ (Player 1 chooses the bottom strategy). Player 2's BR is to play $j = 3$. Therefore, the payoff vector for Player 1 is $(7, 4)$, and thus Player 1 prefers to play $p = 0$, so this is not an equilibrium.
- The left segment (red, $p \in (0, a)$). Player 2 plays $j = 4$, and so Player 1 is indifferent, because his payoff vector is $(3, 3)$. Therefore, we have an equilibrium.
- The middle segment (green, $p \in (a, b)$). Player 2 plays $j = 1$, and so Player 1's payoff vector is $(1, 2)$. His BR is $p = 1$, but $p \in (a, b)$, so we have no equilibrium.
- The right segment (blue, $p \in (b, 1)$). Player 2 plays $j = 3$, and so Player 1's payoff vector is $(7, 4)$. His BR is $p = 0$, but $p \in (b, 1)$, so this is not an equilibrium.
- The intersection points. A NE occurs at an intersection point whenever Player 1's BR flips there. For Player 1, the first point ($p = a$) is a transition between the payoffs $(3, 3)$ and $(1, 2)$. So he should play $p = 1$ (since $1 \le 2$ and $3 \le 3$). However, recall $p = a$, so there is no equilibrium. The second point ($p = b$) is a transition between $(1, 2)$ and $(7, 4)$. There is a flip in Player 1's BR, and consequently a NE.
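The following sketch evaluates Player 2's utility lines and the resulting best response as a function of $p$. Since the entries of $U$ and $V$ did not survive in this copy of the notes, the matrices used here are a hypothetical reconstruction that is merely consistent with the payoff vectors and intersection equations quoted in the text; the columns never referenced in the text are pure placeholders.

```python
# Illustrative sketch: Player 2's utility lines V_{1j}(1-p) + V_{2j} p and the
# best-response upper envelope as a function of p.  The matrices below are a hypothetical
# reconstruction (the original entries were lost); placeholder columns are invented.
import numpy as np

U = np.array([[1, 0, 7, 3],      # Player 1's payoffs; column 2 is a placeholder
              [2, 0, 4, 3]])
V = np.array([[6, 1, 4, 7],      # Player 2's payoffs; columns 2 and 4 are placeholders
              [1, 1, 2, 0]])

def player2_lines(p):
    """Utilities of Player 2's four actions when Player 1 plays (1-p, p)."""
    return V[0] * (1 - p) + V[1] * p

def best_response(p):
    return int(np.argmax(player2_lines(p))) + 1   # 1-based action index

for p in np.linspace(0, 1, 11):
    print(f"p = {p:.1f}  lines = {player2_lines(p)}  BR: j = {best_response(p)}")
```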

We shall now derive the NE at $p = b$. We start by calculating $p$ as the intersection point of the two lines:
$$2p + 4(1-p) = p + 6(1-p) \;\Rightarrow\; p = 2(1-p) \;\Rightarrow\; p = \tfrac{2}{3}.$$
We now calculate $q$, the probability of Player 2 choosing the green (i.e. first) strategy. Since this is a NE, Player 1 should be indifferent, so:
$$1 \cdot q + 7(1-q) = 2q + 4(1-q) \;\Rightarrow\; 3(1-q) = q \;\Rightarrow\; q = \tfrac{3}{4}.$$
To summarize, a NE occurs when Player 1 plays the second action with probability $p = \frac{2}{3}$, and Player 2 plays strategy 1 with probability $q = \frac{3}{4}$ and strategy 3 with probability $1 - q = \frac{1}{4}$.

6.4.2 Algorithm for Finding Nash Equilibria

Consider a general two-player game, where $U$ is the payoff matrix for Player 1 and $V$ is the payoff matrix for Player 2. According to the existence theorem, a Nash Equilibrium $(p, q)$ does exist. Suppose we know the supports of $p$ and $q$: let $p$ have support $S_1 = \{i : p_i \ne 0\}$ and $q$ have support $S_2 = \{j : q_j \ne 0\}$. How can the Nash Equilibrium be computed? Requiring that Player 1 plays his best response, we have
$$\forall i \in S_1 \;\; \forall l \in A_1 \quad e_i U q^T \ge e_l U q^T,$$
where $e_i$ is the $i$-th row of the identity matrix. It is clear that if $i, l \in S_1$, then $e_i U q^T = e_l U q^T$. Similarly, for Player 2:
$$\forall j \in S_2 \;\; \forall l \in A_2 \quad p V e_j^T \ge p V e_l^T.$$
Of course, since $p$ and $q$ are distributions, and since $S_1$ and $S_2$ are their supports, we must also require
$$\sum_i p_i = 1, \qquad p_i > 0 \;\; \forall i \in S_1, \qquad p_i = 0 \;\; \forall i \notin S_1,$$
and an analogous set of constraints on $q$. The inequalities with these constraints can be solved by Linear Programming algorithms. However, introducing variables $u$ and $v$ for the payoffs of the players in the equilibrium, we obtain the following equations:
$$\forall i \in S_1 \quad e_i U q^T = u, \qquad \forall j \in S_2 \quad p V e_j^T = v,$$
$$\forall i \notin S_1 \quad p_i = 0, \qquad \forall j \notin S_2 \quad q_j = 0,$$
$$\sum_i p_i = 1, \qquad \sum_j q_j = 1.$$
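For a fixed pair of supports, the equalities above form a square linear system. The sketch below is an illustration, not the notes' own algorithm turned into code: it enumerates support pairs of equal size, solves the corresponding systems with NumPy, and keeps solutions that are valid distributions with no profitable deviation. It reuses the hypothetical example matrices from the previous sketch.

```python
# Illustrative sketch of support enumeration: solve the indifference/normalization
# equations for each support pair and keep the valid solutions.
import itertools
import numpy as np

def support_enumeration(U, V):
    m, n = U.shape
    equilibria = []
    for S1 in (s for r in range(1, m + 1) for s in itertools.combinations(range(m), r)):
        for S2 in (s for r in range(1, n + 1) for s in itertools.combinations(range(n), r)):
            if len(S1) != len(S2):          # a non-degenerate square system needs equal sizes
                continue
            k = len(S1)
            # Unknowns (q on S2, payoff u): Player 1 is indifferent over S1, q sums to 1.
            A = np.zeros((k + 1, k + 1)); b = np.zeros(k + 1)
            for r, i in enumerate(S1):
                A[r, :k] = U[i, list(S2)]; A[r, k] = -1.0
            A[k, :k] = 1.0; b[k] = 1.0
            # Unknowns (p on S1, payoff v): Player 2 is indifferent over S2, p sums to 1.
            B = np.zeros((k + 1, k + 1)); c = np.zeros(k + 1)
            for r, j in enumerate(S2):
                B[r, :k] = V[list(S1), j]; B[r, k] = -1.0
            B[k, :k] = 1.0; c[k] = 1.0
            try:
                q = np.linalg.solve(A, b); p = np.linalg.solve(B, c)
            except np.linalg.LinAlgError:
                continue
            if (q[:k] < -1e-9).any() or (p[:k] < -1e-9).any():
                continue
            pf = np.zeros(m); pf[list(S1)] = p[:k]
            qf = np.zeros(n); qf[list(S2)] = q[:k]
            # Validity: no profitable deviation outside the supports.
            if ((U @ qf <= U[list(S1)[0]] @ qf + 1e-9).all()
                    and (pf @ V <= pf @ V[:, list(S2)[0]] + 1e-9).all()):
                equilibria.append((pf, qf))
    return equilibria

U = np.array([[1, 0, 7, 3], [2, 0, 4, 3]], dtype=float)   # hypothetical example matrices
V = np.array([[6, 1, 4, 7], [1, 1, 2, 0]], dtype=float)
for p, q in support_enumeration(U, V):
    print("p =", p, " q =", q)
```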

These equations can be solved more efficiently than the original inequalities. Notice there are $2(n + 1)$ equations in $2(n + 1)$ variables. When the equations are non-degenerate, there is a unique solution, which must be tested for validity (i.e. $p_i \ge 0$). Otherwise, there might be an infinite number of Nash Equilibria.

The algorithm for computing a NE is now straightforward. Since we do not know the supports in advance, we solve the equations for all possible supports $S_1, S_2$. Whenever we find a valid solution, we can output it, as it is a NE. Unfortunately, this algorithm has worst-case exponential running time, since there are $2^n \cdot 2^n = 4^n$ possible support pairs (the support can be any subset of the actions of each player).

We cannot, however, expect to do much better at finding all Nash Equilibria. Consider, for instance, a game where the payoff for both players is the identity matrix, i.e. $U_{ij} = V_{ij} = \delta_{ij}$. Then, whenever both players play a uniform distribution on the same support (for any support), we have a NE. Therefore, there are $2^n$ Nash Equilibria in this game.

6.5 Approximate Nash Equilibrium

Definition A joint mixed strategy $p^*$ is $\epsilon$-Nash if for every player $i$ and every mixed strategy $a \in \Delta(A_i)$ it holds that
$$u_i(p^*_{-i}, a) \le u_i(p^*) + \epsilon.$$
Note that if $\epsilon = 0$ then an $\epsilon$-Nash Equilibrium is simply a Nash Equilibrium. This definition is useful for cases where changing strategy has a cost of $\epsilon$: a player will not bother changing his strategy for a gain of less than $\epsilon$.
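Since a mixed deviation is a convex combination of pure deviations, checking the $\epsilon$-Nash condition reduces to computing each player's best pure-deviation gain. A minimal sketch for the 2-player case, again with the hypothetical example matrices, follows.

```python
# Illustrative sketch: the smallest epsilon for which a 2-player profile (p, q) is
# epsilon-Nash.  Only pure deviations need to be checked, since a mixed deviation is a
# convex combination of pure ones.
import numpy as np

def epsilon_of(U, V, p, q):
    u1 = p @ U @ q                      # Player 1's expected payoff
    u2 = p @ V @ q                      # Player 2's expected payoff
    regret1 = (U @ q).max() - u1        # best pure-deviation gain for Player 1
    regret2 = (p @ V).max() - u2        # best pure-deviation gain for Player 2
    return max(regret1, regret2, 0.0)

U = np.array([[1, 0, 7, 3], [2, 0, 4, 3]], dtype=float)   # hypothetical example matrices
V = np.array([[6, 1, 4, 7], [1, 1, 2, 0]], dtype=float)
p = np.array([1/3, 2/3])                                   # Player 1: (1-p, p) with p = 2/3
q = np.array([3/4, 0, 1/4, 0])                             # Player 2: 3/4 on action 1, 1/4 on action 3
print(epsilon_of(U, V, p, q))                              # approximately 0 for this profile
```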
