Sequential Equilibria of Multi-Stage Games with Infinite Sets of Types and Actions


by Roger B. Myerson and Philip J. Reny*
Draft notes, October 2011
http://home.uchicago.edu/~preny/papers/bigseqm.pdf

Abstract: We formulate a definition of basic sequential equilibrium for multi-stage games with infinite type sets and infinite action sets, and we prove its general existence. We then explore several difficulties of this basic concept and, in light of them, propose the more restrictive definition of essential sequential equilibrium while maintaining general existence.

* Department of Economics, University of Chicago

Goal: formulate a definition of sequential equilibrium for multi-stage games with infinite type sets and infinite action sets, and prove general existence.

Sequential equilibria were defined for finite games by Kreps-Wilson 1982, but rigorously defined extensions to infinite games have been lacking. Various formulations of "perfect Bayesian equilibrium" (defined for finite games in Fudenberg-Tirole 1991) have been used for infinite games, with no general existence result. Harris-Stinchcombe-Zame 2000 explored definitions based on nonstandard analysis.

It is well understood that sequential equilibria of an infinite game can be defined by taking limits of sequential equilibria of finite games that approximate it. The problem is to define which finite games are good approximations. It is easy to define sequences of finite games that seem to converge to the infinite game (in some sense) but whose limits of equilibria seem wrong. We must try to define a class of finite approximations that yields limit-equilibria which include reasonable equilibria and exclude unreasonable ones.

Here we present the best definitions that we have been able to find, but of course this depends on intuitive judgments about what is "reasonable". Others should explore alternative definitions.

Dynamic multi-stage games Γ = (Ω, N, K, A, T, p, τ, v)

Ω = {initial states}; N = {players}, a finite set; k ∈ {1,...,K} indexes the periods of the game.

Let L = {(i,k) : i ∈ N, k ∈ {1,...,K}} = {dated players}. We write ik for (i,k).

A_ik = {possible actions for player i at period k}; history dependence through payoffs.
T_ik = {possible informational types for player i at period k}; these sets are disjoint.

Algebras (closed under finite unions and complements) of measurable subsets are specified for Ω and for each T_ik, including all one-point sets as measurable. If Ω is a finite set, then all subsets of Ω and of the T_ik are measurable. There is a finitely additive probability measure p on the measurable subsets of Ω.

A = ×_{h≤K} ×_{i∈N} A_ih = {possible sequences of actions in the whole game}.

The subscript <k denotes the projection onto periods before k. For example, A_<k = ×_{h<k} ×_{i∈N} A_ih = {possible action sequences before period k} (A_<1 = {∅}), and for a ∈ A, a_<k = (a_ih)_{h<k, i∈N} is the partial sequence of actions before period k.

Any player i's information at any period k is specified by a type function τ_ik : Ω × A_<k → T_ik such that, ∀a ∈ A, τ_ik(·, a_<k) is a measurable function of ω.

Assume perfect recall: ∀ik ∈ L, ∀m < k, there exists ρ_ikm : T_ik → T_im × A_im such that ρ_ikm(τ_ik(ω, a_<k)) = (τ_im(ω, a_<m), a_im), ∀ω ∈ Ω, ∀a ∈ A, and {t_ik : ρ_ikm(t_ik) ∈ R_im × {a_im}} is measurable in T_ik, ∀ measurable R_im ⊆ T_im, ∀a_im ∈ A_im.

Let T_ik(a_i,<k) = ∩_{m<k} {t_ik ∈ T_ik : ρ_ikm(t_ik) ∈ T_im × {a_im}} be the measurable set of all possible types of player i at date k given that i's previous actions are a_i,<k. (T_i1(a_i,<1) = T_i1.)

Each player i has a bounded utility function v_i : Ω × A → ℝ such that v_i(·, a) is a measurable function of ω, ∀a ∈ A. The bound on |v_i(ω, a)| is uniform over (i, ω, a).
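
To make the abstract tuple concrete, here is a minimal sketch (our own illustration, not part of the paper) of how the primitives Γ = (Ω, N, K, A, T, p, τ, v) might be stored for a small finite example; the class and field names are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Dict, Hashable, List, Tuple

    DatedPlayer = Tuple[str, int]   # ik = (player i, period k)

    @dataclass
    class MultiStageGame:
        states: List[Hashable]                      # Omega (finite here)
        players: List[str]                          # N
        periods: int                                # K
        actions: Dict[DatedPlayer, List[Hashable]]  # A_ik
        prior: Dict[Hashable, float]                # p, here just a finite pmf on Omega
        type_fn: Dict[DatedPlayer, Callable]        # tau_ik(omega, a_before_k) -> T_ik
        payoff: Dict[str, Callable]                 # v_i(omega, a) -> bounded real

    # One player, one period: the player observes the state and tries to guess it.
    guess_game = MultiStageGame(
        states=["H", "T"],
        players=["1"],
        periods=1,
        actions={("1", 1): ["H", "T"]},
        prior={"H": 0.5, "T": 0.5},
        type_fn={("1", 1): lambda omega, a_before: omega},   # t_11 = omega (perfect observation)
        payoff={"1": lambda omega, a: 1.0 if a[("1", 1)] == omega else 0.0},
    )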

[Problems of finite-support mixtures in approximating games]

Example (Kuhn). Consider a zero-sum game in which player 1 chooses a number a_1 in [0,1], and player 2 chooses a continuous function f from [0,1] into itself whose Lebesgue integral must be 1/2. Player 1's payoff is f(a_1). Player 1 can guarantee 1/2 by choosing a_1 uniformly from [0,1], and player 2 can guarantee 1/2 by choosing f(x) = 1/2 for all x. But in any finite approximation in which player 2 can choose a function that is zero at each of player 1's finitely many available actions, player 1's equilibrium payoff is 0.

One solution is to replace ordinary action sets with mixed action sets. Algebras (closed under finite unions and complements) of measurable subsets are specified also for each A_ik, including all one-point sets as measurable. Each τ_ik(ω, a_<k) and each v_i(ω, a) is assumed jointly measurable in (ω, a).

Let Ω̃ = Ω × (×_{ik∈L} [0,1]). In the state ω̃ = (ω, (ω̃_ik)_{ik∈L}), nature draws ω from the given p and, for each ik, draws ω̃_ik ∈ [0,1] independently from Lebesgue measure. This defines a new distribution p̃ on Ω̃. No player observes any of the new coordinates ω̃_ik.

Let Ã_ik = {measurable maps ã_ik : [0,1] → A_ik} be player ik's set of mixed actions. Then ṽ_i(ω̃, ã) = v_i(ω, ã(ω̃)) and τ̃_ik(ω̃, ã_<k) = (τ_ik(ω, ã(ω̃)_<k), ã_i,<k) are measurable in ω̃ for each profile of mixed actions ã, where ã(ω̃) = (ã_jh(ω̃_jh))_{jh∈L} is the realized action profile.

All our analysis could be done for this model with Ã, Ω̃, p̃ instead of A, Ω, p.
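
The failure in the Kuhn example can be checked numerically. The sketch below (ours; the grid, the notch width delta, and the scaling are illustrative choices) builds, for an arbitrary finite action grid of player 1, a continuous f with values in [0,1], integral approximately 1/2, and f = 0 at every grid point, and contrasts player 1's payoff at the grid with his payoff from the uniform mixture over [0,1].

    import numpy as np

    grid = np.linspace(0.0, 1.0, 11)      # player 1's finite action set in the approximation
    delta = 0.01                          # half-width of the notches cut around each grid point

    def notch(x):
        # continuous, equal to 0 at every grid point and to 1 away from the grid
        d = np.min(np.abs(np.subtract.outer(x, grid)), axis=-1)
        return np.minimum(1.0, d / delta)

    xs = np.linspace(0.0, 1.0, 100001)
    area = notch(xs).mean()               # integral of the notch function, close to 1

    def f(x):
        return notch(x) * (0.5 / area)    # rescaled so the integral is (approximately) 1/2

    rng = np.random.default_rng(0)
    print(f(xs).mean())                   # ~0.5, so f satisfies player 2's constraint
    print(f(grid).max())                  # 0.0: player 1's payoff is 0 at every available action
    print(f(rng.uniform(size=10**6)).mean())  # ~0.5: the uniform mixture still earns about 1/2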

Problems of spurious signaling in approximating games

Example. Nature chooses ω ∈ {-1, 1}, with probability ½ each. Player 1 observes t_1 = ω and chooses a_1 ∈ [-1,1]. Player 2 observes t_2 = ω·a_1 and chooses a_2 ∈ {-1,1}. Payoffs (to players 1 and 2) are as follows.

              a_2 = -1    a_2 = 1
    ω = -1     (0, 2)      (1, 0)
    ω = 1      (0, 0)      (1, 1)

No observation t_2 ∈ [-1,1] should lead player 2 to choose a_2 = 1, since player 1 always prefers a_2 = 1 and the set of signals player 1 can send does not depend on ω. But any approximation where 1's finitely many actions are, e.g., all rational when t_1 = 1 and all irrational when t_1 = -1 reveals ω, so a_2 = 1 when t_2 is any rational number.

To avoid such spurious signaling, the finite set of actions available to a player should not depend on his informational type. But now consider any finite approximation in which player 1's finite action set does not depend on his type and in which the action a_1 = x > 0 is available to player 1 but the action a_1 = -x is not. Then t_2 = x implies ω = 1, leading player 2 to choose a_2 = 1.

To avoid also the latter spurious signaling, we should limit player 2's ability to distinguish her types and allow player 1 any finite subset of his actions. So, we will coarsen the players' information by finitely partitioning each of their type spaces T_ik, and we will expand the players' finite action sets to fill in the action spaces A_ik faster than the type-space partitions become arbitrarily fine.
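
A quick arithmetic check (ours) of the claim that no observation should persuade player 2 to play a_2 = 1: writing q for her belief Pr(ω = 1 | t_2), the payoff table gives a_2 = 1 the expected value q and a_2 = -1 the expected value 2(1 - q), so a_2 = 1 can be a best reply only when q ≥ 2/3, whereas uninformative signaling keeps q at the prior 1/2.

    def best_reply(q):
        # player 2's expected payoffs from the table, given belief q = Pr(omega = 1 | t_2)
        u_minus = 2.0 * (1.0 - q)   # a_2 = -1
        u_plus = q                  # a_2 = +1
        return 1 if u_plus > u_minus else -1

    for q in (0.5, 0.6, 2/3, 0.7, 1.0):
        print(q, best_reply(q))     # -1 at and below q = 2/3, +1 only above it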

Problems of spurious knowledge in approximating games

Some approximations of the state space can change the information structure and give players information that they do not have in the original game.

Example. Ω includes two independent uniform [0,1] random variables: the first (1's cost to sell some object) is observed only by player 1, the second (2's value of buying this object) is observed only by player 2. For any integer m > 0, we might finitely approximate this game by one in which the state space includes 10^(2m) equally likely pairs of numbers: the first number, observed by 1, ranges over the 10^m m-digit decimals in [0,1]; the second, observed by 2, is a 2m-digit decimal in which the first m digits range over all m-digit decimals but the last m digits repeat the first number in the pair. As m → ∞, these pairs uniformly fill the unit square in the state space, but they represent games in which player 2 knows 1's type.

With finite partitions of the T_ik we can keep the original state space Ω and probability distribution p in all finite approximations, avoiding the above information distortion. By partitioning each T_ik separately, we avoid giving spurious knowledge to any player.

In some games, the ability to choose some special action in A_ik or the ability to distinguish some special subsets of T_ik may be particularly important. So we consider convergence of a net of approximations (not just a sequence) that are indexed on all finite subsets of the A_ik and all finite partitions of the T_ik. So, any finite collection of actions and any finite partition of types are available when considering the properties of the limits.
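
The sketch below (ours, with m = 3 chosen for illustration) makes the distortion explicit: in the approximating state space, player 2's 2m-digit observation repeats player 1's m-digit number in its last m digits, so player 2 can decode player 1's private information exactly.

    import random

    m = 3
    random.seed(1)
    x = random.randrange(10**m) / 10**m                  # player 1's observation (an m-digit decimal)
    free = random.randrange(10**m)                       # the freely varying first m digits for player 2
    y = (free * 10**m + round(x * 10**m)) / 10**(2 * m)  # player 2's observation (a 2m-digit decimal)

    decoded = (round(y * 10**(2 * m)) % 10**m) / 10**m   # player 2 recovers player 1's number
    print(x, y, decoded, decoded == x)                   # ..., True: spurious knowledge of 1's type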

Observable events and action approximations

For any a ∈ A and any measurable R_ik ⊆ T_ik, let P(R_ik | a) = p({ω : τ_ik(ω, a_<k) ∈ R_ik}). (For k = 1, we could write P(R_i1) for P(R_i1 | a), ignoring the trivial a_<1 = ∅.)

The set of observable events for i at k that can have positive probability is
Q_ik = {R_ik ⊆ T_ik : R_ik is measurable and ∃a ∈ A such that P(R_ik | a) > 0}.
Let Q = ∪_{ik∈L} Q_ik (a disjoint union) denote the set of all events that can be observed with positive probability by some dated player.

We extend this notation to describe observable events that can have positive probability when players are restricted to actions in some subsets of the A_ik.

An action approximation is any C = ×_{ik∈L} C_ik such that each C_ik is a nonempty finite subset of A_ik, and so C ⊆ A.

For any action approximation C ⊆ A, let Q_ik(C) = {R_ik ⊆ T_ik : R_ik is measurable and ∃c ∈ C such that P(R_ik | c) > 0}. Let Q(C) = ∪_{ik∈L} Q_ik(C) denote the set of all events that can be observed with positive probability by some dated player when all players use actions in C.

Q is the union of all Q(C) over all action approximations C.

Action approximations are partially ordered by inclusion. If C ⊇ C° then C is a better approximation than C° to the true action sets A.

Fact. If C ⊇ C° then Q(C) ⊇ Q(C°).

Finite approximations of the game

An information approximation is any Π = ×_{ik∈L} Π_ik such that each Π_ik is a finite partition of T_ik into measurable subsets. (So the elements of Π_ik are disjoint measurable sets with union T_ik.)

Information approximations are partially ordered by the fineness of their partitions. Say that Π = ×_{ik∈L} Π_ik is finer than Π° = ×_{ik∈L} Π°_ik if Π_ik is a finer partition of T_ik than Π°_ik, ∀ik ∈ L. (If Π is finer than Π°, then Π is a better approximation than Π° to the true type sets T.)

Recall that T_ik(a_i,<k) is the measurable set of all possible types of player i at date k given that i's previous actions are a_i,<k.

An information approximation Π and an action approximation C together define a finite approximation (Π,C) of the game Γ. After any history (ω, c_<k) ∈ Ω × C_<k in the approximating game (Π,C), each player ik is informed of the π_ik ∈ Π_ik containing his true type τ_ik(ω, c_<k), and is also informed that his true type is in T_ik(c_i,<k). The latter ensures, by perfect recall in Γ, that action-recall is satisfied in (Π,C) even when we expand C to fill in the action space A for any fixed Π.

Let S_ik = {π_ik ∩ T_ik(c_i,<k) : π_ik ∈ Π_ik, c ∈ C} be the finite set of possible types s_ik ⊆ T_ik of dated player ik in the finite approximation (Π,C).

Fix Π. The approximation (Π,C) will satisfy perfect recall ∀C if:
∀ik ∈ L, ∀π_ik ∈ Π_ik, ∀m < k, ∃π_im ∈ Π_im s.t.
{(ω,a) ∈ Ω × A : τ_ik(ω, a_<k) ∈ π_ik} ⊆ {(ω,a) ∈ Ω × A : τ_im(ω, a_<m) ∈ π_im}. ...(*)

Let F denote the set of information approximations Π satisfying the inclusion (*).

Fact. For any information approximation Π°, there is a finer information approximation Π such that Π ∈ F, and so (Π,C) satisfies perfect recall ∀C.

Strategy profiles for finite approximations

Here, for Π ∈ F, let (Π,C) be a given finite approximation of the game.

A strategy profile for the finite approximation (Π,C) is any σ = (σ_ik)_{ik∈L} s.t. each σ_ik : S_ik → Δ(C_ik). So σ_ik(c_ik | s_ik) ≥ 0, ∀c_ik ∈ C_ik, and Σ_{c_ik∈C_ik} σ_ik(c_ik | s_ik) = 1, ∀s_ik ∈ S_ik.

Let [t_ik] be the element of Π_ik containing t_ik ∈ T_ik.

∀ measurable Z ⊆ Ω and ∀c ∈ C, let
P(Z, c | σ) = ∫_Z ∏_{ik∈L} σ_ik(c_ik | [τ_ik(ω, c_<k)] ∩ T_ik(c_i,<k)) p(dω).
For any measurable R_ik ⊆ T_ik, let P(R_ik | σ) = Σ_{c∈C} P({ω : τ_ik(ω, c_<k) ∈ R_ik}, c | σ).

A totally mixed strategy profile σ for (Π,C) has σ_ik(c_ik | s_ik) > 0, ∀c_ik ∈ C_ik, ∀s_ik ∈ S_ik. Any totally mixed σ for (Π,C) yields P(R_ik | σ) > 0, ∀R_ik ∈ Q_ik(C), ∀ik ∈ L.

With P(R_ik | σ) > 0, let P(Z, c | R_ik, σ) = P(Z ∩ {ω : τ_ik(ω, c_<k) ∈ R_ik}, c | σ) / P(R_ik | σ).

We can extend this probability function to define P(Y | R_ik, σ) = Σ_{c∈C} P(Y(c), c | R_ik, σ) for any Y ⊆ Ω × A such that, for each c ∈ C, Y(c) = {ω : (ω, c) ∈ Y} is a measurable subset of Ω.

For all such Y and c_ik ∈ C_ik, define P(Y | R_ik, c_ik, σ) = P(Y | R_ik, (c_ik, σ_-ik)).

Note. Changing the action of i at k does not change the probability of i's types at k, so P(R_ik | c_ik, σ_-ik) = P(R_ik | σ) > 0.

Approximate sequential equilibria

Here, for Π ∈ F, let (Π,C) be a given finite approximation of the game, and let σ be a totally mixed strategy profile for (Π,C). With the probability function P defined above, we can define
V_i(σ | R_ik) = Σ_{c∈C} ∫_Ω v_i(ω, c) P(dω, c | R_ik, σ), ∀R_ik ∈ Q_ik(C).

For any ik ∈ L and any c_ik ∈ C_ik, let (c_ik, σ_-ik) denote the strategy profile that differs from σ only in that i chooses action c_ik at period k with probability 1. Changing the action of i at k does not change the probability of any of i's types at k, so P(R_ik | c_ik, σ_-ik) = P(R_ik | σ), ∀R_ik ∈ Q_ik(C). So we can similarly define V_i(c_ik, σ_-ik | R_ik) to be i's payoff from choosing action c_ik at period k, given R_ik, if the others apply the strategies σ.

For ε > 0, σ is an ε-approximate sequential equilibrium for the finite approximation (Π,C) iff σ is a totally mixed strategy profile for (Π,C) and
V_i(c_ik, σ_-ik | s_ik) ≤ V_i(σ | s_ik) + ε, ∀ik ∈ L, ∀s_ik ∈ S_ik ∩ Q_ik(C), ∀c_ik ∈ C_ik.

Facts. Any finite approximation has an ε-approximate sequential equilibrium, for every ε > 0. In finite games with perfect recall, a strategy profile is part of a sequential equilibrium if and only if it is the limit as ε → 0 of a sequence of ε-approximate sequential equilibria.
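
A minimal sketch (ours, with hypothetical data structures) of checking the ε-approximate sequential equilibrium inequalities once the conditional payoffs have been computed: V_eq[(ik, s)] stands for V_i(σ | s_ik) and V_dev[(ik, s)][c] for V_i(c_ik, σ_-ik | s_ik).

    def is_eps_approx_seq_eq(V_eq, V_dev, eps):
        # True iff no dated player can gain more than eps, at any positive-probability
        # approximate type s_ik, by switching to some available action c_ik.
        return all(V_dev[key][c] <= V_eq[key] + eps
                   for key in V_eq
                   for c in V_dev[key])

    # Toy usage: one dated player ("1", 1) with two approximate types and actions {"a", "b"}.
    V_eq = {(("1", 1), "s0"): 1.0, (("1", 1), "s1"): 0.4}
    V_dev = {(("1", 1), "s0"): {"a": 1.0, "b": 0.9},
             (("1", 1), "s1"): {"a": 0.4, "b": 0.45}}
    print(is_eps_approx_seq_eq(V_eq, V_dev, eps=0.1))    # True
    print(is_eps_approx_seq_eq(V_eq, V_dev, eps=0.01))   # False: the gain 0.05 at s1 exceeds 0.01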

Assessments

Let 𝒴 denote the set of all outcome events, i.e., all Y ⊆ Ω × A such that {ω : (ω, a) ∈ Y} is a measurable subset of Ω, ∀a ∈ A.

For any action approximation C, let 𝒲(C) = Q(C) ∪ (∪_{ik∈L} Q_ik(C) × C_ik). Let 𝒲 = Q ∪ (∪_{ik∈L} Q_ik × A_ik) (the union of all such 𝒲(C)).

An assessment is a vector of conditional probabilities μ = (μ(Y | W))_{Y∈𝒴, W∈𝒲}, with each μ(Y | W) ∈ [0,1]. So μ lies in the compact (product-topology) set [0,1]^(𝒴×𝒲), i.e., μ(Y | Q_ik) ∈ [0,1] and μ(Y | Q_ik, a_ik) ∈ [0,1], ∀ik ∈ L, ∀Y ∈ 𝒴, ∀Q_ik ∈ Q_ik, ∀a_ik ∈ A_ik.

Basic sequential equilibria

An assessment μ is a basic sequential equilibrium iff: for every ε > 0, for every finite subset Φ of 𝒴 × 𝒲, and for every information approximation Π°, there is a finer information approximation Π ∈ F such that, for every action approximation C°, there is an action approximation C ⊇ C° such that Φ ⊆ 𝒴 × 𝒲(C) and (Π,C) has some ε-approximate sequential equilibrium σ with |μ(Y | W) - P(Y | W, σ)| ≤ ε, ∀(Y,W) ∈ Φ.

Theorem. The set of basic sequential equilibria is nonempty and is a closed subset of [0,1]^(𝒴×𝒲) in the product topology.

The proof is based on Tychonoff's theorem on the compactness of products of compact sets (Kelley, 1955, p. 143).

Defining basic sequential equilibria with nets

A partially ordered set is directed if for any two members there is a member at least as large as both. A net of real numbers is a collection of real numbers indexed by a directed set (D, ≥). A net of real numbers {x_δ}_{δ∈D} converges to the limit x° if for every ε > 0 there is an index δ_ε s.t. x_δ is within ε of x° whenever δ ≥ δ_ε. Write lim_{δ∈D} x_δ = x°. Also, define liminf_{δ∈D} x_δ = lim_{δ∈D} (inf_{δ'≥δ} x_δ').

The set of action approximations, partially ordered by set inclusion, and the set of information approximations, partially ordered by the fineness of the type-space partitions, are directed sets.

Then μ is a basic sequential equilibrium iff for every finite subset Φ of 𝒴 × 𝒲 there is a mapping σ[·] such that
lim_{ε→0+} liminf_{Π∈F} liminf_{C} max_{(Y,W)∈Φ} |μ(Y | W) - P(Y | W, σ[ε,Π,C])| = 0,
where each σ[ε,Π,C] is some ε-approximate sequential equilibrium of the finite approximation (Π,C).

Notes. 1. All of the finitely many conditioning events determined by Φ are given positive probability by σ[ε,Π,C] for all large enough C. 2. It would be equivalent to reverse the order of Φ and σ[·] in the net definition, i.e., saying instead: μ is a basic sequential equilibrium iff there is a mapping σ[·] such that for every finite subset Φ of 𝒴 × 𝒲, the iterated limit is zero.

Elementary properties of basic sequential equilibria

For any dated player ik and any observable event Q_ik ∈ Q_ik, let I(Q_ik) = {(ω,a) ∈ Ω × A : τ_ik(ω, a_<k) ∈ Q_ik}.

Let μ be a basic sequential equilibrium. Then μ has the following general properties, for any outcome events Y and Z and any observable events Q_ik and Q_jm:

μ(Y | Q_ik) ∈ [0,1], μ(Ω × A | Q_ik) = 1, μ(∅ | Q_ik) = 0 (probabilities);
if Y ∩ Z = ∅ then μ(Y ∪ Z | Q_ik) = μ(Y | Q_ik) + μ(Z | Q_ik) (finite additivity);
μ(Y | Q_ik) = μ(Y ∩ I(Q_ik) | Q_ik) (conditional support);
μ(Y ∩ I(Q_jm) | Q_ik) μ(I(Q_ik) | Q_jm) = μ(Y ∩ I(Q_ik) | Q_jm) μ(I(Q_jm) | Q_ik) (Bayes consistency).

Bayes consistency implies that μ(Y | T_ik) = μ(Y | T_jm) for all ik and jm in L. So the unconditional distribution on outcomes Ω × A for a basic sequential equilibrium μ can be defined by μ(Y) = μ(Y | T_ik), ∀Y ∈ 𝒴, ∀ik ∈ L. The unconditional marginal distribution of μ on Ω is the given prior p: μ(Z × A) = p(Z), for any measurable Z ⊆ Ω.

Fact (sequential rationality). If μ is a basic sequential equilibrium then, ∀ik ∈ L, ∀Q_ik ∈ Q_ik, ∀a_ik ∈ A_ik,
∫ v_i(ω, a) μ(dω, da | Q_ik) ≥ ∫ v_i(ω, a) μ(dω, da | Q_ik, a_ik).
The left-hand-side integral is player i's equilibrium payoff conditional on Q_ik. The right-hand-side integral is i's payoff from deviating at date k to a_ik, conditional on Q_ik.
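
The step from Bayes consistency to a single unconditional distribution can be checked directly (our verification, in the notation above). Note first that T_ik ∈ Q_ik, since P(T_ik | a) = p(Ω) = 1 for every a. Now take Q_ik = T_ik and Q_jm = T_jm. Since every type of ik lies in T_ik, I(T_ik) = {(ω,a) : τ_ik(ω, a_<k) ∈ T_ik} = Ω × A, so μ(I(T_ik) | T_jm) = μ(I(T_jm) | T_ik) = 1 and Y ∩ I(T_ik) = Y ∩ I(T_jm) = Y. Bayes consistency then reduces to μ(Y | T_ik) · 1 = μ(Y | T_jm) · 1, i.e., μ(Y | T_ik) = μ(Y | T_jm).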

Finite additivity of solutions

Example: Consider a game where the winner is the player who chooses the smallest strictly positive number. In any finite approximation, let each player choose the smallest strictly positive number that is available to him. In the limit, we get μ(0 < a_1 ≤ x) = 1, ∀x > 0, but μ(a_1 = 0) = 0. The probability measure is not countably additive, only finitely additive, and this finite additivity lets us represent an infinitesimal positive action a_1. Expected utilities are well defined with finite additivity, as utility is bounded.

Suppose the players are {1,2}, and player 2 observes a_1 before choosing a_2. For any possible value of a_1, 2 would always choose a_2 < a_1 after observing this a_1 in any sufficiently large finite approximation. So in the limit we get: ∀x > 0, μ(a_2 < a_1 | a_1 = x) = 1 and μ(0 < a_1 ≤ x) = 1. But multiple equilibria can have any prior P(2 wins) = μ(a_2 < a_1) in {0,1}.

By considering a subnet of finite approximations in which 1 always has smaller actions available than 2, we can get an equilibrium with μ(a_2 < a_1) = 0 even though μ(a_2 < a_1 | a_1 = x) = 1, ∀x > 0, and μ(a_1 > 0) = 1. In this subnet, 1 always chooses an a_1 where 2's strategy has not converged, and so the outcome cannot be derived by backward induction from 2's limiting strategy. Strategic actions avoid such problems, as would considering subnets where later players' action sets grow faster than earlier players' (fast inner limits for the later-period C_ik).
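
A small numerical illustration (ours) of this failure of countable additivity along the net of approximations: if the n'th approximation offers the grid {1/n, 2/n, ..., 1} and each player picks its smallest element, then for every fixed x > 0 the event {0 < a_1 ≤ x} eventually has probability 1, while {a_1 = 0} has probability 0 in every approximation.

    def prob_a1_at_most(x, n):
        a1 = 1.0 / n                       # player 1's choice in the n'th finite approximation
        return 1.0 if 0 < a1 <= x else 0.0

    for x in (0.5, 0.01, 0.0001):
        print(x, [prob_a1_at_most(x, n) for n in (2, 10, 10**4, 10**6)])
    # Each row ends in 1.0, so the limit assigns probability 1 to every (0, x], x > 0,
    # yet probability 0 to {a_1 = 0}: the limiting measure is only finitely additive.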

Strategic entanglement (Milgrom-Weber 1985)

Example: ω is uniform on [0,1], observed by both players 1 and 2, who then simultaneously play a battle-of-the-sexes game where A_1 = A_2 = {1,2}, v_i(ω, a_1, a_2) = 2 if a_1 = a_2 = i, v_i = 1 if a_1 = a_2 ≠ i, and v_i = 0 if a_1 ≠ a_2.

Consider finite-approximation equilibria such that, for a large integer m, the players play a_1 = a_2 = 1 if the m'th digit of ω is odd and a_1 = a_2 = 2 if the m'th digit of ω is even. As m → ∞, these equilibria converge to a limit in which the players randomize between the action profiles (1,1) and (2,2), each with probability 1/2, independently of ω.

In the limit, the players' actions are not independent given ω in any positive-length interval. They are correlated by commonly observed infinitesimal details of ω.

Given ω in any positive-length interval, player 1's expected payoff is 1.5 in equilibrium, but 1's conditional payoff from deviating to a_1 = 1 is 0.5(2) + 0.5(0) = 1, and 1's conditional payoff from deviating to a_1 = 2 is 0.5(0) + 0.5(1) = 0.5. Thus, the sequential-rationality inequalities can be strict for all actions.

To tighten the sequential-rationality lower bounds for conditional expected payoffs in equilibrium, we could consider also deviations of the form "deviate to action c_ik when the equilibrium strategy would select an action in the set B_ik", for any c_ik ∈ A_ik and any B_ik ⊆ A_ik.

In the next example, strategic entanglement is unavoidable, occurring in the only basic sequential equilibrium.
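
A quick check (ours) of the conditional payoff numbers just cited, under the limiting entangled distribution that puts probability 1/2 on each of the profiles (1,1) and (2,2):

    def v1(a1, a2):
        if a1 != a2:
            return 0                  # mismatch
        return 2 if a1 == 1 else 1    # both play 1 -> player 1 gets 2; both play 2 -> he gets 1

    entangled = [((1, 1), 0.5), ((2, 2), 0.5)]
    eq_payoff = sum(p * v1(a1, a2) for (a1, a2), p in entangled)
    dev_to_1 = sum(p * v1(1, a2) for (_, a2), p in entangled)
    dev_to_2 = sum(p * v1(2, a2) for (_, a2), p in entangled)
    print(eq_payoff, dev_to_1, dev_to_2)   # 1.5 1.0 0.5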

[(Unavoidable) Strategic entanglement] (Harris-Reny-Robson 1995)

Example:
Date 1: Player 1 chooses a_1 from [-1,1], and player 2 chooses from {L,R}.
Date 2: Players 3 and 4 observe the date-1 choices and each choose from {L,R}.

For i = 3,4, player i's payoff is -a_1 if i chooses L and a_1 if i chooses R.

Player 2's payoff depends on whether she matches 3's choice. If 2 chooses L then she gets 1 if player 3 chooses L but -1 if 3 chooses R; and if 2 chooses R then she gets 2 if player 3 chooses R but -2 if 3 chooses L.

Player 1's payoff is the sum of three terms: (first term) if 2 and 3 match he gets -|a_1|, if they mismatch he gets |a_1|; plus (second term) if 3 and 4 match he gets 0, if they mismatch he gets -10; plus (third term) he gets -a_1².

Approximations in which 1's action set is {-1, ..., -2/m, -1/m, 1/m, 2/m, ..., 1} have a unique subgame perfect (hence sequential) equilibrium, in which player 1 chooses ±1/m with probability ½ each, player 2 chooses L and R each with probability ½, and players i = 3,4 both choose L if a_1 = -1/m and both choose R if a_1 = 1/m.

Player 3's and player 4's strategies are entangled in the limit. The limit of every approximation produces (the same) strategic entanglement.
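
A numerical check (ours, using the payoffs as reconstructed above, so it inherits that reading) of the equilibrium described for the grid approximation: with player 2 mixing ½-½ and players 3 and 4 following the sign of a_1, player 1's payoff reduces to -a_1², which the grid maximizes at ±1/m; and player 1's ½-½ mix over ±1/m leaves player 2 exactly indifferent between L and R.

    m = 100
    grid = [k / m for k in range(-m, m + 1) if k != 0]   # player 1's actions {-1,...,-1/m,1/m,...,1}
    p_L = 0.5                                            # player 2's probability of choosing L

    def u1(a1):
        s3 = "L" if a1 < 0 else "R"                      # players 3 and 4 follow the sign of a_1
        match23 = p_L if s3 == "L" else 1 - p_L          # probability that 2 matches 3
        first = match23 * (-abs(a1)) + (1 - match23) * abs(a1)
        return first + 0.0 - a1 ** 2                     # second term is 0 because 3 and 4 match

    best = max(grid, key=u1)
    print(abs(best) == 1 / m)                            # True: |a_1| = 1/m is optimal for player 1

    q = 0.5                                              # Pr(3 plays L) induced by 1's mix over +-1/m
    print(2 * q - 1, 2 * (1 - q) - 2 * q)                # 0.0 0.0: player 2 indifferent between L and R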

[(Spurious) Strategic entanglement] (Harris-Stinchcombe-Zame 2000)

Example: ω is uniform on [0,1], payoff-irrelevant, and observed by both players 1 and 2, who then simultaneously play the following 2x2 game.

           L        R
    L     1,1      3,2
    R     2,3      0,0

Consider finite approximations in which, for large integers m, each player observes the first m-1 digits of the ternary expansion of ω. Additionally, player 1 observes whether the m'th digit is 1 or not, and player 2 observes whether the m'th digit is 2 or not. Consider equilibria in which each player i chooses R if the m'th digit of the ternary expansion of ω is i and chooses L otherwise.

As m → ∞, these converge to a correlated equilibrium of the 2x2 game in which each cell but (R,R) obtains with probability 1/3, independently of ω. Not in the convex hull of Nash equilibria!

The type of strategic entanglement generated here is impossible to generate in some approximations; it depends on the fine details, unlike the entanglement in the previous two examples.

Note. For this example, our more restrictive definition below of essential sequential equilibrium eliminates the strategic entanglement described here.
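
A short simulation (ours; m and the sample size are arbitrary) of the digit construction above: the m'th ternary digit of a uniform ω is 0, 1, or 2 with probability 1/3 each, so the induced joint play puts probability about 1/3 on each of (L,L), (R,L), (L,R) and zero on (R,R).

    import random
    from collections import Counter

    def mth_ternary_digit(omega, m):
        return int(omega * 3**m) % 3

    random.seed(0)
    m, n = 8, 100000
    counts = Counter()
    for _ in range(n):
        d = mth_ternary_digit(random.random(), m)
        a1 = "R" if d == 1 else "L"    # player 1 plays R iff the m'th digit is 1
        a2 = "R" if d == 2 else "L"    # player 2 plays R iff the m'th digit is 2
        counts[(a1, a2)] += 1
    print({profile: round(c / n, 3) for profile, c in counts.items()})
    # roughly {('L','L'): 0.33, ('R','L'): 0.33, ('L','R'): 0.33}; ('R','R') never occurs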

Problems of spurious concealment in approximating games

Example. The state ω is a 0-or-1 random variable, which is observed by player 1 as his type t_1. Then player 1 chooses a number a_1 in [0,1], which is subsequently observed by players 2 and 3.

Consider a finite approximation in which 1 observes t_1 and can then choose any m-digit decimal number in [0,1]. Suppose that player 1 wants to share information with 2 but conceal it from 3. Such concealed signaling would be possible in a finite approximation where 2 can observe 1's action exactly while 3 can observe only its first m-1 digits. But in the real game, any message that 1 sends to 2 is also observed by 3, and it should not be possible for 1 and 2 to tunnel information past 3. The concealed-signaling equilibrium here depends on the fine details of the approximation.

Possible solution: Require that type sets and action sets which are "the same" must have the same finite approximations. (How to recognize sameness?)

Note. When the above discontinuous game arises as the first-stage reduced form of a two-stage game with continuous payoff functions and continuous type functions (a technique due to HSZ), our more restrictive definition below of essential sequential equilibrium eliminates any spurious concealment.

[Problems of spuriously exclusive coordination in approximating games]

Example: Three players choose simultaneously from [0,1]. Player 1 wishes to choose the smallest positive number. Players 2 and 3 wish to match player 1, receiving 1 if they do but 0 otherwise.

It should not be possible for 2, but not 3, to match 1. But consider finite approximations in which the smallest positive action available to player 1 is available to player 2 but not to player 3. The only equilibria have 1 and 2 choosing that special common action and player 3 choosing any (other) action. Hence, there are basic sequential equilibria in which 2 matches 1 but 3 does not. The exclusive-coordination equilibrium here depends on the fine details of the approximation.

Possible solution: Require that type sets and action sets which are "the same" must have the same finite approximations. (How to recognize sameness?)

Note. When the above discontinuous game arises as the first-stage reduced form of a two-stage game with continuous payoff functions and continuous type functions (a technique due to HSZ), our more restrictive definition below of essential sequential equilibrium eliminates any spuriously exclusive coordination.

Robustness to approximations

The problems of spurious strategic entanglement, spurious concealment, and spuriously exclusive coordination arise when limits of approximate equilibria depend on the fine details of the approximation. One might then insist that equilibria be robust to all approximations. But this is not compatible with existence (e.g., two players each wish to choose the smallest strictly positive number). An alternative is to seek as much robustness as possible subject to existence.

For a totally mixed σ in (Π,C), let m(σ) ∈ [0,1]^(𝒴×𝒲) be the assessment whose probabilities agree with P(Y | W, σ) when (Y,W) ∈ 𝒴 × 𝒲(C), and are zero otherwise.

Essential sequential equilibria

The set of essential sequential equilibria is the smallest closed set of assessments containing every closed set of assessments M that is minimal with respect to the following property: for every open set U containing M, there is a mapping σ[·] such that
lim_{ε→0+} liminf_{Π∈F} liminf_{C} I_U(m(σ[ε,Π,C])) = 1,
where I_U(·) is the indicator function of U, and each σ[ε,Π,C] is some ε-approximate sequential equilibrium of the finite approximation (Π,C).

Theorem. The set of essential sequential equilibria is nonempty and is a closed subset of the set of basic sequential equilibria. (By Zorn's lemma and Tychonoff's theorem, there is such a minimal set M, and it is nonempty.)

Note. It would be equivalent to reverse the order of U and σ[·] in the definition, i.e., saying instead: there exists a mapping σ[·] such that for every open set U containing M, the iterated limit is equal to 1.

Limitations of step-strategy approximations

Example: ω is uniform on [0,1]; player 1 observes ω and chooses a_1 ∈ [0,1]; v_1(ω, a_1) = 1 if a_1 = ω, and v_1(ω, a_1) = 0 if a_1 ≠ ω.

In any finite approximation, player 1's finite action set cannot allow any positive probability of c_1 matching ω exactly, and so 1's expected payoff must be 0. Player 1 would like to use the strategy "choose a_1 = ω," but this strategy can only be approximated by step functions when 1 has finitely many feasible actions. Step functions close to this strategy yield very different expected payoffs because the utility function is discontinuous. If we gave player 1 an action that simply applied this strategy, he would use it! (It may be hard to choose any real number exactly, but easy to say "I choose ω.")

Thus, adding a strategic action that implements a strategy which is feasible in the limit game can significantly change our sequential equilibria. This problem follows from our principle of finitely approximating information and actions separately, which is needed to prevent spurious signaling.

Example (Akerlof): ω is uniform on [0,1]; 1 observes t_1 = ω and chooses a_1 ∈ [0, 1.5]; 2 observes t_2 = a_1 and chooses a_2 ∈ {0,1}; v_1(ω, a_1, 1) = a_1 - ω, v_2(ω, a_1, 1) = 1.5ω - a_1, v_i(ω, a_1, 0) = 0. Given any finite set of strategies for 1, we could always add some strategy b_1 such that ω < b_1(ω) < 1.5ω, and b_1(ω) has probability 1 of being in a range that has probability 0 under the other strategies in the given set.
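
A simulation (ours; the grids and sample size are arbitrary) of the first example above: every step-function approximation of the strategy "choose a_1 = ω" earns payoff essentially 0, while a strategic action that reports ω exactly earns payoff 1.

    import random

    def payoff(omega, a1):
        return 1.0 if a1 == omega else 0.0

    def step_strategy(omega, n):
        return round(omega * n) / n        # nearest point of a grid with step 1/n: a step function of omega

    random.seed(0)
    draws = [random.random() for _ in range(100000)]
    for n in (10, 1000, 10**6):
        print(n, sum(payoff(w, step_strategy(w, n)) for w in draws) / len(draws))   # ~0.0
    print(sum(payoff(w, w) for w in draws) / len(draws))                            # 1.0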

Using strategic actions to solve problems of step-strategy approximations

One way to avoid the problems of step-strategies is to permit the use of some strategic actions. This must be done with care to avoid the problem of spurious signaling.

Algebras (closed under finite unions and complements) of measurable subsets are specified also for each A_ik, including all one-point sets as measurable. If A_ik is a finite set, then all subsets of A_ik can be assumed measurable. For any measurable g : Ω → A, all τ_ik(ω, g(ω)_<k) and v_i(ω, g(ω)) are assumed measurable in ω.

A strategic action for ik ∈ L is a measurable function a*_ik : T_ik → A_ik. The analyst may wish to include a subset of strategic actions as available. Let A*_ik denote a subset of ik's set of strategic actions. Assume that for each a_ik ∈ A_ik, A*_ik contains the constant strategic action taking the value a_ik on T_ik. Thus, the set of intrinsic actions A_ik is a subset of A*_ik.

For every a* ∈ A* and every ω, let α(ω, a*) denote the action profile a ∈ A that is determined when the state is ω and each player plays according to the strategic action profile a*. Note that for each a* ∈ A*, α(ω, a*) is measurable in ω.

To ensure perfect recall, define new type spaces and type maps: T*_ik = T_ik × (×_{h<k} A*_ih), with measurable subsets those whose slices contained in T_ik determined by any a*_<k are measurable, and τ*_ik(ω, a*_<k) = (τ_ik(ω, α(ω, a*)_<k), (a*_ih)_{h<k}).

Extend each v_i(ω, a) from Ω × A to Ω × A* by defining v_i(ω, a*) = v_i(ω, α(ω, a*)). Note that for every ik ∈ L and every a* ∈ A*, τ*_ik(ω, a*_<k) and v_i(ω, a*) are measurable functions of ω.

This defines a dynamic multi-stage game Γ* with perfect recall, and we may therefore employ all of the notation and definitions we previously developed. Q*_ik is the set of observable events for player i at period k that can have positive probability in Γ*. Q* is the (disjoint) union of these Q*_ik over all ik.

Approximations of Γ* are defined, as before, by (Π, C), where C is any finite set of (strategic) action profiles in A* and Π is any information approximation of all the T*_ik.

Perfect recall in finite approximations of Γ*

In a finite approximation of Γ*, perfect recall means that a player ik remembers, looking back to a previous date m < k, the partition element containing his information-type at that date and the strategic action chosen there. If the partition element contains more than one of his information types and his strategic action there is not constant across the projection of those types onto T_im, then he will not recall the action in A_im actually taken at the previous date. Thus, there is perfect recall with respect to strategic actions but not with respect to intrinsic actions. Of course, if the strategic action chosen is a constant function, i.e., equivalent to an intrinsic action, then that action too is recalled.

References:

David Kreps and Robert Wilson, "Sequential Equilibria," Econometrica 50:863-894 (1982).

Drew Fudenberg and Jean Tirole, "Perfect Bayesian Equilibrium and Sequential Equilibrium," Journal of Economic Theory 53:236-260 (1991).

Christopher J. Harris, Maxwell B. Stinchcombe, and William R. Zame, "The Finitistic Theory of Infinite Games," University of Texas working paper. http://www.laits.utexas.edu/~maxwell/finsee4.pdf

Christopher Harris, Philip Reny, and Arthur Robson, "The Existence of Subgame-Perfect Equilibrium in Continuous Games with Almost Perfect Information," Econometrica 63(3):507-544 (1995).

John L. Kelley, General Topology (Springer-Verlag, 1955).

Paul Milgrom and Robert Weber, "Distributional Strategies for Games with Incomplete Information," Mathematics of Operations Research 10:619-632 (1985).