Modeling Unawareness in Arbitrary State Spaces

Jing Li
Duke University and University of Pennsylvania
E-mail: jingli@econ.duke.edu

December 2006

Abstract: Li (2006a) models unawareness by exploiting a product structure of the state space. In this paper, I model interactive unawareness in an arbitrary state space. Unawareness is characterized as a measurability constraint under which players reason about a coarse subjective algebra of events. The model is shown to be equivalent to the product model in Li (2006a), indicating that such a measurability constraint can be captured, without loss of generality, by restrictions on the dimensions of the state space. I also examine the special case of partial unawareness, where players are aware of the relevant uncertainties but unaware of some of their resolutions. I show that in this case players may interpolate their information in a way that generates false knowledge, and that such interpolation effects are best captured by letting players' subjective state spaces be subsets of the objective state space.

Keywords: unawareness, partial unawareness, information, information partition, the state space

JEL Classification: C70, C72, D80, D82, D83

1 Introduction

A person is unaware of an event if and only if he does not know it, does not know that he does not know it, and so on. It has long been recognized that unawareness plays an important role in important economic environments; for example, there are many informal arguments linking unawareness to incomplete contracts. Li (2006a) examines information structures under unawareness in a set-theoretic model along the lines of Aumann (1976), exploiting a product structure of the state space. While this product model is intuitively appealing and easy to interpret, one can imagine situations where the product structure is not the most appropriate. In this paper, I develop a general model of unawareness based on arbitrary state spaces, and show that it is equivalent to the product model. This gives rise to various equivalent characterizations of unawareness, including a special case called partial unawareness.

Consider the following example. Suppose Alice is planning a trip to Florida. The relevant state space consists of three states: sunny (s), raining without a hurricane (r), and raining with a hurricane (h). Suppose Alice is unaware of hurricanes. Then Alice cannot reason about r and h separately: r and h are identical except that in r there is no hurricane while in h there is one, and Alice is unaware of this difference. As a consequence, even if Alice receives a signal indicating that the true state is neither s nor h, she can only recognize the former implication.[1] Furthermore, any event that is true in either r or h but not both must be beyond Alice too, as recognizing it requires her to differentiate these two states.[2] This suggests that unawareness imposes a measurability constraint on information processing. States that differ only in things of which the player is unaware are too fine for her to tell apart, and hence can only enter her reasoning as a whole. Intuitively, such sets of states constitute the subjective states in the player's mind-set.
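The measurability constraint in the Florida example can be sketched in a few lines of Python (a toy illustration; the function and variable names are mine, not the paper's):

```python
# Toy sketch of the measurability constraint in the Florida example.
# States: "s" (sunny), "r" (rain, no hurricane), "h" (rain, hurricane).
# Alice's subjective state space is the partition {{s}, {r, h}}.

def project(signal, subjective_space):
    """Project a factual signal onto a subjective state space:
    keep exactly the partition cells that the signal intersects."""
    return {cell for cell in subjective_space if cell & signal}

alice = {frozenset({"s"}), frozenset({"r", "h"})}

# The signal "neither sunny nor hurricane" is the event {r} ...
result = project({"r"}, alice)
# ... but to Alice it only indicates "not sunny": the single cell {r, h}.
```

States r and h can never be separated by any projected signal, which is exactly the sense in which events distinguishing them are beyond Alice.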
Therefore, one can represent players' subjective state spaces as partitions of the objective state space. For example, Alice's subjective state space is simply {{s}, {r, h}}. Each player can only reason about events in his subjective algebra, the set of events that can be represented as unions of partition elements of his subjective state space. Consequently, players only recognize the projection of their (factual) signal onto their subjective state spaces. For example, being unaware of hurricanes, the signal "neither sunny nor hurricane", i.e. {r}, appears to Alice to indicate only "not sunny", i.e. {r, h}: the partition element (or union of partition elements) intersecting the factual signal.

I show that such a measurability constraint indeed characterizes unawareness: at any state, the knowledge hierarchy under unawareness is obtained from the knowledge hierarchy under full awareness by removing all knowledge concerning events that are not in the player's subjective algebra. This characterization sharply differentiates the case of unawareness from the case of being aware but assigning probability zero: if a person is unaware of an event, then the event is not in his probability space. Thus it is possible for a player to be unaware of events E1 and E2 but assign positive probability to E1 ∪ E2, while if one assigns probability zero to E1 and to E2, then one must also assign probability zero to E1 ∪ E2. I extend the result to the multi-agent environment, and investigate necessary conditions under which unawareness does not affect (interactive) information processing regarding events of which players are aware.

The general model turns out to be equivalent to the product model studied in Li (2006a). Consider again the Florida trip example. Intuitively, there are two relevant questions in this decision problem, whether it is sunny and whether there is a hurricane, and Alice is unaware of the latter. One can consider a product objective state space, say {sunny, raining} × {hurricane, no hurricane}, and let Alice have the subjective state space {sunny, raining}, where the hurricane dimension is missing due to her unawareness. It is easy to see that the product model can always be translated into the general model: each subjective state corresponds to a product set in the objective state space. The content of the equivalence result is the other direction: there is no loss of generality in treating the measurability constraint imposed by unawareness as restrictions on the dimensions of a product space.

In the single-agent environment, the equivalence essentially follows from the observation that any state space can be represented as a set of binary strings. Fix a state space S and consider a set of partitions, denoted by F, with the property that the join of all of them yields the finest partition of the state space. Then for any s ∈ S, there is a unique element in the Cartesian product of all partitions in F such that s is contained in every coordinate of it.

[1] For example, think of the signal as an online weather report that perfectly predicts the weather, but whose content one must query explicitly. Since Alice is unaware of hurricanes and hence never inquires about them, she would not learn that there is no hurricane even if the report says so.

[2] In this sense, Alice is aware of neither r nor h.
For example, consider the product set

{{s}, {r, h}} × {{h}, {s, r}} = {({s}, {h}), ({s}, {s, r}), ({r, h}, {h}), ({r, h}, {s, r})}.

The states s, r, h correspond to ({s}, {s, r}), ({r, h}, {h}), ({r, h}, {s, r}), respectively. In fact, {{s}, {r, h}} × {{h}, {s, r}} is just the set representation of the product space {sunny, raining} × {hurricane, no hurricane}. Notice that the state (sunny, hurricane) has no counterpart in the objective state space: it is a contradictory state imposed by the product structure, in exchange for a more intuitive and simpler way to model unawareness. The equivalence result ensures that adding these auxiliary states introduces no real restrictions.

While the equivalence of single-agent knowledge hierarchies is entirely general, the equivalence of interactive knowledge hierarchies is not. The main issue is that constructing an interactive knowledge hierarchy requires fully specified subjective models. For example, suppose the website hosting the weather report may be attacked by hackers, in which case the weather report is inaccessible; however, Alice is unaware of hackers. Suppose Bob receives no factual information regarding the weather condition. Suppose there is no hacker attack and Alice learns it is sunny. The interactive knowledge "Alice knows Bob knows Alice knows ..." depends on Alice's perception of Bob's perception of her knowledge in all subjective states, including the raining state, which Alice rules out as impossible and hence irrelevant for her own knowledge hierarchy. But whether Alice has access to the weather report in a raining state depends on whether there is a hacker attack, of which Alice is unaware. It seems plausible that the scenario Alice has in mind is the raining state in which she has access to the weather report: after all, she can access the weather report in the current state, and given her unawareness of hackers, there is no uncertainty regarding the availability of the weather report in her mind-set. This suggests associating the subjective models with the projections of the factual signals in those states where the uncertainties of which the player is unaware are resolved as in the true state. Formulating this idea requires an order structure on the state space, which is provided by the natural product order in the product space but is lacking in the general model.[3] Consequently, the equivalence result for the multi-agent model is obtained under stronger assumptions on the information structures. Notice that if the occurrence of events of which the player is unaware does not induce factual signals with different implications about things of which the player is aware, then the problem noted above does not arise.

An implicit assumption here is that all players share a common specification of the uncertainties involved in the environment. This assumption is embedded in the common product space, but not in a common arbitrary state space in the general model: there are many collections of partitions whose join yields the finest partition. For example, in the Florida trip problem, one can also construct a product space using the partitions {{s}, {r, h}} and {{r}, {s, h}}, although the latter does not have a sensible interpretation. Therefore, I fix a collection of meaningful algebras of events, and then require every player's factual signals to be decomposable in terms of these events and independent across different algebras.
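The set representation of the product space above can be checked mechanically; the following sketch (names are mine) maps each objective state to its coordinates and exhibits the contradictory auxiliary state:

```python
from itertools import product

# The two "questions" over S = {s, r, h} from the example:
# whether it is sunny, and whether there is a hurricane.
sunny = [frozenset({"s"}), frozenset({"r", "h"})]
hurricane = [frozenset({"h"}), frozenset({"s", "r"})]

def embed(state, partitions):
    """Map an objective state to the unique tuple of cells containing it."""
    return tuple(next(c for c in p if state in c) for p in partitions)

coords = {st: embed(st, [sunny, hurricane]) for st in "srh"}
# s -> ({s}, {s, r}); r -> ({r, h}, {s, r}); h -> ({r, h}, {h}).

# Exactly one product state has no objective counterpart: ({s}, {h}),
# the contradictory "sunny with a hurricane" state (empty intersection).
auxiliary = [pair for pair in product(sunny, hurricane) if not (pair[0] & pair[1])]
```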
Another caveat in this environment is that a player may be unaware that his opponent can be unaware of an event of which he himself is aware. For example, it may be that Bob is aware of hackers but unaware that Alice is unaware of them. Then in every subjective state Bob has in mind, Alice must either know or not know events such as "there is a hacker attack", leading to false interactive knowledge. To rule out such situations, I explore a condition that says that, for any event of which a player is aware, he also considers the possibility that his opponents may not be aware of it. Under these two additional conditions, unawareness is again characterized as a measurability constraint on the interactive knowledge hierarchy: i's knowledge of j's knowledge is obtained by removing knowledge concerning events that are not in i's subjective algebra, or about which i is not sure whether they are in j's subjective algebra. Moreover, the equivalence result again holds in such situations, linking the measurability constraint to restrictions on the dimensions of subjective state spaces in the multi-agent environment.

Some unawareness situations can be naturally described as being aware of the involved uncertainties while unaware of some possible resolutions. For example, in games, players may be aware of their opponents, and hence reason about their actions, but unaware of some of the actions the opponents can take. I refer to such situations as partial unawareness. Intuitively, in this case the subjective states a player has in mind specify those resolutions of which he is aware, and hence what appear to be objective events in the modeler's eyes are interpolated as objective states from the player's perspective. In this sense, under partial unawareness the subjective state space as the player views it corresponds to a subset of the objective state space rather than a partition, and the subjective algebra of events is a relativization of the objective algebra.[4] Thus, while unawareness in general results in an (objective) knowledge hierarchy over a coarse subjective algebra, partial unawareness results in a (subjective) interpolated knowledge hierarchy over a relative subjective algebra. In particular, since players have a relative subjective algebra, they may interpolate their knowledge incorrectly and "know" events that are not true. In the multi-agent environment, even if players share common awareness of some event, it may amount to different events from the objective standpoint, due to their different negations.[5]

This paper complements Li (2006a) by examining information structures under unawareness in arbitrary state spaces. The characterization of unawareness as a measurability constraint is reminiscent of Savage's (1954) "small worlds" interpretation of the state space. Fagin and Halpern (1988) first study implicit knowledge and explicit knowledge axiomatically, for which this model provides a set-theoretic treatment. Modica and Rustichini (1994, 1999) explore similar ideas in a semi-set-theoretic, single-agent model. Ely (1998) is closest to the current paper in the literature: he proposes, using an example, to model the player's information structure under unawareness as a subset of a partition of the objective state space. In independently conceived work, Heifetz, Meier and Schipper (2006) propose a set-theoretic model of unawareness exploiting a lattice structure on the state space.

[3] See the interactively rational product model in Li (2006a).
Finally, the special case of partial unawareness has been a main focus of recent papers on games with unawareness (Li 2006c, Copic and Galeotti 2006, Feinberg 2004, Feinberg 2005, Filiz 2006, Halpern and Rego 2006b, Heifetz, Meier and Schipper 2006b, Ozbay 2006). This paper establishes its connections with the general case, and explores its special features epistemically.

The paper is organized as follows. Section 2 presents the general model. Section 3 discusses the interpretational issues. Section 3.1 characterizes unawareness as a measurability constraint leading to players having coarse subjective algebras of events; Section 3.2 shows that the general model is equivalent to the product model in Li (2006a); Section 3.3 discusses the special case of partial unawareness, and characterizes players' subjective interpolated knowledge hierarchies. Proofs are collected in the Appendix.

[4] Modica, Rustichini and Tallon (1997) investigate the case where players have subsets of the objective state space as subjective state spaces, in a general equilibrium framework.

[5] Technically, this approach looks similar to fixing a default resolution for the uncertainties of which the player is unaware in the general model. The crucial difference is that under partial unawareness the player interpolates each subjective state as an objectively deterministic world, while in the general case the subjective state leaves some uncertainties unresolved. For example, in the Florida trip problem, although the subjective state "it rains" Alice has in mind arguably corresponds to the objective state "it rains and there is no hurricane", without the description "and there is no hurricane" it is not a deterministic world from the modeler's perspective. Thus this example is not a case of partial unawareness. In contrast, suppose Alice plays a game with Bob; Bob has three actions b1, b2 and b3, and Alice is unaware of b3. Then, although Alice's subjective state space is a partition of the objective state space from the modeler's perspective, from Alice's perspective the two subjective states are simply one in which Bob plays b1 and another in which Bob plays b2, both of which completely resolve the uncertainty about Bob's action.

2 The General Model

2.1 Primitives. Let S be a state space, with typical elements denoted by lower-case letters s, t. A subjective state space is represented by a partition of S, denoted by lower-case Greek letters such as π, ν. The full state space embedding full awareness is identified with the finest partition of the state space, and is denoted by the special symbol S* to emphasize its connection with S: S* = {{s} : s ∈ S}. The trivial state space embedding complete unawareness is identified with the trivial partition {S}.

There is a natural partial order on the set of partitions of S: given any partitions π and ν, π is weakly finer than ν, denoted π ≽ ν, if every element of ν can be written as a disjoint union of elements of π. Thus ≽ corresponds to the notion of having (weakly) more awareness information. Let ≻ denote the asymmetric part of the order.

Definition 1 Given a state space S, a collection of partitions of S, denoted by F, is a frame for the full state space S* if:

1. S* = ∨_{π ∈ F} π;
2. for all ν ∈ F, S* ≠ ∨_{π ∈ F\{ν}} π.

Intuitively, each partition in a frame represents a question, and each set in the partition an answer to that question. A frame consists of a minimal set of questions one needs to ask in order to differentiate any two states in S. Specifically, condition 1 says that any two states in S must have different answers to at least one question in the frame; condition 2 requires that there be no redundant question: for each question π, there are at least two states that coincide in their answers to all questions other than π.
For example, let S = {a, b, c, d}; then the following set is a frame for S*:

{{{a, b}, {c, d}}, {{a}, {b, c, d}}, {{d}, {a, b, c}}}.

I focus attention on the set of subjective state spaces that can be generated by the frame. Let F̄ = {Φ(F) : F ⊆ F}, where Φ is defined by:

Φ(F) = ∨_{π ∈ F} π if F ≠ ∅, and Φ(F) = {S} if F = ∅.   (2.1)
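The two conditions of Definition 1 can be checked computationally. A minimal sketch (function names are mine), using the four-state example above:

```python
def join(partitions, states):
    """Coarsest common refinement: two states stay in the same cell iff
    every partition in the collection puts them in the same cell."""
    cells = {}
    for s in states:
        key = tuple(next(c for c in p if s in c) for p in partitions)
        cells.setdefault(key, set()).add(s)
    return {frozenset(cell) for cell in cells.values()}

def is_frame(F, states):
    finest = {frozenset({s}) for s in states}
    if join(F, states) != finest:
        return False  # condition 1: the join of all questions must be S*
    # condition 2: no question is redundant
    return all(join([p for p in F if p is not q], states) != finest for q in F)

S = {"a", "b", "c", "d"}
F = [
    [frozenset({"a", "b"}), frozenset({"c", "d"})],
    [frozenset({"a"}), frozenset({"b", "c", "d"})],
    [frozenset({"d"}), frozenset({"a", "b", "c"})],
]
```

Note that `join([], S)` returns the trivial partition {S}, matching the convention Φ(∅) = {S} in (2.1).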

Lemma 1 The function Φ is one-to-one, with inverse given by: for any π ∈ F̄,

Φ⁻¹(π) = {ν ∈ F : π ≽ ν}.

Moreover, Φ⁻¹(π) is a frame for π.

For interpretation, the frame Φ⁻¹(π) represents the awareness information embedded in π. Let the information structure be denoted by a pair (W, P), where:

the awareness function W : S → F̄ associates each state with a subjective state space, representing the awareness information structure;

the full possibility correspondence P : S → 2^S \ {∅} associates each state with a nonempty subset of S, representing the factual information structure.

Let I = {1, …, n} denote the set of players, and (W_i, P_i) denote player i's information. The general model is a tuple (S, F, W, P) where W = (W_1, …, W_n) and P = (P_1, …, P_n).

2.2 Subjective models. For any partition π and any s, t ∈ S, let π(s) denote the partition element of π that contains s. Thus W(t)(s) denotes the partition element containing s in W(t), which is a partition of S. Slightly abusing notation, if ν ≽ π and E ∈ ν, I let π(E) denote the element of π that is a weak superset of E, i.e. E ⊆ π(E) ∈ π.

At s, i's view of the uncertain environment is described by the subjective state space W_i(s). It seems plausible to assume that, constrained by his awareness, the agent only recognizes the implications of the factual signal of which he is aware: at s, i considers the subjective states in {W_i(s)(t) : t ∈ P_i(s)} possible; similarly, from i's perspective, j considers the subjective states in {W_i(s)(t) : t ∈ P_j(s)} possible. Let the subjective possibility correspondence P_j(· | (i, s)) describe j's factual information structure in i's subjective model at s. For simplicity, I write i_s as shorthand for (i, s). For any E ∈ W_i(s), let

P_j(E | i_s) = {W_i(s)(t) : t ∈ P_j(s)} if E ∩ P_j(s) ≠ ∅,
P_j(E | i_s) = ∪_{s' ∈ E} {W_i(s)(t) : t ∈ P_j(s')} otherwise.   (2.2)

The definition assumes that, when reasoning about impossible subjective states, players only exclude those subjective states that can be excluded by all factual signals in the underlying objective states. This is an innocuous assumption. In the single-agent environment, reasoning about impossible subjective states is irrelevant. On the other hand, in the multi-agent model considered in this paper, I impose a condition implying that the projections, onto the subjective state space, of the factual signals in all objective states contained in a subjective state coincide, which makes this assumption void.

Analogous to the subjective possibility correspondence, I let the subjective awareness function W_j(· | i_s) describe j's awareness information structure in i's subjective model at s. For any E ∈ W_i(s),

W_j(E | i_s) = W_i(s) ∧ [∨_{t ∈ E} W_j(t)].   (2.3)

This definition reflects the assumptions that (1) i can only reason about j's awareness of those questions of which he himself is aware (hence the meet); and (2) i could be unaware of the possibility that j is unaware of a question of which he himself is aware, in which case he takes it for granted that j is aware of it (hence the join).

In sum, at s, i's subjective model is the tuple (W_i(s), Φ⁻¹(W_i(s)), W(· | i_s), P(· | i_s)), where W(· | i_s) = (W_1(· | i_s), …, W_n(· | i_s)) and P(· | i_s) = (P_1(· | i_s), …, P_n(· | i_s)).

Similarly, one can recursively construct higher-order subjective models. Since each player can only reason within his own subjective state space, I construct the domain of higher-order subjective models recursively as follows:

Δ_s^1 = {(i_s) : i ∈ I}; for notational convenience, for δ_s^1 = (i_s), let W(δ_s^1) = W_i(s);

Δ_s^{k+1} = {δ_s^k + (j_E) : δ_s^k ∈ Δ_s^k, j ∈ I, E ∈ W(δ_s^k)}, k = 1, 2, …,

where + denotes concatenation. Here W(δ_s^k) denotes the subjective state space associated with the sequence δ_s^k; for example, for δ_s^2 = (i_s, j_E) ∈ Δ_s^2, W(δ_s^2) = W_j(E | i_s), which is j's subjective state space at i's subjective state E, viewed from i's perspective at s. Suppose all subjective models of order k are defined. Fix s and consider δ_s^{k+1} ∈ Δ_s^{k+1}. Suppose δ_s^{k+1} = δ_s^k + ((i_{k+1})_E); then the relevant subjective state space is W_{i_{k+1}}(E | δ_s^k) = W(δ_s^{k+1}), and the (k+1)-th order subjective model is the tuple

(W(δ_s^{k+1}), Φ⁻¹(W_{i_{k+1}}(E | δ_s^k)), W(· | δ_s^{k+1}), P(· | δ_s^{k+1}))

where W(· | δ_s^{k+1}) = (W_1(· | δ_s^{k+1}), …, W_n(· | δ_s^{k+1})) and P(· | δ_s^{k+1}) = (P_1(· | δ_s^{k+1}), …, P_n(· | δ_s^{k+1})) are defined as follows: for j = 1, …, n and any F ∈ W(δ_s^{k+1}),

W_j(F | δ_s^{k+1}) = W(δ_s^{k+1}) ∧ [∨_{G ∩ F ≠ ∅, G ∈ W(δ_s^k)} W_j(G | δ_s^k)],   (2.4)

P_j(F | δ_s^{k+1}) = {W(δ_s^{k+1})(G) : G ∈ P_j(E | δ_s^k)} if F ∩ [∪P_j(E | δ_s^k)] ≠ ∅,
P_j(F | δ_s^{k+1}) = ∪_{G ∩ F ≠ ∅, G ∈ W(δ_s^k)} {W(δ_s^{k+1})(G') : G' ∈ P_j(G | δ_s^k)} otherwise,   (2.5)

where [∪P_j(E | δ_s^k)] is the union of all sets in P_j(E | δ_s^k).
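One reading of the subjective possibility correspondence (2.2) can be sketched as follows (helper names are mine; the four-state setup with W_1(a) = {{a}, {b}, {c, d}} and P_1 induced by the partition {{a, d}, {b, c}} is an illustrative assumption):

```python
def subjective_P(E, W_is, P_j, s):
    """P_j(E | i_s) as in (2.2): project j's factual signal onto i's
    subjective state space W_i(s)."""
    def cells_hit(signal):
        return frozenset(cell for cell in W_is if cell & signal)
    if E & P_j[s]:  # E is a possible subjective state given the signal at s
        return cells_hit(P_j[s])
    # impossible E: exclude only what every underlying signal excludes
    return frozenset().union(*(cells_hit(P_j[sp]) for sp in E))

W_1a = {frozenset({"a"}), frozenset({"b"}), frozenset({"c", "d"})}
P_1 = {"a": {"a", "d"}, "d": {"a", "d"}, "b": {"b", "c"}, "c": {"b", "c"}}

# At a, the subjective state {b} does not meet P_1(a) = {a, d}:
# P_1({b} | 1_a) = {{b}, {c, d}}, while P_1({c, d} | 1_a) = {{a}, {c, d}}.
impossible = subjective_P({"b"}, W_1a, P_1, "a")
possible = subjective_P({"c", "d"}, W_1a, P_1, "a")
```

These two values exhibit a non-partitional subjective structure, the phenomenon discussed in the multi-agent analysis.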

2.3 Characterization of knowledge and unawareness. Let ℰ = 2^S, with typical elements denoted by upper-case English letters E, F. This is the objective algebra of events in S. Given a subjective state space π, consider the events in ℰ that can be written as disjoint unions of partition elements of π, denoted by A(π):

A(π) = {E ∈ ℰ : E ∩ F = ∅ or E ∩ F = F for all F ∈ π}.

Intuitively, A(π) is the set of events that can be expressed in π. Let f_π : A(π) → 2^π yield the subjective version of events in A(π): for any E ∈ A(π),

f_π(E) = {F ∈ π : F ⊆ E}.   (2.6)

The map f_π is one-to-one and onto, and f_π(A(π)) = 2^π is the set of events in the subjective state space π. In this sense, I refer to A(W_i(s)) as i's subjective algebra of events at s. Player i is unaware of an event if and only if it is not a subjective event in his subjective state space: for any E ∈ ℰ,

U_i(E) = {s ∈ S : E ∉ A(W_i(s))}.   (2.7)

Analogous to the standard model, I say i knows E if and only if E is true in all subjective states i considers possible: for any E ∈ ℰ,

K_i(E) = {s ∈ S : E ∈ A(W_i(s)), P_i(W_i(s)(s) | i_s) ⊆ f_{W_i(s)}(E)}.   (2.8)

It is worth pointing out that this is equivalent to:

K_i(E) = {s ∈ S : E ∈ A(W_i(s)), P_i(s) ⊆ E}.   (2.9)

Higher-order knowledge is subtler. Does i know that j knows E at s? This requires a characterization of the subjective event "j knows E" in i's own subjective model, which is a general model itself, leading to a recursive formula. To fix ideas, consider i's second-order knowledge:

K_iK_j(E) = {s ∈ S : E ∈ A(W_i(s)), P_i(W_i(s)(s) | i_s) ⊆ K_j(E | i_s)},   (2.10)

where K_j(· | i_s) is i's subjective knowledge operator at s, defined as in (2.9) for i's subjective model (W_i(s), Φ⁻¹(W_i(s)), W_j(· | i_s), P_j(· | i_s)), i.e. for all E ∈ A(W_i(s)),

K_j(E | i_s) = {G ∈ W_i(s) : E ∈ A(W_j(G | i_s)), P_j(G | i_s) ⊆ f_{W_i(s)}(E)}.

Similarly, applying (2.7) to i's subjective model enables one to discuss the event "i knows j is unaware of E"; applying (2.10) to second-order subjective models for reasoning sequences ((i, s), ·) ∈ Δ_s^2 enables one to construct third-order knowledge K_iK_jK_l(E), i.e. "i knows j knows l knows E", and so on. The need to track subjective models quickly leads to prohibitively cumbersome higher-order knowledge. Fortunately, when information structures are sufficiently nice, the burden can be substantially reduced. Consider the following conditions on (W, P):

1. Factual partition: P induces an information partition of S;

2. Rational awareness: for any s, s' ∈ S, s ∈ P(s') implies W(s) = W(s');

3. Nice factual partition: for all F ⊆ F and all s, t ∈ S, [∩_{π ∈ F} π(s)] ∩ P(t) ≠ ∅ implies [∩_{π ∈ F} π(t)] ∩ P(s) ≠ ∅;

4. Nice awareness: for any π ∈ F, suppose t, t' are such that t ∈ π(t') and t ∉ ν(t') for all ν ∈ F with ν ≠ π. Then s ∈ π(s') implies: t ∈ W(s)(t') if and only if t ∈ W(s')(t').

Definition 2 The pair (W, P) is rational if it satisfies factual partition and rational awareness; it is strongly rational if, in addition, it also satisfies nice factual partition and nice awareness.

Fix i ∈ I and suppose (W_i, P_i) is rational. Since P_i(· | i_s) is obtained by projecting P_i onto the subjective state space W_i(s), if P_i induces an information partition of S, then P_i(· | i_s) induces a local information partition of W_i(s):

E ∈ P_i(W_i(s)(s) | i_s) implies P_i(E | i_s) = P_i(W_i(s)(s) | i_s).

Rational awareness requires that whenever the agent receives the same factual signal, he also receives the same awareness signal. It follows that in i's subjective model, the subjective awareness information is the same in all subjective states he considers possible, and hence his second-order subjective models in these subjective states are identical to the first-order subjective model:

E ∈ P_i(W_i(s)(s) | i_s) implies W_i(E | i_s) = W_i(s), for all s ∈ S.

Thus all the subjective models relevant for i's own knowledge hierarchy essentially reduce to a collection of standard models with partitional information structures: {(W_i(s), P_i(· | i_s))}_{s ∈ S}.
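The operators (2.7) and (2.9) are straightforward to implement. A minimal sketch on the Florida example (the constant awareness function and perfectly informative signals are assumptions of mine for the illustration):

```python
from itertools import chain, combinations

def algebra(pi):
    """A(pi): all unions of cells of the partition pi (including the empty union)."""
    cells = list(pi)
    subsets = chain.from_iterable(combinations(cells, r) for r in range(len(cells) + 1))
    return {frozenset().union(*combo) for combo in subsets}

S = {"s", "r", "h"}
W = {st: {frozenset({"s"}), frozenset({"r", "h"})} for st in S}  # constant awareness
P = {st: {st} for st in S}                                       # perfect signals

def U(E):  # (2.7): unaware wherever E is not in the subjective algebra
    return {st for st in S if frozenset(E) not in algebra(W[st])}

def K(E):  # (2.9): aware of E, and the factual signal implies E
    return {st for st in S if frozenset(E) in algebra(W[st]) and P[st] <= set(E)}
```

Here K({"r", "h"}) = {"r", "h"} (at r and h, Alice knows "not sunny"), while U({"r"}) = S: despite her perfect signal, there is no state at which she is aware of the event {r}.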
Consequently, players' own knowledge hierarchies satisfy the same nice properties as those in the standard information-partition model.

Proposition 2 Fix a general model (S, F, W, P). If (W_i, P_i) is rational, then the following formulae completely characterize i's knowledge hierarchy: for all E ∈ ℰ,

1. U_i(E) = ¬K_i(E) ∩ ¬K_i¬K_i(E);
2. K_i(E) = K_iK_i(E);
3. ¬K_i(E) ∩ ¬U_i(E) = K_i¬K_i(E),

where ¬ denotes complementation in S.
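The three properties of Proposition 2 can be verified exhaustively on the Florida example. The sketch below is mine, under two interpretive assumptions: ¬ is read as complement in S, and an iterated operator such as K_i¬K_i(E) retains the requirement that i be aware of the underlying event E, as in the recursive construction of Section 2.2.

```python
from itertools import chain, combinations

# Florida example: constant awareness {{s}, {r, h}}, perfect factual signals.
S = frozenset({"s", "r", "h"})
CELLS = [frozenset({"s"}), frozenset({"r", "h"})]
ALGEBRA = {frozenset().union(*combo)
           for combo in chain.from_iterable(combinations(CELLS, r) for r in range(3))}
P = {st: {st} for st in S}

def K(E):
    """K(E) as in (2.9), with constant awareness."""
    return frozenset(st for st in S if frozenset(E) in ALGEBRA and P[st] <= set(E))

def K_about(E, target):
    """Knowledge of a statement about E (e.g. K(not-K(E))): the agent must
    be aware of E, and the signal must imply the target event."""
    return frozenset(st for st in S if frozenset(E) in ALGEBRA and P[st] <= target)

def U(E):
    return frozenset(st for st in S if frozenset(E) not in ALGEBRA)

c = lambda X: S - X  # complement in S, read for "not"

# Check all eight events E subset of S.
for E in map(frozenset, chain.from_iterable(combinations(sorted(S), r) for r in range(4))):
    assert U(E) == c(K(E)) & c(K_about(E, c(K(E))))   # property 1
    assert K(E) == K_about(E, K(E))                    # property 2
    assert c(K(E)) & c(U(E)) == K_about(E, c(K(E)))    # property 3
```

Without the awareness requirement inside the iterated operator, property 1 would fail for events like {r}: the objective event ¬K({r}) is all of S, which is trivially known everywhere.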

Rational information structures do not guarantee nice higher-order interactive knowledge. That $(W_i,P_i)$ is rational for all $i\in I$ does not imply that the subjective information structures are rational. In particular, subjective factual information need not induce an information partition over the entire subjective state space. Consequently, players' nice own knowledge hierarchies may not be fully appreciated in interactive knowledge.

For example, let $S=\{a,b,c,d\}$ and $W_1(a)=W_2(a)=\{\{a\},\{b\},\{c,d\}\}$. Suppose $P_1$ induces the information partition $\{\{a,d\},\{b,c\}\}$ and $P_2$ induces the information partition $\{\{a,b\},\{c,d\}\}$. Constrained by his unawareness, 1's subjective factual information has a non-partitional structure in the subjective state space: for $i=1,2$,
$$P_1(\{b\}\mid^a_i)=\{\{b\},\{c,d\}\},\quad\text{but}\quad P_1(\{c,d\}\mid^a_i)=\{\{a\},\{c,d\}\}.$$
It follows that at $a$, 2 considers it possible that 1 knows the event $\{b,c,d\}$ but does not know that he knows it. In fact, at $a$, although 2 is uncertain about whether 1 knows $\{b,c,d\}$, 2 does know that, in any case, 1 does not know that he knows it.

Nice factual partition rules out the above situation by requiring factual signals to be decomposable into independent signals about each question in the frame. Let $s'\in\bigcap_{\pi\in F'}\pi(s)\cap P(t)$ and $t'\in\bigcap_{\pi\in F'}\pi(t)\cap P(s)$. The condition says that if two states ($s$ and $s'$) coincide in their answers to the questions in the set $F'$, then the factual signals in these two states must coincide in their implications regarding the answers to these questions: if $P(s')$ cannot rule out $t$, then $P(s)$ also cannot rule out some $t'$ which coincides with $t$ in its answers to the questions in $F'$. The following result verifies that this condition ensures that the nice own-knowledge hierarchy is reflected in all interactive knowledge.

Proposition 3 Fix $(S,F,\mathbf{W},\mathbf{P})$. Let $(W_j,P_j)$ be rational and satisfy nice factual partition for $j\in I$. Then for all $E\in\mathcal{E}$ and any $i_1,\dots,i_k\in I$,

1. $K_{i_1}\cdots K_{i_k}K_j(E)=K_{i_1}\cdots K_{i_k}K_jK_j(E)$;

2. $K_{i_1}\cdots K_{i_k}\neg K_j(E)\cap K_{i_1}\cdots K_{i_k}\neg U_j(E)=K_{i_1}\cdots K_{i_k}K_j\neg K_j(E)$.

Under unawareness, interactive knowledge need not be correct. For example, suppose $i$ and $j$ receive public factual signals but private awareness signals, and $i$ is unaware that $j$ may have different awareness than himself. Then whatever $i$ knows, he would know that $j$ knows it too. In fact, from $i$'s perspective, all knowledge he has is common knowledge between himself and $j$.

The nice awareness condition rules out such false interactive knowledge by requiring that one's awareness of a particular uncertainty depend only on how this uncertainty is resolved. More specifically, the condition says that if the agent is unaware of a question $\pi$ at $s$, and $s'$ coincides with $s$ in its answer to $\pi$, then he must also be unaware of $\pi$ at $s'$. In a multi-agent environment, this implies that a player is never unaware of the possibility that his opponents could be unaware of an event of which he himself is aware, which, combined with rational information, in turn implies that all interactive knowledge is true.
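The four-state example can be checked mechanically. In the sketch below the projection rule is an assumption of mine (one natural way to project factual signals onto a subjective state space: keep every subjective state compatible with the objective signal); it reproduces the non-partitional subjective signals.

```python
# Player 1's subjective factual information at a, projected onto the
# subjective state space W1(a) = {{a},{b},{c,d}}.
W1_at_a = [frozenset({'a'}), frozenset({'b'}), frozenset({'c', 'd'})]
P1 = {'a': {'a', 'd'}, 'd': {'a', 'd'}, 'b': {'b', 'c'}, 'c': {'b', 'c'}}

def subj_signal(s):
    """Subjective states compatible with 1's factual signal at s
    (assumed projection rule: keep every cell meeting P1(s))."""
    return {cell for cell in W1_at_a if cell & P1[s]}

assert subj_signal('b') == {frozenset({'b'}), frozenset({'c', 'd'})}
assert subj_signal('d') == {frozenset({'a'}), frozenset({'c', 'd'})}
# Non-partitional: {c,d} lies in the signal at b, yet the signal at d
# (a state confounded inside the subjective state {c,d}) is different.
assert frozenset({'c', 'd'}) in subj_signal('b')
assert subj_signal('d') != subj_signal('b')
```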

Proposition 4 Fix $(S,F,\mathbf{W},\mathbf{P})$. Suppose $(W_j,P_j)$ satisfies nice awareness and $P_i$ satisfies factual partition for all $i\in I$.\footnote{To be precise, the factual partition condition is not necessary; the result only requires $P_i$ to satisfy a non-delusion condition, namely $s\in P_i(s)$ for all $s\in S$.} Then for any $i_1,\dots,i_k\in I$ and any $E\in\mathcal{E}$,

1. $K_{i_1}\cdots K_{i_k}K_j(E)\subseteq K_j(E)$;

2. $K_{i_1}\cdots K_{i_k}U_j(E)\subseteq U_j(E)$.

In combination, Propositions 3 and 4 say that if players have strongly rational information structures, then the only extra implication of unawareness in a multi-agent environment is interactive knowledge of such ignorance. In particular, this gives rise to a clean characterization of common knowledge. An event $E$ is common knowledge if everyone knows $E$, everyone knows everyone knows $E$, and so on. Formally, for any $i\in N$, let $\mathcal{I}^m_i=\{(i,i_2,\dots,i_m): i_2,\dots,i_m\in N\}$ with typical element $I^m_i$. I write $K(E\mid I^m_i)$ as shorthand for the interactive knowledge $K_iK_{i_2}\cdots K_{i_m}(E)$. The event that $E$ is common knowledge, denoted by $CK(E)$, is simply:
$$CK(E)=\bigcap_{i=1}^{n}\bigcap_{m=1}^{\infty}\bigcap_{I^m_i\in\mathcal{I}^m_i}K(E\mid I^m_i).$$
Let $\mathbf{P}=\wedge_{j=1}^{n}P_j$ denote the meet of all partitions induced by $P_j$, $j=1,\dots,n$, and let $\mathbf{P}(s)$ denote the element of the meet containing $s$.

Proposition 5 Fix the general model $(S,F,\mathbf{W},\mathbf{P})$. If $(W_i,P_i)$ is strongly rational for all $i$, then for all $E\in\mathcal{E}$,
$$CK(E)=\Big\{s\in S:\ E\in\bigcap_{t\in\mathbf{P}(s)}\bigcap_{i=1}^{n}\mathcal{A}(W_i(t)),\ \mathbf{P}(s)\subseteq E\Big\}. \tag{2.11}$$
Li (2006a) proves a version of this formula, which I invoke here in light of the equivalence result in Section 3.2. For details of the proof, see Theorem 9 in Li (2006a).

3 Interpretation of Unawareness and Knowledge

3.1 Unawareness as the measurability constraint.

Consider the single-agent knowledge hierarchy where the agent has a rational information structure. Fix a general model $(S,F,W,P)$ where $(W,P)$ is rational. Let $\hat{K}^n$, $n=1,2,\dots$, denote the standard knowledge operators associated with the pair $(S,P)$, i.e., for all $E\subseteq S$,
$$\hat{K}(E)=\{s\in S: P(s)\subseteq E\},\qquad \hat{K}^n(E)=\hat{K}(\hat{K}^{n-1}(E)).$$
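For concreteness, the standard operator $\hat{K}$ built from a partition can be sketched in a few lines (ad-hoc names, my own toy partition):

```python
S = [1, 2, 3, 4]
P = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4}}   # an information partition

def K_hat(E):
    """Standard knowledge: K̂(E) = {s : P(s) ⊆ E}."""
    return {s for s in S if P[s] <= E}

E = {1, 2, 3}
assert K_hat(E) == {1, 2, 3}          # known exactly where the cell fits inside E
assert K_hat(K_hat(E)) == K_hat(E)    # with a partition, K̂K̂ = K̂
```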

By Proposition 2,
$$K^n(E)=\hat{K}^n(E)\cap\neg U(E). \tag{3.1}$$
To interpret formula (3.1), first notice that since the factual partition and rational awareness conditions are formulated without reference to the frame, the single-agent model is essentially just the triple $(S,W,P)$.\footnote{The restriction on the range of the awareness function is not binding: given an arbitrary set of partitions $G$, one can always find a frame $F$ such that $G\subseteq\bar{F}$.} Second, observe that if $W(s)=\bar{S}$, the finest partition of $S$, for all $s\in S$, then all subjective models are identical to the full model $(S,W,P)$, and all knowledge operators reduce to the standard ones: $K^n(E)=\hat{K}^n(E)$. Therefore, $\hat{K}$ can be interpreted as the agent's implicit knowledge: knowledge that could have been deduced from the factual signals had he been fully aware at every state. In this sense, formula (3.1) says that unawareness can be viewed as a measurability constraint: at each $s\in S$, only that knowledge in $\hat{K}^n$ concerning events in $\mathcal{A}(W(s))$, the events measurable with respect to the agent's subjective state space, becomes explicit in the agent's mind-set.\footnote{Fagin and Halpern (1988) develop a logic for awareness, implicit knowledge and explicit knowledge and obtain similar results.}

This interpretation extends to multi-agent environments with strongly rational information structures.

Proposition 6 In the general model $(S,F,\mathbf{W},\mathbf{P})$, suppose $(W_i,P_i)$ is strongly rational for all $i\in I$. Then for all $i,j\in I$ and $E\in\mathcal{E}$,
$$K_iK_j(E)=\hat{K}_i\hat{K}_j(E)\cap\neg K_iU_j(E). \tag{3.2}$$
Finally, let $\hat{CK}(E)=\{s\in S:\mathbf{P}(s)\subseteq E\}$ denote the standard common knowledge operator associated with $(S,\mathbf{P})$. Notice that if $W_i(s)=\bar{S}$ for all $i\in I$ and all $s\in S$, then $CK(E)$ as characterized in (2.11) reduces to $\hat{CK}(E)$. Thus one can interpret $\hat{CK}(E)$ as the implicit common knowledge of $E$: common knowledge that players would have been able to entertain had all been fully aware in all states. For any $m>1$ and $i\in I$, let $I^m_i\in\mathcal{I}^m_i$, and let $KA(E\mid I^m_i)$ be shorthand for $K_iK_{i_2}\cdots K_{i_{m-1}}\neg U_{i_m}(E)$. Then the event that there is common knowledge of awareness of $E$, i.e., the event that everyone is aware of $E$, everyone knows everyone is aware of $E$, and so on, denoted by $CA(E)$, is simply
$$CA(E)=\bigcap_{i=1}^{n}\Big[\bigcap_{m=2}^{\infty}\bigcap_{I^m_i\in\mathcal{I}^m_i}KA(E\mid I^m_i)\Big]\cap\neg U_i(E).$$
It is straightforward to check that this event has a simple characterization:
$$CA(E)=\Big\{s\in S:E\in\bigcap_{t\in\mathbf{P}(s)}\bigcap_{i=1}^{n}\mathcal{A}(W_i(t))\Big\}.$$
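The meet $\mathbf{P}$ and the operator $\hat{CK}$ can be computed directly. The sketch below is my own encoding: it builds a meet cell by closing it under every player's partition, then applies the standard common-knowledge test from Proposition 5 (the awareness clause of (2.11) is omitted here).

```python
S = [1, 2, 3, 4]
P1 = {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}}
P2 = {1: {1}, 2: {2}, 3: {3, 4}, 4: {3, 4}}

def meet_cell(s, parts):
    """Cell of the meet (finest common coarsening) containing s:
    close the cell under every player's partition."""
    cell = {s}
    while True:
        grown = set(cell)
        for P in parts:
            for t in cell:
                grown |= P[t]
        if grown == cell:
            return frozenset(cell)
        cell = grown

def CK_hat(E, parts):        # standard common knowledge: meet-cell inside E
    return {s for s in S if meet_cell(s, parts) <= E}

assert meet_cell(1, [P1, P2]) == frozenset({1, 2})
assert CK_hat({1, 2}, [P1, P2]) == {1, 2}
assert CK_hat({1, 2, 3}, [P1, P2]) == {1, 2}   # at 3 the meet cell {3,4} sticks out of E
```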

Thus the formula for common knowledge given in Proposition 5 can be written as: for any $E\in\mathcal{E}$,
$$CK(E)=\hat{CK}(E)\cap CA(E).$$
In words, under strongly rational information structures, implicit common knowledge becomes explicit if and only if it is common knowledge that the event is measurable in every player's subjective state space.

3.2 Equivalence with the product model.

Li (2006a) constructs a model of unawareness by making use of a product structure on the state space. Specifically, the product model is a tuple $(\Omega^*,\mathbf{W}^*,\mathbf{P}^*)$ where:

- $\Omega^*=\prod_{q\in Q}D_q$, where $Q$ is an arbitrary index set of questions, representing all relevant uncertainties, and $D_q$ is the set of answers to question $q$, representing all possible resolutions of this particular uncertainty;

- the awareness function $W_i^*:\Omega^*\to 2^Q\setminus\{\emptyset\}$ associates each state with the set of questions of which $i$ is aware;

- the possibility correspondence $P_i^*:\Omega^*\to 2^{\Omega^*}\setminus\{\emptyset\}$ associates each state with the set of states in which $i$ has the same factual signal.

Consider the following conditions on $(W^*,P^*)$:

1. Factual partition: $P^*$ induces an information partition over $\Omega^*$;

2. Rational awareness: $\omega_1^*\in P^*(\omega^*)$ implies $W^*(\omega_1^*)=W^*(\omega^*)$;

3. Cylinder factual partition: there exists a collection of partitions $\{\pi_q\}_{q\in Q}$, where $\pi_q$ is a partition over $D_q$, such that $P^*(\omega^*)=\prod_{q\in Q}\pi_q(\omega^{*q})$, where $\omega^{*q}$ is the $q$-th coordinate of $\omega^*$;

4. Nice awareness: for any $\omega_1^*,\omega_2^*\in\Omega^*$,
$$[W^*(\omega_1^*)\,\triangle\,W^*(\omega_2^*)]\cap\{q\in Q:\omega_1^{*q}=\omega_2^{*q}\}=\emptyset,$$
where $\triangle$ denotes symmetric difference.

Definition 3 The pair $(W^*,P^*)$ is rational if it satisfies factual partition and rational awareness; it is strongly rational if, in addition, it also satisfies cylinder factual partition and nice awareness.

The set of events in the product model is:
$$\mathcal{E}^p=\bigcup_{\emptyset\neq Q'\subseteq Q}\Big[\big\{E\subseteq\prod_{q\in Q'}D_q:E\neq\emptyset\big\}\cup\{\emptyset_{Q'}\}\Big], \tag{3.3}$$
where $\emptyset_{Q'}$ is the empty set confined within the space $\prod_{q\in Q'}D_q$. For any $E\in\mathcal{E}^p$, let $D_E$ denote the unique subset of $Q$ such that $E\subseteq\prod_{q\in D_E}D_q$.

The knowledge and unawareness operators are defined as follows: for any $E\in\mathcal{E}^p$,
$$U_i^p(E)=\{\omega^*\in\Omega^*:D_E\not\subseteq W_i^*(\omega^*)\}; \tag{3.4}$$
$$K_i^p(E)=\Big\{\omega^*\in\Omega^*:W_i^*(\omega^*)\supseteq D_E,\ P_i^*(\omega^*)\subseteq E\times\prod_{q\in Q\setminus D_E}D_q\Big\}. \tag{3.5}$$
Given the recursive structure of the general model, it suffices to show that any general model with rational (strongly rational) information structures can be translated into a product model with rational (strongly rational) information structures, and vice versa, and that the two models give rise to essentially identical single-agent knowledge hierarchies.

For any event $E\in\mathcal{E}^p$, let the map $\Psi:\mathcal{E}^p\to 2^{\mathcal{E}^p}$ yield the set of less detailed descriptions of $E$, i.e.,
$$\Psi(E)=\Big\{F\in\mathcal{E}^p:F\times\prod_{q\in D_E\setminus D_F}D_q=E\Big\}.$$
Theorem 7 The general model and the product model are equivalent. More specifically,

1. Fix a product model $(\Omega^*,\mathbf{W}^*,\mathbf{P}^*)$ where $(W_i^*,P_i^*)$ is rational (strongly rational) for all $i$. Then there exists a general model $(\Omega^*,F,\mathbf{W},\mathbf{P})$ where $(W_i,P_i)$ is rational (strongly rational) for all $i$, and for all $E\in 2^{\Omega^*}\setminus\{\emptyset\}$,
$$U_i(E)=\bigcap_{F\in\Psi(E)}U_i^p(F), \tag{3.6}$$
$$K_i(E)=\bigcup_{F\in\Psi(E)}K_i^p(F). \tag{3.7}$$
2. Fix a general model $(S,F,\mathbf{W},\mathbf{P})$ where $(W_i,P_i)$ is rational (strongly rational) for all $i$. Then there exists a product model $(\Omega^*,\mathbf{W}^*,\mathbf{P}^*)$ and an injection $\Gamma:S\to\Omega^*$ such that $(W_i^*,P_i^*)$ is rational (strongly rational) on $\Gamma(S)$ for all $i\in I$; and for any $E\in\mathcal{E}^p$,
$$U_i^p(E)\cap\Gamma(S)=\Gamma\Big(U_i\Big(\Gamma^{-1}\Big(\big(E\times\prod_{q\in Q\setminus D_E}D_q\big)\cap\Gamma(S)\Big)\Big)\Big), \tag{3.8}$$
$$K_i^p(E)\cap\Gamma(S)=\Gamma\Big(K_i\Big(\Gamma^{-1}\Big(\big(E\times\prod_{q\in Q\setminus D_E}D_q\big)\cap\Gamma(S)\Big)\Big)\Big)\setminus U_i^p(E). \tag{3.9}$$

This theorem says one can always paraphrase the product model into the general model, and vice versa. Intuitively, the frame $F$ in the general model corresponds to the sets of answers $\{D_q\}_{q\in Q}$ in the product model. In the general model, one fixes the full state space and represents answers as events; in the product model, one starts with questions and answers and constructs the full state space from them. They are two sides of the same coin, and depending on the specific application, either model could be more convenient. The frame is needed only to formulate the nice factual partition and nice awareness conditions. In the single-agent case, it is without loss of generality to capture unawareness as a limitation on the dimensions of subjective state spaces; in a multi-agent environment, by contrast, unawareness can in general entail distortions in interactive reasoning when the signals players receive depend on how uncertainties are jointly resolved.

Events are modeled differently in the two models. In the product model, an event is defined by both its factual content and its description, which reflects the awareness content; in the general model, an event is best understood as a factual description in its coarsest form, i.e., a minimal collection of factual statements that leaves out no nontrivial facts. Consequently, knowledge and unawareness are interpreted in slightly different ways in the two models: in the product model, knowing $E$ means knowing $E$ in the exact form in which it is described, while in the general model, knowing $E$ means knowing the factual content of $E$ in some form of description. Similarly, in the product model, being unaware of $E$ means being unaware of $E$ as it is described, while in the general model it means being unaware of all possible descriptions of this event. Equations (3.6)-(3.7) describe the connection formally. Evidently, either approach to modeling events can be adopted in either model.
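To make the translation concrete, the sketch below (my own encodings; the two-question space, $W$ and $P$ are illustrative assumptions) implements $U_i^p$, $K_i^p$ and the map $\Psi$, and checks identities in the style of (3.6)-(3.7) for a redundantly described event.

```python
from itertools import combinations, product

OMEGA = list(product((0, 1), repeat=2))   # answers to questions q0, q1

def W(w):   # aware of q0 everywhere; aware of q1 only when its answer is 1
    return {0} | ({1} if w[1] == 1 else set())

P = {w: {w} for w in OMEGA}               # fully revealing factual signals

def cyl(qs, content):                     # full states consistent with the event
    return {w for w in OMEGA if tuple(w[q] for q in qs) in content}

def Up(qs, content):                      # (3.4): unaware of a question in D_E
    return {w for w in OMEGA if not set(qs) <= W(w)}

def Kp(qs, content):                      # (3.5): aware of D_E, signal implies E
    return {w for w in OMEGA if set(qs) <= W(w) and P[w] <= cyl(qs, content)}

def Psi(qs, content):                     # less detailed descriptions of the event
    out = []
    for r in range(1, len(qs) + 1):
        for idx in combinations(range(len(qs)), r):
            F = (tuple(qs[i] for i in idx),
                 frozenset(tuple(a[i] for i in idx) for a in content))
            if cyl(*F) == cyl(qs, content):
                out.append(F)
    return out

# The factual content "q0 = 1", described redundantly with q1 mentioned:
E = ((0, 1), frozenset({(1, 0), (1, 1)}))
assert len(Psi(*E)) == 2                  # E itself and its coarsening over {q0}
assert Kp(*E) == {(1, 1)}                 # unknown at (1,0): unaware of q1 there
K_general = set().union(*(Kp(*F) for F in Psi(*E)))        # cf. (3.7)
U_general = set.intersection(*(Up(*F) for F in Psi(*E)))   # cf. (3.6)
assert K_general == {(1, 0), (1, 1)}      # content known in some description
assert U_general == set()                 # aware of some description everywhere
```

The contrast between `Kp(*E)` and `K_general` is exactly the interpretive difference just described: product-model knowledge is tied to the description, general-model knowledge only to the factual content.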
3.3 Partial unawareness and the interpolation model.

A particularly interesting case of unawareness is partial unawareness, where a player is aware of an uncertainty itself but unaware of some of its possible resolutions. For example, suppose Alice and Bob play a game. Alice can take actions $a1$, $a2$ or $A$. Bob is aware that Alice is in the game, but he is only aware of her actions $a1$ and $a2$. Suppose Bob observes whether Alice takes $a2$. One can model this situation as follows. Let the state space be $S=\{a1,a2,A\}$, and the frame $F=\{\{\{a2\},\{a1,A\}\},\{\{A\},\{a1,a2\}\}\}$. In words, the uncertainty "what action does Alice take?" is rephrased as two uncertainties, "whether Alice takes action $a2$" and "whether Alice takes action $A$," of which Bob is unaware of the latter. Thus, Bob's subjective state space can be represented by the partition $\{\{a2\},\{a1,A\}\}$.

This approach explicitly models how the player confounds states of which he is unaware with states of which he is aware. Hence, the knowledge hierarchy in this model reflects the true knowledge the player has from the perspective of the modeler, the fully aware outside observer. For example, suppose Bob is fully aware when Alice plays $a2$ and unaware of $A$ otherwise, and consider the following knowledge hierarchy.

$W(a1)=W(A)=\{\{a1,A\},\{a2\}\}$, $W(a2)=\{\{a1\},\{a2\},\{A\}\}$; $P$ induces the information partition $\{\{a1,A\},\{a2\}\}$.

$K(\{a1,A\})=K(\neg\{a2\})=\{a1,A\}$, $K(\{a1\})=\emptyset$, $K(\{a1,A,a2\})=S$; $U(\{a1,A,a2\})=\emptyset$, $U(\{a1\})=\{a1,A\}$.

At $A$, Bob knows $\{a1,A\}$, while he does not know $\{a1\}$; in fact, he is unaware of $\{a1\}$. Here the set $\{a1,A\}$ represents the subjective event $\{\{a1,A\}\}=\neg\{\{a2\}\}$, interpreted as "Alice does not play a2," which, from Bob's perspective, is equivalent to "Alice plays a1"; while the set $\{a1\}$ represents the event $\{\{a1\}\}$, interpreted as both "Alice plays a1" and "Alice plays neither a2 nor A." Therefore, one can interpret this knowledge hierarchy as follows: at $A$, Bob is unaware of the objective content of the event "Alice plays a1"; he does have in mind a subjective understanding of this event; in fact, he knows "Alice does not play a2," which he subjectively interpolates as "Alice plays a1."

Similarly, Bob knows the event $\{a1,A,a2\}$ in both $A$ and $a2$, but combined with Bob's awareness at these two states, the knowledge should be interpreted differently. At $A$, Bob knows essentially the subjective event $\{\{a1,A\},\{a2\}\}$, interpreted as "Alice may play a2, and she may not," which, from Bob's perspective, is equivalent to "Alice plays a2 or a1"; while at $a2$, Bob knows $\{\{a1\},\{A\},\{a2\}\}$ in the full state space, interpreted as "Alice plays a1, A, or a2." In a sense, Bob subjectively knows different events in the two states, even though from the modeler's perspective, the factual content of Bob's knowledge is the same: he cannot rule out anything.

On the other hand, under partial unawareness, each subjective state contains one and only one state of which the player is fully aware, and the player essentially interpolates each subjective state as this corresponding full state, and each subjective event as the corresponding objective event. For example, from Bob's perspective, the (subjective) state space consists of two states, one in which Alice plays $a1$ and another in which Alice plays $a2$.
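This hierarchy can be reproduced mechanically from formula (3.1) of Section 3.1, $K(E)=\hat{K}(E)\cap\neg U(E)$, which applies here since $(W,P)$ is rational; the encoding below is mine.

```python
S = frozenset({'a1', 'a2', 'A'})
COARSE = [frozenset({'a1', 'A'}), frozenset({'a2'})]           # Bob unaware of A
FULL = [frozenset({'a1'}), frozenset({'a2'}), frozenset({'A'})]

W = {'a1': COARSE, 'A': COARSE, 'a2': FULL}                    # subjective spaces
P = {'a1': {'a1', 'A'}, 'A': {'a1', 'A'}, 'a2': {'a2'}}        # factual partition

def measurable(E, cells):    # is E a union of cells, i.e. E in A(W(s))?
    return E == set().union(*(c for c in cells if c <= E)) if E else True

def U(E):
    return {s for s in S if not measurable(set(E), W[s])}

def K(E):                    # (3.1): explicit = implicit and measurable
    implicit = {s for s in S if P[s] <= set(E)}
    return implicit - U(E)

assert K({'a1', 'A'}) == {'a1', 'A'}     # Bob knows "Alice does not play a2"
assert K({'a1'}) == set()
assert K({'a1', 'a2', 'A'}) == set(S)
assert U({'a1'}) == {'a1', 'A'}          # unaware of "Alice plays a1" at a1 and A
assert U({'a1', 'a2', 'A'}) == set()
```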
From the modeler's perspective, Bob essentially interpolates the subjective state $\{a1,A\}$ simply as $a1$. In the above example, at $A$, from Bob's perspective, he is certainly aware of the event "Alice plays a1"; in fact, he actually knows it: he observes that Alice does not play $a2$, which, given his partial unawareness of the uncertain environment, happens if and only if Alice plays $a1$. Such interpolation effects seem important for examining the implications of partial unawareness in applications. After all, it is what Bob subjectively knows that matters for his decision-making.

Below I rephrase the general model to characterize explicitly the interpolated knowledge, i.e., knowledge from the players' own perspectives. I call it the interpolation model for the obvious reason. For simplicity, I only consider the case where players are partially unaware at every state; the generalization is straightforward. The idea is to replace each subjective state in the general model with the corresponding state into which the player interpolates it. Thus the subjective state space is a subset of the objective state space. Moreover, the subjective factual information structure is naturally characterized by the restriction of the corresponding factual signals to the subjective state space. For example, from Bob's perspective, at the (subjective) state where Alice plays $a1$, although he observes that Alice does not play $a2$, i.e., $P(a1)=\{a1,A\}$, he only recognizes its implication on his interpolated subjective state space, i.e., $\{a1,A\}\cap\{a1,a2\}=\{a1\}$. Thus, under partial unawareness, the subjective model, as the player understands it, can be regarded as the restriction of the model to the set of states of which the player is aware.

Formally, I define the interpolation model as follows. Fix a state space $S$. Let the pair $(W_i,P_i)$ denote $i$'s information structure. As in the general model, $P_i$ denotes the factual signal structure; but the partial awareness function $W_i:S\to 2^S\setminus\{\emptyset\}$ associates each state with a subset of the state space $S$, interpreted as the set of states specifying resolutions of which $i$ is aware. Fix a state $s\in S$ and $j\in I$; the subjective information structure for player $j$ in $i$'s subjective model is: for any $t\in W_i(s)$,
$$W_j(t\mid^s_i)=W_i(s)\cap W_j(t); \tag{3.10}$$
$$P_j(t\mid^s_i)=W_i(s)\cap P_j(t). \tag{3.11}$$
Higher-order subjective models are defined analogously. Consider the following conditions on the information pair $(W,P)$:

1. Interpolated partial unawareness: for all $s\in S$, $P_i(s)\cap W_i(s)\neq\emptyset$;

2. Non-trivial partial unawareness: for any $(i,s)\in I\times S$, there exists $s'\in W_i(s)\cap P_i(s)$ such that $s\in W_j(t)\Rightarrow s'\in W_j(t)$ for all $j\in I$ and $t\in S$.

Interpolated partial unawareness says the player always has in mind some possible scenario(s). This can be regarded as a regularity condition: since the player is aware of the underlying uncertainty, in this case he must consider possible a scenario specifying "none of the above" and, by definition, be aware of it.\footnote{One may argue that in this case the player should become aware of the partial unawareness problem, i.e., he should become aware that there are ways to resolve the uncertainty beyond those he can imagine. However, this argument is vacuous as long as only payoff uncertainties are involved: one can always take the real line as the state space, which encompasses all partial unawareness problems. The real content of the "awareness of unawareness" argument seems to lie in the dependence of one's awareness of one's own action set on one's awareness of external uncertainties. In other words, if Bob's own action set is a function of Alice's action set, then once Bob is aware that there are actions of Alice's of which he is unaware, he also becomes aware that there are actions of his own of which he is unaware. The latter seems to have important implications in economic environments.} Non-trivial partial unawareness extends interpolated partial unawareness to higher orders: whenever $j$ is aware of $i$ and $s$, he is also aware that $i$ is aware of some possible scenario $s'$. This condition is more than a regularity condition, as the state $s'$ is required to be in the common awareness of all players who are aware of $i$ and $s$. It guarantees that interpolated partial unawareness holds in all subjective models.

Definition 4 In the interpolation model $(S,\mathbf{W},\mathbf{P})$, the information pair $(W_i,P_i)$ is rational if it satisfies the factual partition, rational awareness and interpolated partial unawareness conditions; the vector $(\mathbf{W},\mathbf{P})$ is interactively rational if $(W_i,P_i)$ is rational for all $i\in I$ and, in addition, it satisfies the non-trivial partial unawareness condition.

At $s$, $i$ can only reason about events in his own subjective state space, i.e., $\{E:E\subseteq W_i(s)\}$, which is the interpolated subjective algebra of events $i$ has in mind. Fix an event $E\subseteq W_i(s)$. Notice that from $i$'s perspective, the event "$E$ is not true" is represented by the set $W_i(s)\setminus E$, while objectively speaking, the set of states where $E$ is not true is $S\setminus E$. I refer to $S\setminus E$ as the objective negation of $E$, and to $W_i(s)\setminus E$ as $i$'s subjective negation of $E$ at $s$. Intuitively, in the standard model an event $E$ can equally be described as the negation of $S\setminus E$, i.e., $E=\neg(S\setminus E)$. It is precisely this equivalence that breaks down when players have partial unawareness. To capture this subtlety in knowledge hierarchies, I let $K_i^+(E)$ denote $i$'s positive knowledge of $E$, i.e., "$i$ knows $E$," and $K_i^-(E)$ denote $i$'s negative knowledge of $E$, i.e., "$i$ knows the negation of $E$." Similarly, $U_i^+(E)$ represents "$i$ is unaware of $E$" and $U_i^-(E)$ represents "$i$ is unaware of the negation of $E$." In standard models, $i$ has positive knowledge of $E$ if and only if he has negative knowledge of $S\setminus E$, i.e., $K_i^+(E)=K_i^-(S\setminus E)$, so there is no need for two operators.

Let $\diamond\in\{+,-\}$. For any $E\in\mathcal{E}$,
$$U_i^\diamond(E)=\{s\in S:E\not\subseteq W_i(s)\}; \tag{3.12}$$
$$K_i^+(E)=\{s\in S:[P_i(s)\cap W_i(s)]\subseteq E\subseteq W_i(s)\}; \tag{3.13}$$
$$K_i^-(E)=\{s\in S:[P_i(s)\cap W_i(s)]\subseteq[W_i(s)\setminus E],\ E\subseteq W_i(s)\}. \tag{3.14}$$
Higher-order knowledge is defined recursively. To keep in line with standard notation and for simplicity, I write $K_iK_j(E)$ for $K_i^+K_j^+(E)$, $K_i\neg K_j(E)$ for $K_i^+\neg K_j^+(E)$, and so on for all higher-order knowledge.
For all $E\in\mathcal{E}$, I define
$$K_iK_j^\diamond(E)=\big\{s\in S:[P_i(s)\cap W_i(s)]\subseteq K_j^\diamond(E\mid^s_i)\big\}, \tag{3.15}$$
where $K_j^\diamond(E\mid^s_i)$ is the first-order knowledge as defined in (3.13) and (3.14), adapted to the subjective interpolation model $(W_i(s),W_j(\cdot\mid^s_i),P_j(\cdot\mid^s_i))$. The interactive knowledge $K_iU_j$ and higher-order knowledge are defined analogously.

Consider the following properties of the interpolated knowledge hierarchies.

1. $s\in K_i^+(E)\iff s\in K_i^-(W_i(s)\setminus E)$: one knows $E$ if and only if one knows that its subjective negation, i.e., the event "not $E$," is not true;

2. $K_i^-(E)\subseteq S\setminus E$: negative knowledge of $E$ is always true;

3. If $s\in K_i^+(E)$, then $s\notin E\iff s\notin W_i(s)$: suppose one knows $E$; then such knowledge is false if and only if one is partially unaware of the current state;

4. $U_i^\diamond(E)=\neg K_i^\diamond(E)\cap\neg K_i\neg K_i^\diamond(E)$: one is unaware of $E$ if and only if one has no positive (negative) knowledge of $E$, and one does not know that one has no positive (negative) knowledge of $E$;

5. $K_i^\diamond(E)=K_iK_i^\diamond(E)$: one has positive (negative) knowledge of $E$ if and only if one knows one has positive (negative) knowledge of $E$;

6. $\neg K_i^\diamond(E)\cap\neg U_i^+(E)=K_i\neg K_i^\diamond(E)$: one is aware of $E$ yet does not have positive (negative) knowledge of $E$ if and only if one knows one does not have positive (negative) knowledge of $E$;

7. $K_{i_1}\cdots K_{i_n}K_j^\diamond(E)=K_{i_1}\cdots K_{i_n}K_jK_j^\diamond(E)$: $i_1$ knows $\cdots$ $i_n$ knows $j$ has positive (negative) knowledge of $E$ if and only if $i_1$ knows $\cdots$ $i_n$ knows $j$ knows he has positive (negative) knowledge of $E$;

8. $K_{i_1}\cdots K_{i_n}\neg K_j^\diamond(E)\cap K_{i_1}\cdots K_{i_n}\neg U_j^+(E)=K_{i_1}\cdots K_{i_n}K_j\neg K_j^\diamond(E)$: $i_1$ knows $\cdots$ $i_n$ knows $j$ is aware of $E$ yet does not have positive (negative) knowledge of $E$ if and only if $i_1$ knows $\cdots$ $i_n$ knows $j$ knows he has no positive (negative) knowledge of $E$.

Proposition 8 In the interpolation model $(S,\mathbf{W},\mathbf{P})$, suppose $(W_i,P_i)$ is rational. Then $i$'s interpolated knowledge hierarchy satisfies properties 1-6 above; if, in addition, $(\mathbf{W},\mathbf{P})$ is interactively rational, then the interactive interpolated knowledge hierarchy also satisfies properties 7-8.

Property 1 makes sure the positive and negative knowledge operators are defined appropriately. In particular, it says positive knowledge of $E$ is indeed equivalent to negative knowledge of its subjective negation in the subjective model, just as in the standard model. Properties 2-3 say the interpolated knowledge hierarchy satisfies a weakening of the truth axiom, which states that whenever the player knows $E$, $E$ must indeed be true.\footnote{The mathematical formula for the truth axiom is $K(E)\subseteq E$.} The truth axiom is shown to be equivalent to the requirement that players never exclude the true state (Bacharach 1985), which becomes problematic when the player is partially unaware of the true state and hence necessarily excludes it. Property 2 says the truth of one's negative knowledge is not affected by partial unawareness, while property 3 says false positive knowledge occurs precisely when the player is unaware of the true state, in which case all his positive knowledge turns out to be false. Properties 4-6 extend those in Proposition 2, and properties 7-8 extend those in Proposition 3. Thus the interpolated knowledge hierarchies have structures parallel to those in the general model, characterized by essentially the same properties under analogous conditions on information structures.
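The interpolated operators (3.12)-(3.14) and properties 1-3 can be checked on the Alice-Bob example. The encoding below is mine: at $a1$ and $A$, Bob's aware-state set is taken to be $\{a1,a2\}$, matching the interpolation of the subjective state $\{a1,A\}$ as $a1$.

```python
S = {'a1', 'a2', 'A'}
W = {'a1': {'a1', 'a2'}, 'A': {'a1', 'a2'}, 'a2': {'a1', 'a2', 'A'}}
P = {'a1': {'a1', 'A'}, 'A': {'a1', 'A'}, 'a2': {'a2'}}

def U(E):     # (3.12): unaware of E (either sign) iff E exceeds the aware set
    return {s for s in S if not set(E) <= W[s]}

def Kpos(E):  # (3.13): the recognized signal implies E, and E is in mind
    return {s for s in S if (P[s] & W[s]) <= set(E) <= W[s]}

def Kneg(E):  # (3.14): the recognized signal implies the subjective negation
    return {s for s in S if (P[s] & W[s]) <= (W[s] - set(E)) and set(E) <= W[s]}

assert Kpos({'a1'}) == {'a1', 'A'}        # at A Bob "knows" Alice plays a1
assert Kneg({'a2'}) == {'a1', 'A'}
assert Kneg({'a2'}) <= S - {'a2'}         # property 2: negative knowledge is true
for s in Kpos({'a1'}):
    # property 1: positive knowledge of E = negative knowledge of W(s)\E
    assert s in Kneg(W[s] - {'a1'})
    # property 3: the knowledge is false exactly where s lies outside W(s)
    assert (s not in {'a1'}) == (s not in W[s])
```

At $A$, Bob's positive knowledge of "Alice plays a1" is false, and indeed $A\notin W(A)$: he is partially unaware of the true state, exactly as property 3 predicts.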