Logic and Artificial Intelligence Lecture 7


Logic and Artificial Intelligence, Lecture 7. Eric Pacuit. Currently Visiting the Center for Formal Epistemology, CMU. Center for Logic and Philosophy of Science, Tilburg University. ai.stanford.edu/~epacuit, e.j.pacuit@uvt.nl. September 20, 2011.

Agreeing to Disagree Theorem: Suppose that n agents share a common prior and have different private information. If there is common knowledge in the group of the posterior probabilities, then the posteriors must be equal. Robert Aumann. Agreeing to Disagree. Annals of Statistics 4 (1976).
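
Stated in event form (a standard formalization of the theorem above; the notation is not taken from the slide), with a common prior P, partitions Π_1, ..., Π_n, and C the common knowledge operator:

```latex
% Aumann (1976) in a partition model (W, \Pi_1, \dots, \Pi_n) with common prior P.
% Write [\Pr\nolimits_i(E) = q_i] := \{\, w \in W : P(E \mid \Pi_i(w)) = q_i \,\}.
\[
  w \in C\Bigl(\,\bigcap_{i=1}^{n} [\Pr\nolimits_i(E) = q_i]\,\Bigr)
  \quad\Longrightarrow\quad
  q_1 = q_2 = \cdots = q_n .
\]
% Proof idea: the cell of w in the meet (finest common coarsening) of the
% partitions is a union of \Pi_i-cells, each carrying posterior q_i, so
% P(E \mid \text{meet cell of } w) = q_i for every i.
```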

G. Bonanno and K. Nehring. Agreeing to Disagree: A Survey. (manuscript) 1997.

2 Scientists Perform an Experiment. They agree that the true state is one of seven possible states: w1, w2, w3, w4, w5, w6, w7.

They agree on a common prior: P(w1) = 2/32, P(w2) = 4/32, P(w3) = 8/32, P(w4) = 4/32, P(w5) = 5/32, P(w6) = 7/32, P(w7) = 2/32.

They agree that Experiment 1 would produce the blue partition and Experiment 2 the red partition.

They are interested in the truth of E = {w2, w3, w5, w6}.

So, they agree that P(E) = 24/32.

Also, that if the true state is w1, then Experiment 1 will yield P(E | I) = P(E ∩ I) / P(I) = (12/32) / (14/32) = 12/14, where I is the cell of the blue partition containing w1.

Suppose the true state is w7 and the agents perform the experiments. Then Pr1(E) = 12/14 and Pr2(E) = 15/21.

Suppose they exchange emails with the new subjective probabilities: Pr1(E) = 12/14 and Pr2(E) = 15/21.

Agent 2 learns that w4 is NOT the true state, and, similarly, Agent 1 learns that w5 is NOT the true state.

The new probabilities are Pr1(E | I) = 7/9 and Pr2(E | I) = 15/17.

After exchanging this information (Pr1(E | I) = 7/9 and Pr2(E | I) = 15/17), Agent 2 learns that w3 is NOT the true state.

No more revisions are possible and the agents agree on the posterior probabilities: both now assign E probability 7/9.
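
The exchange above can be replayed mechanically. The sketch below is not from the slides: the prior and E are as given, but the blue and red partitions are a reconstruction read off the stated posteriors (12/14, 15/21, then 7/9, 15/17), since the figure did not survive transcription. In each round both posteriors are announced publicly, and the states incompatible with the announcements are discarded.

```python
# Replaying the two-scientists example (partitions reconstructed, see above).
from fractions import Fraction

prior = {'w1': 2, 'w2': 4, 'w3': 8, 'w4': 4, 'w5': 5, 'w6': 7, 'w7': 2}  # /32
E = {'w2', 'w3', 'w5', 'w6'}

partition = {                                                  # assumed cells
    1: [{'w1', 'w2', 'w3'}, {'w4'}, {'w5', 'w6', 'w7'}],       # Experiment 1 (blue)
    2: [{'w1', 'w2', 'w5'}, {'w3', 'w4', 'w6', 'w7'}],         # Experiment 2 (red)
}

def post(info):
    """Posterior probability of E given the information set `info`."""
    return Fraction(sum(prior[w] for w in info if w in E),
                    sum(prior[w] for w in info))

def cell(i, w, public):
    """Agent i's current information at w: her cell cut down by what has
    been publicly established so far."""
    return next(S for S in partition[i] if w in S) & public

true_state = 'w7'
public = set(prior)        # at the start nothing has been announced
while True:
    announced = {i: post(cell(i, true_state, public)) for i in (1, 2)}
    print(announced)       # fractions print in lowest terms (12/14 = 6/7, ...)
    # A state survives iff each agent's posterior there matches her announcement.
    new_public = {w for w in public
                  if all(post(cell(i, w, public)) == announced[i] for i in (1, 2))}
    if new_public == public:
        break              # nothing new is learned: the posteriors are common
    public = new_public    # knowledge, and they agree (here at 7/9)
```

Running it prints three rounds of announcements: 6/7 and 5/7 (that is, 12/14 and 15/21), then 7/9 and 15/17, and finally 7/9 for both agents.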

Dissecting Aumann's Theorem. Qualitative versions: like-minded individuals cannot agree to make different decisions. M. Bacharach. Some Extensions of a Claim of Aumann in an Axiomatic Model of Knowledge. Journal of Economic Theory (1985). J.A.K. Cave. Learning to Agree. Economics Letters (1983). D. Samet. The Sure-Thing Principle and Independence of Irrelevant Knowledge. 2008.

The Framework. Knowledge Structure: ⟨W, {Π_i}_{i ∈ A}⟩, where each Π_i is a partition on W (Π_i(w) is the cell in Π_i containing w). Decision Function: Let D be a nonempty set of decisions. A decision function for i ∈ A is a function d_i : W → D. A vector d = (d_1, ..., d_n) is a decision function profile. Let [d_i = d] = {w | d_i(w) = d}. (A1) Each agent knows her own decision: [d_i = d] ⊆ K_i([d_i = d]).
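
A small sketch of these definitions (the states, partitions, and decisions below are made up for illustration; only the definitions of K_i, [d_i = d], and (A1) come from the slide):

```python
# Toy knowledge structure with two agents and a decision function profile.
W = frozenset({'w1', 'w2', 'w3', 'w4'})

Pi = {  # partitions: agent 1 cannot tell w1 from w2; agent 2 cannot tell w2 from w3
    1: [frozenset({'w1', 'w2'}), frozenset({'w3'}), frozenset({'w4'})],
    2: [frozenset({'w1'}), frozenset({'w2', 'w3'}), frozenset({'w4'})],
}

def cell(i, w):
    """Pi_i(w): the cell of agent i's partition containing w."""
    return next(S for S in Pi[i] if w in S)

def K(i, E):
    """Knowledge operator: K_i(E) = {w : Pi_i(w) ⊆ E}."""
    return frozenset(w for w in W if cell(i, w) <= E)

d = {  # decision functions d_i : W -> D, constant on each agent's cells
    1: {'w1': 'buy', 'w2': 'buy', 'w3': 'sell', 'w4': 'sell'},
    2: {'w1': 'buy', 'w2': 'sell', 'w3': 'sell', 'w4': 'sell'},
}

def dec(i, choice):
    """The event [d_i = d] = {w : d_i(w) = d}."""
    return frozenset(w for w in W if d[i][w] == choice)

# (A1) each agent knows her own decision: [d_i = d] ⊆ K_i([d_i = d]).
# This holds exactly when d_i is constant on every cell of Pi_i.
for i in (1, 2):
    for choice in ('buy', 'sell'):
        assert dec(i, choice) <= K(i, dec(i, choice))
```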

Comparing Knowledge. [j ⪰ i]: agent j is at least as knowledgeable as agent i. [j ⪰ i] := ⋂_{E ∈ ℘(W)} (K_i(E) ⇒ K_j(E)) = ⋂_{E ∈ ℘(W)} ((W ∖ K_i(E)) ∪ K_j(E)). If w ∈ [j ⪰ i], then j knows at w every event that i knows there. [j ≈ i] := [j ⪰ i] ∩ [i ⪰ j] (i and j are equally knowledgeable).
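
In a partition structure this event has a simple characterization: w ∈ [j ⪰ i] exactly when Π_j(w) ⊆ Π_i(w). The sketch below (a toy model with made-up states) computes [j ⪰ i] both from the definition, by intersecting over all events, and from the cell test, and checks that the two agree:

```python
# Comparing knowledge in a toy two-agent partition structure.
from itertools import chain, combinations

W = frozenset({'w1', 'w2', 'w3'})
Pi = {
    1: [frozenset({'w1', 'w2'}), frozenset({'w3'})],                   # coarser
    2: [frozenset({'w1'}), frozenset({'w2'}), frozenset({'w3'})],      # finer
}

def cell(i, w):
    return next(S for S in Pi[i] if w in S)

def K(i, E):
    return frozenset(w for w in W if cell(i, w) <= E)

def events(W):
    """All events E ⊆ W."""
    ws = list(W)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))]

def geq(j, i):
    """[j ⪰ i] as defined on the slide: the intersection over all E of (W ∖ K_i(E)) ∪ K_j(E)."""
    result = W
    for E in events(W):
        result &= (W - K(i, E)) | K(j, E)
    return result

def geq_cells(j, i):
    """Equivalent cell test: j's information at w refines i's."""
    return frozenset(w for w in W if cell(j, w) <= cell(i, w))

assert geq(2, 1) == geq_cells(2, 1) == W                    # 2 always knows more
assert geq(1, 2) == geq_cells(1, 2) == frozenset({'w3'})    # 1 matches 2 only at w3
```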

Interpersonal Sure-Thing Principle (ISTP). For any pair of agents i and j and any decision d, K_i([j ⪰ i] ∩ [d_j = d]) ⊆ [d_i = d].
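
On a finite model, ISTP can be tested directly by checking the displayed inclusion for every pair of agents and every decision. A minimal sketch for a hypothetical two-detective profile (names and states are illustrative; the cell test for [j ⪰ i] is as in the previous sketch):

```python
# Checking the Interpersonal Sure-Thing Principle on a small model.
W = frozenset({'w1', 'w2', 'w3', 'w4'})
Pi = {
    1: [frozenset({'w1', 'w2'}), frozenset({'w3', 'w4'})],                # Bob (coarser)
    2: [frozenset({'w1'}), frozenset({'w2'}), frozenset({'w3', 'w4'})],   # Alice (finer)
}
d = {
    1: {'w1': 'arrest', 'w2': 'arrest', 'w3': 'wait', 'w4': 'wait'},
    2: {'w1': 'arrest', 'w2': 'arrest', 'w3': 'wait', 'w4': 'wait'},
}
D = ('arrest', 'wait')

def cell(i, w):
    return next(S for S in Pi[i] if w in S)

def K(i, E):
    return frozenset(w for w in W if cell(i, w) <= E)

def geq(j, i):
    """[j ⪰ i]: at w, agent j's cell is contained in agent i's cell."""
    return frozenset(w for w in W if cell(j, w) <= cell(i, w))

def dec(i, choice):
    return frozenset(w for w in W if d[i][w] == choice)

def satisfies_istp():
    """K_i([j ⪰ i] ∩ [d_j = d]) ⊆ [d_i = d] for all i != j and d in D."""
    return all(K(i, geq(j, i) & dec(j, choice)) <= dec(i, choice)
               for i in Pi for j in Pi if i != j for choice in D)

print(satisfies_istp())    # True for this profile
```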

Interpersonal Sure-Thing Principle (ISTP): Illustration. Suppose that Alice and Bob, two detectives who graduated from the same police academy, are assigned to investigate a murder case. If they are exposed to different evidence, they may reach different decisions. Yet, being students of the same academy, the method by which they arrive at their conclusions is the same. Suppose now that detective Bob, a father of four who returns home every day at five o'clock, collects all the information about the case at hand together with detective Alice.

However, Alice, single and a workaholic, continues to collect more information every day until the wee hours of the morning, information which she does not necessarily share with Bob. Obviously, Bob knows that Alice is at least as knowledgeable as he is. Suppose that he also knows what Alice's decision is. Since Alice uses the same investigation method as Bob, he knows that had he been in possession of the more extensive knowledge that Alice has collected, he would have made the same decision as she did. Thus, this is indeed his decision.

Implications of ISTP. Proposition. If the decision function profile d satisfies ISTP, then [i ≈ j] ⊆ ⋂_{d ∈ D} ([d_i = d] ⇔ [d_j = d]); that is, equally knowledgeable agents make the same decision.

ISTP Expandability. Agent i is an epistemic dummy if it is always the case that all the agents are at least as knowledgeable as i; that is, for each agent j, [j ⪰ i] = W. A decision function profile d on ⟨W, Π_1, ..., Π_n⟩ is ISTP expandable if for any expanded structure ⟨W, Π_1, ..., Π_{n+1}⟩, where n + 1 is an epistemic dummy, there exists a decision function d_{n+1} such that (d_1, d_2, ..., d_{n+1}) satisfies ISTP.
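
Concretely, an epistemic dummy can be modeled as an agent whose partition is the trivial partition {W}: then Π_j(w) ⊆ W = Π_dummy(w) for every j and w, so [j ⪰ dummy] = W. A minimal sketch (illustrative model; whether a suitable d_{n+1} exists, which is what ISTP expandability requires, is not checked here):

```python
# Expanding a toy structure with an epistemic dummy (agent 3).
W = frozenset({'w1', 'w2', 'w3'})
Pi = {
    1: [frozenset({'w1'}), frozenset({'w2', 'w3'})],
    2: [frozenset({'w1', 'w2'}), frozenset({'w3'})],
}
Pi[3] = [W]    # the dummy's only information is the trivial event W

def cell(i, w):
    return next(S for S in Pi[i] if w in S)

def geq(j, i):
    """[j ⪰ i] via the cell test."""
    return frozenset(w for w in W if cell(j, w) <= cell(i, w))

# Every agent is at least as knowledgeable as the dummy, at every state.
assert all(geq(j, 3) == W for j in (1, 2, 3))
```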

ISTP Expandability: Illustration. Suppose that after making their decisions, Alice and Bob are told that another detective, one E.P. Dummy, who graduated from the very same police academy, had also been assigned to investigate the same case. In principle, they would need to review their decisions in light of the third detective's knowledge: knowing what they know about the third detective, his usual sources of information, for example, may impinge upon their decision.

But this is not so in the case of detective Dummy. It is commonly known that the only information source of this detective, known among his colleagues as the couch detective, is the TV set. Thus, it is commonly known that every detective is at least as knowledgeable as Dummy. The news that he had been assigned to the same case is completely irrelevant to the conclusions that Alice and Bob have reached. Obviously, based on the information he gets from the media, Dummy also makes a decision. We may assume that the decisions made by the three detectives satisfy the ISTP, for exactly the same reason we assumed it for the two detectives' decisions.

Generalized Agreement Theorem. If d is an ISTP expandable decision function profile on a partition structure ⟨W, Π_1, ..., Π_n⟩, then for any decisions d_1, ..., d_n that are not all identical, C(⋂_i [d_i = d_i]) = ∅.
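
The common knowledge operator C can be computed from the meet of the partitions: C(E) = {w : M(w) ⊆ E}, where M(w) is the cell of w in the finest common coarsening. The sketch below computes C for the two-scientists example (with the reconstructed partitions used earlier, and with each agent's "decision" taken to be her posterior of E, so (A1) holds) and checks that no profile of different posteriors is ever common knowledge; this is the probabilistic instance of the agreement property:

```python
# Common knowledge via the meet, checked on the two-scientists example.
from fractions import Fraction

prior = {'w1': 2, 'w2': 4, 'w3': 8, 'w4': 4, 'w5': 5, 'w6': 7, 'w7': 2}  # /32
E = {'w2', 'w3', 'w5', 'w6'}
W = set(prior)
Pi = {
    1: [{'w1', 'w2', 'w3'}, {'w4'}, {'w5', 'w6', 'w7'}],      # reconstructed blue
    2: [{'w1', 'w2', 'w5'}, {'w3', 'w4', 'w6', 'w7'}],        # reconstructed red
}

def cell(i, w):
    return next(S for S in Pi[i] if w in S)

def meet_cell(w):
    """Cell of w in the meet: close {w} under reachability through any agent's cells."""
    M = {w}
    while True:
        bigger = set().union(*(cell(i, v) for v in M for i in Pi))
        if bigger == M:
            return M
        M = bigger

def C(event):
    """Common knowledge operator: C(E) = {w : meet cell of w ⊆ E}."""
    return {w for w in W if meet_cell(w) <= event}

def post(i, w):
    """Agent i's posterior of E at w; this is her 'decision' d_i(w)."""
    I = cell(i, w)
    return Fraction(sum(prior[v] for v in I if v in E), sum(prior[v] for v in I))

# For any pair of *different* posterior values q1 != q2, the event
# "[d_1 = q1] and [d_2 = q2]" is never common knowledge.
values = {i: {post(i, w) for w in W} for i in Pi}
for q1 in values[1]:
    for q2 in values[2]:
        if q1 != q2:
            disagreement = ({w for w in W if post(1, w) == q1} &
                            {w for w in W if post(2, w) == q2})
            assert C(disagreement) == set()
# (In this model the meet has a single cell, all of W, so only the trivial
#  event W is ever common knowledge.)
```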

Analyzing Agreement Theorems in Dynamic Epistemic/Doxastic Logic. C. Degremont and O. Roy. Agreement Theorems in Dynamic-Epistemic Logic. In A. Heifetz (ed.), Proceedings of TARK XI, 2009, pp. 91-98; forthcoming in the Journal of Philosophical Logic. L. Demey. Agreeing to Disagree in Probabilistic Dynamic Epistemic Logic. Master's thesis, ILLC, 2010.

Dissecting Aumann's Theorem. No Trade Theorems (Milgrom and Stokey); from probabilities of events to aggregates (McKelvey and Page); the Common Prior Assumption, etc. How do the posteriors become common knowledge? J. Geanakoplos and H. Polemarchakis. We Can't Disagree Forever. Journal of Economic Theory (1982).

Geanakoplos and Polemarchakis. Revision Process: Given an event A, 1: "My probability of A is q", 2: "My probability of A is r", 1: "My probability of A is now q′", 2: "My probability of A is now r′", etc. Assuming that the information partitions are finite, given an event A, the revision process converges in finitely many steps. For each n, there are examples where the process takes n steps. An indirect communication equilibrium is not necessarily a direct communication equilibrium.

Parikh and Krasucki. Protocol: a graph on the set of agents specifying the legal pairwise communications (who can talk to whom). If the protocol is fair, then the limiting probability of an event A will be the same for all agents in the group. Consensus can be reached without common knowledge: everyone must know the common prices of commodities; however, it does not make sense to demand that everyone knows the details of every commercial transaction.