Logic and Artificial Intelligence Lecture 13
Logic and Artificial Intelligence, Lecture 13
Eric Pacuit
Currently Visiting the Center for Formal Epistemology, CMU
Center for Logic and Philosophy of Science, Tilburg University
ai.stanford.edu/~epacuit
October 13, 2011
Epistemic Plausibility Models

Epistemic-plausibility model: M = ⟨W, {∼_i}_{i∈A}, {⪯_i}_{i∈A}, V⟩

Language: ϕ := p | ¬ϕ | ϕ ∧ ψ | K_iϕ | B_i^ϕψ | [⪯_i]ϕ | B_i^sϕ

Truth ([[ϕ]]_M = {w | M, w ⊨ ϕ}):
- M, w ⊨ K_iϕ iff for all v ∈ W, if w ∼_i v then M, v ⊨ ϕ
- M, w ⊨ B_i^ϕψ iff for all v ∈ Min_{⪯_i}([[ϕ]]_M ∩ [w]_i), M, v ⊨ ψ
- M, w ⊨ [⪯_i]ϕ iff for all v ∈ W, if v ⪯_i w then M, v ⊨ ϕ
- M, w ⊨ B_i^sϕ iff [[ϕ]]_M ∩ [w]_i ≠ ∅ and every world in [[ϕ]]_M ∩ [w]_i is strictly more plausible than every world in [[¬ϕ]]_M ∩ [w]_i
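The truth conditions above can be sketched directly. The following is a minimal single-agent sketch (an assumed representation, not from the slides): worlds are all in one information cell, and the plausibility order is given by a rank function, lower rank meaning more plausible.

```python
# Minimal sketch of a single-agent plausibility model: all worlds are
# epistemically indistinguishable; lower rank = more plausible.

def knows(worlds, prop):
    # K phi: phi holds at every epistemically possible world
    return all(prop(w) for w in worlds)

def min_states(worlds, rank):
    # most plausible worlds among `worlds`
    if not worlds:
        return set()
    best = min(rank[w] for w in worlds)
    return {w for w in worlds if rank[w] == best}

def believes_cond(worlds, rank, phi, psi):
    # B^phi psi: psi holds at the most plausible phi-worlds
    return all(psi(w) for w in min_states({w for w in worlds if phi(w)}, rank))

# a toy two-world model: p true only at the less plausible world
worlds = {'w1', 'w2'}
rank = {'w1': 1, 'w2': 0}          # w2 strictly more plausible
val = {'w1': {'p'}, 'w2': set()}

p = lambda w: 'p' in val[w]
assert not knows(worlds, p)                                             # ¬Kp
assert believes_cond(worlds, rank, lambda w: True, lambda w: not p(w))  # B¬p
```

Plain belief is the special case B^⊤, i.e. `believes_cond` with the trivially true condition.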
Grades of Doxastic Strength

[Diagram: a linearly ordered plausibility model v0 ≺ v1 ≺ w ≺ v2 with a valuation for p.]

Suppose that w is the current state.
- Belief (Bp)
- Robust (safe) belief ([⪯]p)
- Strong belief (B^s p)
- Knowledge (Kp)
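The four grades can be evaluated side by side. This is a sketch over a single information cell with an assumed rank function; the concrete model and the atom q below are hypothetical, not the one drawn on the slide.

```python
# Four doxastic attitudes at a state w of a linearly ordered model
# v0 ≺ v1 ≺ w ≺ v2 (lower rank = more plausible).

rank = {'v0': 0, 'v1': 1, 'w': 2, 'v2': 3}
worlds = set(rank)

def Min(S):
    b = min(rank[v] for v in S)
    return {v for v in S if rank[v] == b}

def K(p, w):      return all(p(v) for v in worlds)                        # knowledge
def B(p, w):      return all(p(v) for v in Min(worlds))                   # plain belief
def robust(p, w): return all(p(v) for v in worlds if rank[v] <= rank[w])  # [⪯]p
def strong(p, w):                                                         # B^s p
    P = {v for v in worlds if p(v)}
    return bool(P) and all(rank[x] < rank[y] for x in P for y in worlds - P)

q = lambda v: v == 'v0'   # q true only at the most plausible world
assert B(q, 'w') and strong(q, 'w')
assert not robust(q, 'w') and not K(q, 'w')   # the grades come apart
```

Here the agent believes q, and believes it strongly (every q-world beats every ¬q-world), yet the belief is not robust: a ¬q-world (v1) is at least as plausible as the actual state w.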
Agents may differ in precisely how they incorporate new information into their epistemic states. These differences are based, in part, on the agents' perception of the source of the information. For example, an agent may consider a particular source of information infallible (not allowing for the possibility that the source is mistaken) or merely trustworthy (accepting the information as reliable, though allowing for the possibility of a mistake).
Hard and Soft Updates

M = ⟨W, {∼_i}_{i∈A}, {⪯_i}_{i∈A}, V⟩  ——find out that ϕ——→  M′ = ⟨W′, {∼′_i}_{i∈A}, {⪯′_i}_{i∈A}, V′⟩, with W′ ⊆ W
Two coins:

w2: T1, H2    w4: H1, H2
w1: T1, T2    w3: H1, T2

Min_⪯([w1]) = {w4}, so w1 ⊨ B(H1 ∧ H2)
Min_⪯([w1] ∩ [[T1]]_M) = {w2}, so w1 ⊨ B^{T1}H2
Min_⪯([w1] ∩ [[T2]]_M) = {w3}, so w1 ⊨ B^{T2}H1
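The three Min-computations can be checked mechanically. The ranks below are an assumption chosen to be consistent with the slide's facts (w4 most plausible, w2 the best T1-world, w3 the best T2-world).

```python
# the two-coin model; worlds map to the literals true at them
worlds = {'w1': ('T1', 'T2'), 'w2': ('T1', 'H2'),
          'w3': ('H1', 'T2'), 'w4': ('H1', 'H2')}
rank = {'w4': 0, 'w2': 1, 'w3': 2, 'w1': 3}   # assumed plausibility ranks

def Min(S):
    best = min(rank[w] for w in S)
    return {w for w in S if rank[w] == best}

def holds(w, lit):
    return lit in worlds[w]

assert Min(set(worlds)) == {'w4'}                            # B(H1 ∧ H2)
assert Min({w for w in worlds if holds(w, 'T1')}) == {'w2'}  # B^{T1} H2
assert Min({w for w in worlds if holds(w, 'T2')}) == {'w3'}  # B^{T2} H1
```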
Suppose the agent finds out that T1 is/may be true. Three ways to incorporate this:

!(T1) deletes the ¬T1-worlds w3 and w4.
⇑(T1) makes the T1-worlds w1, w2 strictly more plausible than w3, w4, keeping the old order within each zone.
↑(T1) moves only the best T1-world, w2, to the top.

[Diagrams: the three updated models.]
Informative Actions

[Diagram: a plausibility model over worlds A, B, C, D, E, with ϕ true at some of them.]

- Public announcement: information from an infallible source (!ϕ): A ≺_i B
- Conservative upgrade: information from a trusted source (↑ϕ): A ≺_i C ≺_i D ≺_i B, E
- Radical upgrade: information from a strongly trusted source (⇑ϕ): A ≺_i B ≺_i C ≺_i D ≺_i E
Informative Actions

Public announcement: information from an infallible source (!ϕ): A ≺_i B

M^{!ϕ} = ⟨W^{!ϕ}, {∼_i^{!ϕ}}_{i∈A}, {⪯_i^{!ϕ}}_{i∈A}, V^{!ϕ}⟩, where
W^{!ϕ} = [[ϕ]]_M
∼_i^{!ϕ} = ∼_i ∩ (W^{!ϕ} × W^{!ϕ})
⪯_i^{!ϕ} = ⪯_i ∩ (W^{!ϕ} × W^{!ϕ})
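In the rank representation, public announcement is just deletion of the ¬ϕ-worlds. A sketch, reusing the two-coin model with the same assumed ranks:

```python
# !phi: delete the ¬phi-worlds and restrict the plausibility order
def announce(worlds, rank, phi):
    W = {w for w in worlds if phi(w)}
    return W, {w: rank[w] for w in W}

worlds = {'w1', 'w2', 'w3', 'w4'}
rank = {'w4': 0, 'w2': 1, 'w3': 2, 'w1': 3}   # assumed ranks, as before
T1 = lambda w: w in ('w1', 'w2')

W, r = announce(worlds, rank, T1)
assert W == {'w1', 'w2'}
assert min(W, key=r.get) == 'w2'   # after !T1 the agent believes T1 ∧ H2
```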
Informative Actions

Radical upgrade (⇑ϕ): A ≺_i B ≺_i C ≺_i D ≺_i E

M^{⇑ϕ} = ⟨W, {∼_i}_{i∈A}, {⪯_i^{⇑ϕ}}_{i∈A}, V⟩. Let [[ϕ]]_i^w = {x | M, x ⊨ ϕ} ∩ [w]_i:
- for all x ∈ [[ϕ]]_i^w and y ∈ [[¬ϕ]]_i^w, set x ≺_i^{⇑ϕ} y;
- for all x, y ∈ [[ϕ]]_i^w, set x ⪯_i^{⇑ϕ} y iff x ⪯_i y; and
- for all x, y ∈ [[¬ϕ]]_i^w, set x ⪯_i^{⇑ϕ} y iff x ⪯_i y.
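On ranks, the three clauses amount to shifting every ¬ϕ-world past every ϕ-world while preserving the order inside each zone. A sketch on the two-coin model (assumed ranks as before):

```python
# ⇑phi on ranks: phi-worlds keep their ranks, ¬phi-worlds are shifted
# above them; old order is preserved inside each zone.
def radical_upgrade(rank, phi):
    off = max(rank.values()) + 1
    return {w: rank[w] if phi(w) else rank[w] + off for w in rank}

rank = {'w4': 0, 'w2': 1, 'w3': 2, 'w1': 3}   # coin model, assumed ranks
T1 = lambda w: w in ('w1', 'w2')

new = radical_upgrade(rank, T1)
assert min(new, key=new.get) == 'w2'   # now believes T1 ∧ H2
assert new['w4'] < new['w3']           # old order kept among the ¬T1-worlds
```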
Informative Actions

Conservative upgrade (↑ϕ): A ≺_i C ≺_i D ≺_i B, E

Conservative upgrade is radical upgrade with the formula best_i(ϕ, w) := Min_{⪯_i}([w]_i ∩ {x | M, x ⊨ ϕ}):
1. if v ∈ best_i(ϕ, w), then v ≺_i^{↑ϕ} x for all x ∈ [w]_i, and
2. for all x, y ∈ [w]_i \ best_i(ϕ, w), x ⪯_i^{↑ϕ} y iff x ⪯_i y.
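The contrast with the radical upgrade is visible on the same coin model: only the best T1-world is promoted, and the other T1-world stays where it was. A sketch with the same assumed ranks:

```python
# ↑phi on ranks: only the most plausible phi-worlds move to the top;
# everything else keeps its relative order.
def conservative_upgrade(rank, phi):
    best_rank = min(rank[w] for w in rank if phi(w))
    best = {w for w in rank if phi(w) and rank[w] == best_rank}
    return {w: 0 if w in best else rank[w] + 1 for w in rank}

rank = {'w4': 0, 'w2': 1, 'w3': 2, 'w1': 3}   # coin model, assumed ranks
T1 = lambda w: w in ('w1', 'w2')

new = conservative_upgrade(rank, T1)
assert min(new, key=new.get) == 'w2'   # the best T1-world is now on top
assert new['w4'] < new['w1']           # the other T1-world stays least plausible
```

Compare with ⇑T1, which would lift w1 above both ¬T1-worlds; that difference is exactly what the conditional-belief reduction axioms track.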
Reduction Axioms

[⇑ϕ]B^ψχ ↔ (L(ϕ ∧ [⇑ϕ]ψ) → B^{ϕ ∧ [⇑ϕ]ψ}[⇑ϕ]χ) ∧ (¬L(ϕ ∧ [⇑ϕ]ψ) → B^{[⇑ϕ]ψ}[⇑ϕ]χ)

[↑ϕ]B^ψχ ↔ (B^ϕ¬[↑ϕ]ψ → B^{[↑ϕ]ψ}[↑ϕ]χ) ∧ (¬B^ϕ¬[↑ϕ]ψ → B^{ϕ ∧ [↑ϕ]ψ}[↑ϕ]χ)

(Here L is the existential epistemic modality: Lϕ holds iff ϕ is true at some epistemically possible world.)
Composition

[!ϕ][!ψ]χ ↔ [!(ϕ ∧ [!ϕ]ψ)]χ
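For factual (non-epistemic) ψ, [!ϕ]ψ reduces to ψ, so the composition law says that announcing ϕ and then ψ is the same as announcing their conjunction in one step. A sketch of that special case:

```python
# public announcement as world deletion
def announce(worlds, phi):
    return {w for w in worlds if phi(w)}

# worlds = bit pairs: world n makes p true iff n & 1, q true iff n & 2
W = {0, 1, 2, 3}
p = lambda w: w & 1
q = lambda w: w & 2

left = announce(announce(W, p), q)                 # !p, then !q
right = announce(W, lambda w: p(w) and q(w))       # !(p ∧ q) in one step
assert left == right == {3}
```

The general law, with epistemic ψ, needs the relativized formula ϕ ∧ [!ϕ]ψ on the right because the truth of ψ can change under the first announcement.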
Composition

[Diagram: a three-world model over p, q; announcing p ∧ q in one step yields the same model as announcing p and then q.]
What happens as beliefs change over time (iterated belief revision)?
[Diagram: a sequence of updates

M0 —τ(ϕ1)→ M1 —τ(ϕ2)→ M2 —τ(ϕ3)→ ⋯ —τ(ϕn)→ Mf

from the initial model M0 to a fixed point Mf, with intermediate stages labeled O_i(S), P_j(S′), O_j(T), P_j(T′), ..., "nothing new".]

Where do the ϕk come from? From the players' practical reasoning/rational requirements.
Iterated Updates

- A sequence of public announcements !ϕ1, !ϕ2, !ϕ3, ..., !ϕn always reaches a fixed point.
- Contradictory beliefs lead to oscillations: ⇑ϕ, ⇑¬ϕ, ...
- Simple beliefs may never stabilize.
- Simple beliefs may stabilize while conditional beliefs do not.

A. Baltag and S. Smets. Group Belief Dynamics under Iterated Revision: Fixed Points and Cycles of Joint Upgrades. TARK.
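The oscillation phenomenon is easy to reproduce in the rank sketch: alternating radical upgrades with p and ¬p flip the belief forever, whereas repeated announcements of a fixed formula stabilize after one step. The two-world model below is a hypothetical illustration.

```python
# ⇑phi on ranks, as before
def radical(rank, phi):
    off = max(rank.values()) + 1
    return {w: rank[w] if phi(w) else rank[w] + off for w in rank}

rank = {'u': 0, 'v': 1}
p = lambda w: w == 'u'

seen = []
for i in range(4):
    phi = p if i % 2 == 0 else (lambda w: not p(w))   # ⇑p, ⇑¬p, ⇑p, ⇑¬p
    rank = radical(rank, phi)
    seen.append(min(rank, key=rank.get))              # current most plausible world

assert seen == ['u', 'v', 'u', 'v']   # beliefs oscillate, no fixed point
```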
[Diagrams: successive upgrades of a three-world model w1, w2, w3 (labeled r, n, d) with a formula combining r, d, and Bd.]
Let ϕ be (r ∨ (B^¬r q ∧ p) ∨ (B^¬r p ∧ q)).

[Diagrams: models M1, M2, M3 over worlds w1 ⊨ r, w2 ⊨ q, w3 ⊨ p, with [[ϕ]] changing at each stage so that iterated ⇑ϕ-upgrades cycle.]
Suppose that you are in the forest and happen to see a strange-looking animal. You consult your animal guidebook and find a picture that seems to match the animal you see. The guidebook says that the animal is a type of bird, so that is what you conclude: the animal before you is a bird. After looking more closely, you notice that the animal is also red. So, you update your beliefs with that fact as well. Now, suppose that an expert (whom you trust) happens to walk by and tells you that the animal is, in fact, not a bird.
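The sequence can be replayed in the rank sketch. Modeling each step as a conservative upgrade is an assumption about how the slides' updates are meant (the expert is "trusted", which earlier matched ↑); the four-world model itself is the obvious one over the atoms b (bird) and r (red).

```python
# four worlds over {b, r}; a world's name lists the atoms true at it
worlds = {'br', 'b', 'r', ''}
rank = {w: 0 for w in worlds}   # initially all equi-plausible

def conservative(rank, phi):
    # ↑phi: best phi-worlds to the top, everything else keeps its order
    best_rank = min(rank[w] for w in rank if phi(w))
    best = {w for w in rank if phi(w) and rank[w] == best_rank}
    return {w: 0 if w in best else rank[w] + 1 for w in rank}

b = lambda w: 'b' in w
r = lambda w: 'r' in w

for phi in (b, r, lambda w: not b(w)):   # guidebook: b; a closer look: r; expert: ¬b
    rank = conservative(rank, phi)

most_plausible = {w for w in rank if rank[w] == min(rank.values())}
# the belief in r is gone: a ¬r-world sits among the most plausible worlds
assert any(not r(w) for w in most_plausible)
```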
[Diagram: the sequence M0 → M1 → M2 → M3 over the four worlds b ∧ r, b ∧ ¬r, ¬b ∧ r, ¬b ∧ ¬r, updating with b, then r, then ¬b.]
Note that in the last model, M3, the agent does not believe that the bird is red. The problem is that there does not seem to be any justification for the agent dropping her belief that the bird is red. This seems to result from the accidental fact that the agent started by updating with the information that the animal is a bird. In particular, note that the following sequence of updates is not problematic:
[Diagram: the alternative sequence M0 → M1 → M2 → M3 over the same four worlds, updating with r, then b, then ¬b.]
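Running the same sketch with the updates reordered shows the asymmetry: now the belief that the animal is red survives the expert's correction. As before, conservative upgrades are an assumption.

```python
# same four-world sketch, unproblematic order: r first, then b, then ¬b
worlds = {'br', 'b', 'r', ''}   # a world's name lists the atoms true at it
rank = {w: 0 for w in worlds}

def conservative(rank, phi):
    best_rank = min(rank[w] for w in rank if phi(w))
    best = {w for w in rank if phi(w) and rank[w] == best_rank}
    return {w: 0 if w in best else rank[w] + 1 for w in rank}

b = lambda w: 'b' in w
r = lambda w: 'r' in w

for phi in (r, b, lambda w: not b(w)):
    rank = conservative(rank, phi)

most_plausible = {w for w in rank if rank[w] == min(rank.values())}
assert most_plausible == {'r'}   # the agent believes: red, and not a bird
```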
[Diagram: a protocol tree t0–t5; from t0, edges labeled b, r, and ¬(b ∧ r) lead to t1, t2, t3; below, edges labeled r and b lead to t4 and t5.]
States: UUU, UUD, UDU, UDD, DDD, DDU, DUD, DUU

Three switches are wired so that a light is on iff all three switches are up or all three are down. Three independent (reliable) observers report on the switches: Alice says switch 1 is U, Bob says switch 2 is D, and Carla says switch 3 is U. I receive the information that the light is on. What should I believe? Cautious: UUU, DDD; Bold: UUU.
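The cautious and bold answers fall out of a simple enumeration. The ranking is an assumption: each state is penalized by the number of observer reports it contradicts.

```python
from itertools import product

# the eight switch states, ranked by how many reports they contradict:
# Alice: sw1 = U, Bob: sw2 = D, Carla: sw3 = U
states = {''.join(s) for s in product('UD', repeat=3)}
rank = {s: (s[0] != 'U') + (s[1] != 'D') + (s[2] != 'U') for s in states}

light_on = {s for s in states if s in ('UUU', 'DDD')}

cautious = light_on   # keep every state compatible with the hard information
bold = {s for s in light_on if rank[s] == min(rank[t] for t in light_on)}

assert cautious == {'UUU', 'DDD'}
assert bold == {'UUU'}   # UUU contradicts only Bob; DDD contradicts Alice and Carla
```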
States: UUU, UUD, UDU, UDD, DDD, DDU, DUD, DUU

Suppose there are two lights: L1 is the main light and L2 is a secondary light controlled by the first two switches. (So L1 → L2, but not the converse.) Suppose I receive L1 → L2; this does not change the story. Suppose I learn ¬L2. This is irrelevant to Carla's report, but it means either Alice or Bob is wrong. Now, after learning L1, the only rational thing to believe is that all three switches are up.
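The refined story can also be replayed in the rank sketch. The modeling choices here are assumptions: reports are scored as before, learning ¬L2 from a strongly trusted source is a radical upgrade, and learning L1 is hard information.

```python
from itertools import product

# states ranked by violated reports: Alice sw1=U, Bob sw2=D, Carla sw3=U
states = {''.join(s) for s in product('UD', repeat=3)}
rank = {s: (s[0] != 'U') + (s[1] != 'D') + (s[2] != 'U') for s in states}

L1 = lambda s: s in ('UUU', 'DDD')   # main light: all three switches agree
L2 = lambda s: s[0] == s[1]          # secondary light: first two agree

# radical upgrade with ¬L2: the ¬L2-worlds become strictly more plausible
off = max(rank.values()) + 1
rank = {s: rank[s] if not L2(s) else rank[s] + off for s in states}

# hard update with L1, then take the most plausible survivor
survivors = {s for s in states if L1(s)}
best = min(survivors, key=lambda s: rank[s])
assert best == 'UUU'   # DDD would make both Alice and Carla wrong
```

Within the demoted L2-zone the report-based order survives the upgrade, which is why the ¬L2 episode tips the final belief to UUU rather than leaving UUU and DDD tied.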
Many of the recent developments in this area have been driven by analyzing concrete examples. This raises an important methodological issue: implicit assumptions about what the actors know and believe about the situation being modeled often guide the analyst's intuitions. In many cases, it is crucial to make these underlying assumptions explicit. The general point is that how the agent(s) come to know or believe that some proposition p is true is as important (or perhaps more important) than the fact that the agent(s) know or believe that p is the case.
Meta-information: information about how trusted or reliable the sources of the information are. This is particularly important when analyzing how an agent's beliefs change over an extended period of time. For example, rather than taking a stream of contradictory incoming evidence (i.e., the agent receives the information that p, then the information that q, then the information that p, then the information that q) at face value (and performing the suggested belief revisions), a rational agent may consider the stream itself as evidence that the source is not reliable.

Procedural information: information about the underlying protocol specifying which events (observations, messages, actions) are available (or permitted) at any given moment. A protocol describes what the agents can or cannot do (say, observe) in a social-interactive situation or rational inquiry.
Discussion

A key aspect of any formal model of a (social) interactive situation or situation of rational inquiry is the way it accounts for the "...information about how I learn some of the things I learn, about the sources of my information, or about what I believe about what I believe and don't believe. If the story we tell in an example makes certain information about any of these things relevant, then it needs to be included in a proper model of the story, if it is to play the right role in the evaluation of the abstract principles of the model." (Stalnaker, pg. 203)

R. Stalnaker. Iterated Belief Revision. Erkenntnis 70.
More informationAn Introduction to Bayesian Reasoning
Tilburg Center for Logic and Philosophy of Science (TiLPS) Tilburg University, The Netherlands EPS Seminar, TiLPS, 9 October 2013 Overview of the Tutorial This tutorial aims at giving you an idea of why
More informationEpistemic Informativeness
Epistemic Informativeness Yanjing Wang, Jie Fan Department of Philosophy, Peking University 2nd AWPL, Apr. 12th, 2014 Motivation Epistemic Informativeness Conclusions and future work Frege s puzzle on
More informationA Unifying Semantics for Belief Change
A Unifying Semantics for Belief Change C0300 Abstract. Many belief change formalisms employ plausibility orderings over the set of possible worlds to determine how the beliefs of an agent ought to be modified
More informationBelief revision: A vade-mecum
Belief revision: A vade-mecum Peter Gärdenfors Lund University Cognitive Science, Kungshuset, Lundagård, S 223 50 LUND, Sweden Abstract. This paper contains a brief survey of the area of belief revision
More informationDialogical Logic. 1 Introduction. 2.2 Procedural Rules. 2.3 Winning. 2 Organization. 2.1 Particle Rules. 3 Examples. Formula Attack Defense
1 Introduction Dialogical Logic Jesse Alama May 19, 2009 Dialogue games are one of the earliest examples of games in logic. They were introduced by Lorenzen [1] in the 1950s; since then, major players
More informationUnderstanding the Brandenburger-Keisler Belief Paradox
Understanding the Brandenburger-Keisler Belief Paradox Eric Pacuit Institute of Logic, Language and Information University of Amsterdam epacuit@staff.science.uva.nl staff.science.uva.nl/ epacuit March
More informationLecture 11: Topics in Formal Epistemology
Lecture 11: Topics in Formal Epistemology Eric Pacuit ILLC, University of Amsterdam staff.science.uva.nl/ epacuit epacuit@science.uva.nl Lecture Date: May 4, 2006 Caput Logic, Language and Information:
More informationPreference and its Dynamics
Department of Philosophy,Tsinghua University 28 August, 2012, EASLLC Table of contents 1 Introduction 2 Betterness model and dynamics 3 Priorities and dynamics 4 Relating betterness and priority dynamics
More informationFiltrations and Basic Proof Theory Notes for Lecture 5
Filtrations and Basic Proof Theory Notes for Lecture 5 Eric Pacuit March 13, 2012 1 Filtration Let M = W, R, V be a Kripke model. Suppose that Σ is a set of formulas closed under subformulas. We write
More informationUpdate As Evidence: Belief Expansion
Update As Evidence: Belief Expansion Roman Kuznets and Thomas Studer Institut für Informatik und angewandte Mathematik Universität Bern {kuznets, tstuder}@iam.unibe.ch http://www.iam.unibe.ch/ltg Abstract.
More informationCOMP219: Artificial Intelligence. Lecture 19: Logic for KR
COMP219: Artificial Intelligence Lecture 19: Logic for KR 1 Overview Last time Expert Systems and Ontologies Today Logic as a knowledge representation scheme Propositional Logic Syntax Semantics Proof
More informationDiscrete Mathematics for CS Fall 2003 Wagner Lecture 3. Strong induction
CS 70 Discrete Mathematics for CS Fall 2003 Wagner Lecture 3 This lecture covers further variants of induction, including strong induction and the closely related wellordering axiom. We then apply these
More informationDr. Truthlove, or How I Learned to Stop Worrying and Love Bayesian Probabilities
Dr. Truthlove, or How I Learned to Stop Worrying and Love Bayesian Probabilities Kenny Easwaran 10/15/2014 1 Setup 1.1 The Preface Paradox Dr. Truthlove loves believing things that are true, and hates
More informationCOMP219: Artificial Intelligence. Lecture 19: Logic for KR
COMP219: Artificial Intelligence Lecture 19: Logic for KR 1 Overview Last time Expert Systems and Ontologies Today Logic as a knowledge representation scheme Propositional Logic Syntax Semantics Proof
More informationModels of Strategic Reasoning Lecture 2
Models of Strategic Reasoning Lecture 2 Eric Pacuit University of Maryland, College Park ai.stanford.edu/~epacuit August 7, 2012 Eric Pacuit: Models of Strategic Reasoning 1/30 Lecture 1: Introduction,
More informationEPISTEMIC LOGIC AND INFORMATION UPDATE
EPISTEMIC LOGIC AND INFORMATION UPDATE A. Baltag, H. P. van Ditmarsch, and L. S. Moss 1 PROLOGUE Epistemic logic investigates what agents know or believe about certain factual descriptions of the world,
More informationTowards A Multi-Agent Subset Space Logic
Towards A Multi-Agent Subset Space Logic A Constructive Approach with Applications Department of Computer Science The Graduate Center of the City University of New York cbaskent@gc.cuny.edu www.canbaskent.net
More informationPart Six: Reasoning Defeasibly About the World
Part Six: Reasoning Defeasibly About the World Our in building a defeasible reasoner was to have an inference-engine for a rational agent capable of getting around in the real world. This requires it to
More informationPaul D. Thorn. HHU Düsseldorf, DCLPS, DFG SPP 1516
Paul D. Thorn HHU Düsseldorf, DCLPS, DFG SPP 1516 High rational personal probability (0.5 < r < 1) is a necessary condition for rational belief. Degree of probability is not generally preserved when one
More informationManipulating Games by Sharing Information
John Grant Sarit Kraus Michael Wooldridge Inon Zuckerman Manipulating Games by Sharing Information Abstract. We address the issue of manipulating games through communication. In the specific setting we
More informationReasoning with Inconsistent and Uncertain Ontologies
Reasoning with Inconsistent and Uncertain Ontologies Guilin Qi Southeast University China gqi@seu.edu.cn Reasoning Web 2012 September 05, 2012 Outline Probabilistic logic vs possibilistic logic Probabilistic
More informationJustified Belief and the Topology of Evidence
Justified Belief and the Topology of Evidence Alexandru Baltag 1, Nick Bezhanishvili 1, Aybüke Özgün 1,2, Sonja Smets 1 1 University of Amsterdam, The Netherlands 2 LORIA, CNRS - Université de Lorraine,
More informationRobust Knowledge and Rationality
Robust Knowledge and Rationality Sergei Artemov The CUNY Graduate Center 365 Fifth Avenue, 4319 New York City, NY 10016, USA sartemov@gc.cuny.edu November 22, 2010 Abstract In 1995, Aumann proved that
More informationIn Defence of a Naïve Conditional Epistemology
In Defence of a Naïve Conditional Epistemology Andrew Bacon 28th June 2013 1 The data You pick a card at random from a standard deck of cards. How confident should I be about asserting the following sentences?
More informationTowards Symbolic Factual Change in Dynamic Epistemic Logic
Towards Symbolic Factual Change in Dynamic Epistemic Logic Malvin Gattinger ILLC, Amsterdam July 18th 2017 ESSLLI Student Session Toulouse Are there more red or more blue points? Are there more red or
More informationMulti-agent belief dynamics: bridges between dynamic doxastic and doxastic temporal logics
Multi-agent belief dynamics: bridges between dynamic doxastic and doxastic temporal logics Johan van Benthem ILLC Amsterdam Stanford University johan@science.uva.nl Cédric Dégremont ILLC Amsterdam cdegremo@science.uva.nl
More informationFormal Epistemology: Lecture Notes. Horacio Arló-Costa Carnegie Mellon University
Formal Epistemology: Lecture Notes Horacio Arló-Costa Carnegie Mellon University hcosta@andrew.cmu.edu Bayesian Epistemology Radical probabilism doesn t insists that probabilities be based on certainties;
More informationAn Egalitarist Fusion of Incommensurable Ranked Belief Bases under Constraints
An Egalitarist Fusion of Incommensurable Ranked Belief Bases under Constraints Salem Benferhat and Sylvain Lagrue and Julien Rossit CRIL - Université d Artois Faculté des Sciences Jean Perrin Rue Jean
More informationMaximal Introspection of Agents
Electronic Notes in Theoretical Computer Science 70 No. 5 (2002) URL: http://www.elsevier.nl/locate/entcs/volume70.html 16 pages Maximal Introspection of Agents Thomas 1 Informatics and Mathematical Modelling
More informationEvidence with Uncertain Likelihoods
Evidence with Uncertain Likelihoods Joseph Y. Halpern Cornell University Ithaca, NY 14853 USA halpern@cs.cornell.edu Riccardo Pucella Cornell University Ithaca, NY 14853 USA riccardo@cs.cornell.edu Abstract
More informationAmbiguous Language and Differences in Beliefs
Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning Ambiguous Language and Differences in Beliefs Joseph Y. Halpern Computer Science Dept. Cornell
More informationDescription Logics. Foundations of Propositional Logic. franconi. Enrico Franconi
(1/27) Description Logics Foundations of Propositional Logic Enrico Franconi franconi@cs.man.ac.uk http://www.cs.man.ac.uk/ franconi Department of Computer Science, University of Manchester (2/27) Knowledge
More informationArbitrary Announcements in Propositional Belief Revision
Arbitrary Announcements in Propositional Belief Revision Aaron Hunter British Columbia Institute of Technology Burnaby, Canada aaron hunter@bcitca Francois Schwarzentruber ENS Rennes Bruz, France francoisschwarzentruber@ens-rennesfr
More informationAn Inquisitive Formalization of Interrogative Inquiry
An Inquisitive Formalization of Interrogative Inquiry Yacin Hamami 1 Introduction and motivation The notion of interrogative inquiry refers to the process of knowledge-seeking by questioning [5, 6]. As
More informationA Survey of Topologic
Illuminating New Directions Department of Computer Science Graduate Center, the City University of New York cbaskent@gc.cuny.edu // www.canbaskent.net/logic December 1st, 2011 - The Graduate Center Contents
More informationSOME SEMANTICS FOR A LOGICAL LANGUAGE FOR THE GAME OF DOMINOES
SOME SEMANTICS FOR A LOGICAL LANGUAGE FOR THE GAME OF DOMINOES Fernando R. Velázquez-Quesada Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas Universidad Nacional Autónoma de México
More informationINTRODUCTION TO NONMONOTONIC REASONING
Faculty of Computer Science Chair of Automata Theory INTRODUCTION TO NONMONOTONIC REASONING Anni-Yasmin Turhan Dresden, WS 2017/18 About the Course Course Material Book "Nonmonotonic Reasoning" by Grigoris
More informationModal Logics. Most applications of modal logic require a refined version of basic modal logic.
Modal Logics Most applications of modal logic require a refined version of basic modal logic. Definition. A set L of formulas of basic modal logic is called a (normal) modal logic if the following closure
More informationConfirmation Theory. Pittsburgh Summer Program 1. Center for the Philosophy of Science, University of Pittsburgh July 7, 2017
Confirmation Theory Pittsburgh Summer Program 1 Center for the Philosophy of Science, University of Pittsburgh July 7, 2017 1 Confirmation Disconfirmation 1. Sometimes, a piece of evidence, E, gives reason
More informationPLAYING WITH KNOWLEDGE AND BELIEF
PLAYING WITH KNOWLEDGE AND BELIEF Virginie Fiutek PLAYING WITH KNOWLEDGE AND BELIEF ILLC Dissertation Series DS-2013-02 For further information about ILLC-publications, please contact Institute for Logic,
More information