Belief Revision and Truth-Finding


Belief Revision and Truth-Finding Kevin T. Kelly Department of Philosophy Carnegie Mellon University kk3n@andrew.cmu.edu

Further Reading (with O. Schulte and V. Hendricks)
Reliable Belief Revision, in Logic and Scientific Methods, Dordrecht: Kluwer, 1997.
The Learning Power of Iterated Belief Revision, in Proceedings of the Seventh TARK Conference, 1998.
Iterated Belief Revision, Reliability, and Inductive Amnesia, Erkenntnis 50, 1998.

The Idea
Belief revision theory... rational belief change.
Learning theory... reliable belief change.
Conflict? Truth.

Part I Iterated Belief Revision

Bayesian (Vanilla) Updating
Propositional epistemic state B; new evidence E. The new belief state is the intersection B with E.
Perfect memory; no inductive leaps.

Epistemic Hell (a.k.a. Nirvana)
Surprise! The evidence E is disjoint from B, so intersection yields the empty belief state: epistemic hell.
At stake: scientific revolutions, suppositional reasoning, conditional pragmatics, decision theory, game theory, databases.

Ordinal Epistemic States (Spohn 88)
Ordinal-valued degrees of implausibility; the belief state b(S) is the bottom level (rank 0). Levels may run 0, 1, 2, ..., ω, ω + 1, ...
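As a concrete sketch (an assumption of this note, not the slides' own formalism), an epistemic state can be modeled with finitely many worlds and natural-number ranks; the world names are illustrative:

```python
# Toy Spohn-style epistemic state: a map from worlds to natural-number
# degrees of implausibility. The belief state b(S) is the bottom level.

def belief(rank):
    """b(S): the set of minimally implausible (most plausible) worlds."""
    bottom = min(rank.values())
    return {w for w, r in rank.items() if r == bottom}

# Illustrative state: w1 and w2 are believed possible; w3 and w4 are not.
S = {"w1": 0, "w2": 0, "w3": 1, "w4": 2}
print(belief(S))
```

The update rules below are all sketched against this same toy representation.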

Iterated Belief Revision
The operator * takes the current epistemic state and an input proposition to a new state: S_1 = S_0 * E_0, S_2 = S_1 * E_1, S_3 = S_2 * E_2, ...
The epistemic state trajectory S_0, S_1, S_2, S_3, ... induces the belief state trajectory b(S_0), b(S_1), b(S_2), b(S_3), ...

Generalized Conditioning *C (Spohn 88)
Condition the entire epistemic state: S *C E.
Perfect memory; inductive leaps; no epistemic hell if the evidence sequence is consistent.
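In the toy finite-world sketch (an assumption; Spohn's states are ordinal-valued), conditioning discards the refuted worlds and shifts the survivors rigidly down to the bottom:

```python
def conditioning(rank, E):
    """*C: condition the entire epistemic state on evidence E.
    Refuted worlds are discarded outright (hence perfect memory: they
    can never return); survivors shift down so the best sits at rank 0."""
    survivors = {w: r for w, r in rank.items() if w in E}
    if not survivors:
        # Epistemic hell: E is inconsistent with the whole state.
        raise ValueError("evidence inconsistent with the epistemic state")
    low = min(survivors.values())
    return {w: r - low for w, r in survivors.items()}

S = {"w1": 0, "w2": 1, "w3": 2}
print(conditioning(S, {"w2", "w3"}))
```

Discarding worlds outright is what makes hell possible on inconsistent evidence sequences, as the slide notes.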

Lexicographic Updating *L (Spohn 88, Nayak 94)
Lift refuted possibilities above non-refuted possibilities, preserving order: S *L E.
Perfect memory on consistent data sequences; inductive leaps; no epistemic hell.
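A sketch of the lexicographic rule in the same toy model (assumed, not from the slides): every E-world becomes strictly more plausible than every refuted world, with the original order kept inside each group.

```python
def lexicographic(rank, E):
    """*L: lift refuted (non-E) worlds above all non-refuted worlds,
    preserving the original ranking within each group."""
    kept = {w: r for w, r in rank.items() if w in E}
    refuted = {w: r for w, r in rank.items() if w not in E}
    low = min(kept.values())
    new = {w: r - low for w, r in kept.items()}   # E-worlds drop rigidly
    if refuted:
        top = max(new.values()) + 1               # dump starts above them
        low_r = min(refuted.values())
        new.update({w: top + (r - low_r) for w, r in refuted.items()})
    return new

S = {"w1": 0, "w2": 1, "w3": 2}
print(lexicographic(S, {"w2", "w3"}))
```

Because refuted worlds are kept (just lifted), no epistemic hell arises even on inconsistent sequences.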

Minimal or Natural Updating *M (Spohn 88, Boutilier 93)
Drop the lowest possibilities consistent with the data to the bottom and raise everything else up one notch: S *M E.
Inductive leaps; no epistemic hell.
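The natural rule moves as little as possible; a sketch in the toy model (assumed representation):

```python
def minimal(rank, E):
    """*M: only the most plausible E-worlds drop to the bottom;
    every other world is pushed up one notch."""
    low = min(r for w, r in rank.items() if w in E)
    best = {w for w, r in rank.items() if w in E and r == low}
    return {w: (0 if w in best else r + 1) for w, r in rank.items()}

S = {"w1": 0, "w2": 1, "w3": 1, "w4": 2}
print(minimal(S, {"w2", "w3", "w4"}))
```

Note how w4, though consistent with the data, still rises: only the minimal E-worlds are favored.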

The Flush-to-α Method *F,α (Goldszmidt and Pearl 94)
Send the non-E worlds to level α (the boost parameter) and drop the E-worlds rigidly to the bottom: S *F,α E.
Perfect memory on sequentially consistent data if α is high enough; inductive leaps; no epistemic hell.
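A sketch of flushing in the toy model, assuming (as the picture suggests) that all refuted worlds land exactly at level α:

```python
def flush(rank, E, alpha):
    """*F,alpha: refuted (non-E) worlds are all sent to level alpha;
    E-worlds drop rigidly (order-preservingly) to the bottom."""
    low = min(r for w, r in rank.items() if w in E)
    return {w: (r - low if w in E else alpha) for w, r in rank.items()}

S = {"w1": 0, "w2": 1, "w3": 2}
print(flush(S, {"w2", "w3"}, alpha=5))
```

Memory depends on α dominating the heights reached by surviving worlds, which is why the slide demands that α be high enough.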

Ordinal Jeffrey Conditioning *J,α (Spohn 88)
Drop the E-worlds to the bottom; drop the non-E worlds to the bottom and then jack them up to level α: S *J,α E.
Perfect memory on consistent sequences if α is large enough; no epistemic hell. But...
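A sketch in the toy model (assumed representation). The example is chosen to exhibit the backsliding the next slide names: a refuted world can end up more plausible than before.

```python
def jeffrey(rank, E, alpha):
    """*J,alpha: E-worlds drop to the bottom; non-E worlds are also
    dropped to the bottom and then jacked up to level alpha."""
    low = min(r for w, r in rank.items() if w in E)
    new = {w: r - low for w, r in rank.items() if w in E}
    refuted = {w: r for w, r in rank.items() if w not in E}
    if refuted:
        low_r = min(refuted.values())
        new.update({w: (r - low_r) + alpha for w, r in refuted.items()})
    return new

# Backsliding: the refuted world w2 falls from level 4 to level 1.
S = {"w1": 0, "w2": 4}
print(jeffrey(S, {"w1"}, alpha=1))
```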

Empirical Backsliding
Ordinal Jeffrey conditioning can increase the plausibility of a refuted possibility.

The Ratchet Method *R,α (Darwiche and Pearl 97)
Like ordinal Jeffrey conditioning, except refuted possibilities move up by α from their current positions (β goes to β + α): S *R,α E.
Perfect memory if α is large enough; inductive leaps; no epistemic hell.
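A sketch in the toy model (assumed; in particular, that E-worlds drop rigidly as in the Jeffrey picture). Contrast with the Jeffrey example: here the refuted world can only move up.

```python
def ratchet(rank, E, alpha):
    """*R,alpha: like ordinal Jeffrey conditioning, except each refuted
    world moves UP by alpha from its current position (beta -> beta+alpha),
    so a refuted possibility never gains plausibility (no backsliding)."""
    low = min(r for w, r in rank.items() if w in E)
    return {w: (r - low if w in E else r + alpha) for w, r in rank.items()}

S = {"w1": 0, "w2": 4}
print(ratchet(S, {"w1"}, alpha=1))
```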

Part II Properties of the Methods

Timidity and Stubbornness
Timidity: no inductive leaps without refutation.
Stubbornness: no retractions without refutation.
Examples: all the above. Nutty!

Local Consistency
The new belief state must be consistent with the current (consistent) datum.
Examples: all the above.

Positive Order-invariance
Preserve the original ranking inside the conjunction of the data.
Examples: *C, *L, *R,α, *J,α.

Data-Precedence
Each world satisfying all the data takes precedence over each world failing to satisfy some datum.
Examples: *C, *L, *R,α, *J,α, if α is above S.

Enumerate and Test
Enumerate-and-test: locally consistent, positively order-invariant, and data-precedent.
Examples: *C, *L, *R,α, *J,α, if α is above S.
Picture: an epistemic dump for refuted possibilities above a preserved implausibility structure.
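The enumerate-and-test idea can be illustrated by the crudest learner built from an enumeration of a countable class K: conjecture the first stream consistent with the data so far. The class K below (streams of 1s switching once to 0s, truncated to length 8) is an illustrative assumption, not from the slides.

```python
def enumerate_and_test(K):
    """Learner induced by an enumeration of K: after each datum,
    conjecture the first enumerated stream consistent with the data."""
    def conjecture(data):
        for e in K:
            if tuple(e[:len(data)]) == tuple(data):
                return e
        return None  # the data fall outside K
    return conjecture

# K: streams that show n ones and then zeros forever (truncated to 8).
K = [tuple([1] * n + [0] * (8 - n)) for n in range(9)]
learn = enumerate_and_test(K)
truth = K[3]
# After finitely many data, the conjecture locks on to the truth.
print([learn(truth[:n]) == truth for n in range(9)])
```

This is the shape of the completeness argument: on a countable K, such a learner stabilizes to the true stream after finitely many stages.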

Part III Belief Revision as Learning

A Very Simple Learning Paradigm
A mysterious system emits a data trajectory; the learner revises as each datum arrives.

Possible Outcome Trajectories
The possible data trajectories e, indexed by stage n.

Finding the Truth
(*, S_0) identifies e iff for all but finitely many n,
b(S_0 * ([0, e(0)], ..., [n, e(n)])) = {e}.
Convergence to completely true belief.

Reliability is No Accident
Let K be a range of possible outcome trajectories.
(*, S_0) identifies K iff (*, S_0) identifies each e in K.
Fact: K is identifiable iff K is countable.

Completeness
* is complete iff for each identifiable K there is an S_0 such that K is identifiable by (*, S_0). Otherwise * is restrictive.

Completeness
Proposition: If * enumerates and tests, then * is complete.
Proof sketch: enumerate K and choose an arbitrary e in K. By data precedence and positive invariance, worlds consistent with the data keep precedence over refuted ones; by local consistency and data precedence, the method converges to {e}.

Amnesia
Without data precedence, memory can fail. Same example, using *J,1: the datum E is forgotten.

Duality
Conjectures-and-refutations: predicts, but may forget.
Tabula rasa: remembers, but doesn't predict.

Rationally Imposed Tension
Compression for memory vs. rarefaction for inductive leaps. Can both be accommodated?

Inductive Amnesia
Compression for memory vs. rarefaction for inductive leaps. Bang! Restrictiveness: no possible initial state resolves the pressure.

Question Which methods are guilty? Are some worse than others?

Part IV: The Goodman Hierarchy

The Grue Operation (Nelson Goodman)
Reverse the trajectory e after stage n to obtain a grued variant.

Grue Complexity Hierarchy
G_0(e) ⊆ G_1(e) ⊆ G_2(e) ⊆ G_3(e) ⊆ G_4(e) ⊆ ... ⊆ G_ω(e), with the even sub-hierarchy G_0^even(e) ⊆ G_1^even(e) ⊆ G_2^even(e) ⊆ ... ⊆ G_ω^even(e); the hierarchy is generated from finite variants of e and of its reversal.

Classification: even grues (columns: Min, Flush, Jeffrey, Ratchet)
G_ω^even(e): no | α = ω | α = 1 | α = 1
G_n^even(e): no | α = n + 1 | α = 1 | α = 1
The lower levels G_2^even(e), G_1^even(e), G_0^even(e) succeed with correspondingly smaller boosts; minimal updating fails throughout.

Hamming Algebra
a ≤_H b (mod e) iff a differs from e only where b does. (Picture: the Boolean cube of difference sets under inclusion.)
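The Hamming order can be checked directly on truncated streams; a small sketch (the streams and helper names are illustrative):

```python
def diff(a, e):
    """The set of stages at which stream a differs from stream e."""
    return {i for i, (x, y) in enumerate(zip(a, e)) if x != y}

def hamming_below(a, b, e):
    """a <=_H b (mod e): a differs from e only where b does."""
    return diff(a, e) <= diff(b, e)

e = (0, 0, 0)
a = (1, 0, 0)
b = (1, 0, 1)
print(hamming_below(a, b, e), hamming_below(b, a, e))
```

Ordering difference sets by inclusion is exactly what makes the Hamming algebra a Boolean cube.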

*R,1 and *J,1 can identify G_ω^even(e)
Example: learning as rigid hypercube rotation, converging with the truth at the bottom.


Classification: arbitrary grues (columns: Min, Flush, Jeffrey, Ratchet)
G_ω(e): no | α = ω | α = 2 | α = 2
G_3(e): no | α = n + 1 | α = 2 | α = 2
G_2(e): no | α = 3 | α = 2 | α = 2
G_1(e): no | α = 2 | α = 2 | α = 1
G_0(e): α = 0 | α = 0 | α = 0

*R,2 is Complete
Impose the Hamming-distance ranking on each finite-variant class C_0, C_1, C_2, C_3, C_4, ...; then raise the nth Hamming ranking by n.

*R,2 is Complete
Data streams in the same column just barely make it: they jump by 2 for each difference from the truth (1 difference, 2 differences, ...).

Classification: arbitrary grues (table as above), with the annotation at G_3(e): can't use the Hamming rank.

Wrench In the Works
Suppose *J,2 succeeds with the Hamming rank. Feed e until it is uniquely at the bottom, say by stage k (by convergent success).
Then for some later n, variants a and b sit just above e (Hamming rank and positive invariance), while e is still alone at the bottom (timidity and stubbornness). If the level were empty, things go even worse!
Refuted worlds touch bottom and get lifted by at most two, so b moves up at most one step while e is still alone (by the rule).
So b never rises above a when a is true (positive invariance). But a and b agree forever after, so they can never be separated: *J,2 either never converges in a or forgets the refutation of b.

Hamming vs. Goodman Algebras
a ≤_H b (mod e) iff a differs from e only where b does.
a ≤_G b (mod e) iff a grues e only where b does.

Epistemic States as Boolean Ranks
The Hamming and Goodman orders, viewed as ranked epistemic states over G_ω^odd(e), G_ω^even(e), and G_ω(e).

*J,2 can identify G_ω(e)
Proof: use the Goodman ranking as the initial state. Then *J,2 always believes that the observed grues are the only ones that will ever occur. Note: this is Ockham with respect to the reversal-counting problem.
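The Goodman order can be sketched by reversal counting, one reading of "a grues e only where b does" (the helper names and truncated streams are illustrative assumptions):

```python
def grue_points(a, e):
    """Stages at which a reverses against e: where a switches between
    agreeing with e and agreeing with e's pointwise complement."""
    agree = [x == y for x, y in zip(a, e)]
    return {i for i in range(1, len(agree)) if agree[i] != agree[i - 1]}

def goodman_below(a, b, e):
    """a <=_G b (mod e): a grues e only where b does."""
    return grue_points(a, e) <= grue_points(b, e)

e = (0, 0, 0, 0)
a = (0, 1, 1, 1)   # one grue, at stage 1
b = (0, 1, 1, 0)   # grues at stages 1 and 3
print(len(grue_points(a, e)), goodman_below(a, b, e))
```

Ranking streams by the number of observed grues is the Ockham, reversal-counting initial state the proof invokes.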


Methods *J,1 and *M Fail on G_1(e)
Proof: Suppose otherwise. Feed e until e is uniquely at the bottom.
By the well-ordering condition the state is well-founded above e; otherwise there would be an infinite descending chain.
Now feed the grued variant e′ forever. By stage n the picture is the same (positive order invariance; timidity and stubbornness).
At stage n + 1, e stays at the bottom (timid and stubborn), so e′ can't travel down (by the rule) and e doesn't rise (by the rule). Then e′ makes it to the bottom at least as soon as e, a contradiction.

Classification: arbitrary grues (table as above), with the annotation at G_2(e): forced backsliding.

Method *R,1 Fails on G_2(e) (with Oliver Schulte)
Proof: Suppose otherwise. Bring e uniquely to the bottom, say at stage k.
Start feeding a, the variant of e grued at k.
By some later stage k′, a is uniquely at the bottom. So between k + 1 and k′ there is a first stage j at which no finite variant of e is at the bottom.
Let c in G_2(e) be a finite variant of e that rises to level 1 at j. So c(j - 1) is not a(j - 1).
Let d be a up to j and e thereafter; then d is in G_2(e). Since d differs from e, d is at least as high as level 1 at j.
Show: c agrees with e after j.
Case j = k + 1: then c could have been chosen as e, since e is uniquely at the bottom at k.
Case j > k + 1: then c wouldn't have been at the bottom if it hadn't agreed with a (disagreed with e). So c has already used up its two grues against e.
Feed c forever after. By positive invariance, the method either never projects c or forgets the refutation of c at j - 1.

Without Well-Ordering
Infinite descending chains can help!

Summary
Belief revision constrains possible inductive strategies: no induction without contradiction (?!!).
Rationality weakens the learning power of ideal agents: prediction vs. memory.
Precise recommendations for rationalists: boosting by 2 vs. 1; backsliding vs. ratcheting; well-ordering; Hamming vs. Goodman rank.