In Defence of a Naïve Conditional Epistemology


Andrew Bacon

28th June 2013

1 The data

You pick a card at random from a standard deck of cards. How confident should I be about asserting the following sentences?

1. The selected card is an ace if it's red?
2. It's spades if it's black?
3. It's diamonds if it's an eight?

To be clear: I mean how confident should I be in the propositions I would assert by uttering these sentences. The obvious answers to these questions are 1/13, 1/2 and 1/4.

Note: triviality results don't obviously refute particular judgements like these. How do we explain this systematic connection?

2 Stalnaker's Thesis

An initially attractive thesis that explains the relevant data.

Stalnaker's Thesis: The degree of belief one should assign to a conditional sentence, "if A then B", should be identical to one's conditional degree of belief in B given A.[1]

The thesis, as formulated above (and by Stalnaker), is not careful enough about use and mention. In what follows I shall read the thesis as saying that, if p, q and r are the propositions that would be asserted by the sentences A, B and "if A then B" in a given context, then one's degree of belief in r must be identical to one's conditional degree of belief in q given p.

2.1 Stalnaker's thesis and contextualism

Enthusiasm for this theory (amongst truth-conditional theorists) was short-lived due to the triviality results. There is another, seemingly independent, reason to worry about it if you are a contextualist.

[1] If Cr is a function representing your degrees of belief, then your conditional degree of belief in B given A, Cr(B | A), is defined to be Cr(A ∧ B)/Cr(A) when Cr(A) > 0.
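The answers to 1-3 can be checked by enumerating a standard deck; a minimal sketch (the deck encoding and helper names are mine, not the handout's):

```python
from fractions import Fraction

# Enumerate a standard 52-card deck as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(r, s) for r in ranks for s in suits]

def cond_prob(consequent, antecedent):
    """Cr(B | A) for a uniformly drawn card: |A and B| / |A|."""
    a = [c for c in deck if antecedent(c)]
    ab = [c for c in a if consequent(c)]
    return Fraction(len(ab), len(a))

is_red = lambda c: c[1] in ('hearts', 'diamonds')
is_black = lambda c: c[1] in ('clubs', 'spades')

print(cond_prob(lambda c: c[0] == 'A', is_red))                    # 1/13: ace given red
print(cond_prob(lambda c: c[1] == 'spades', is_black))             # 1/2: spades given black
print(cond_prob(lambda c: c[1] == 'diamonds',
                lambda c: c[0] == '8'))                            # 1/4: diamonds given eight
```

The enumeration reproduces exactly the conditional probabilities 1/13, 1/2 and 1/4 that the naïve judgements report.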

Contextualism: (i) conditional sentences can be used to assert propositions; (ii) which proposition is asserted by a conditional sentence can depend on the context of utterance, even when the antecedent and consequent do not; and (iii) which proposition is asserted depends on some contextually salient piece of evidence or knowledge.

Of course there is a rich philosophical tradition that rejects (i). Nonetheless, if you are a truth-conditional theorist there are a number of puzzles that contextualism helps explain. Among them:

Or-to-if inferences: the apparent validity of the move from "either not A or B" to "if A then B", which leads to the collapse of the indicative conditional into the material conditional.

Gibbard cases: cases where two people are apparently able to truthfully assert "if A then B" and "if A then not B" (when A is not known to be false by any participant in the conversation).

Stalnaker's thesis, as precisified above, makes little sense if conditionals are context sensitive. The problem, in essence, is that there are many different propositions a conditional sentence can express, and only one conditional probability for the probabilities of those conditionals to be identical to. The point is that even when A and B are not context sensitive, "if A then B" can express many different propositions with potentially different probabilities, whereas the conditional probability is determined only by the propositions expressed by A and B.

2.2 Lewis's result

If C is a class of probability functions that is closed under conditioning, then either there are functions in C which do not satisfy Stalnaker's thesis, or the probability of any conditional with a non-zero-probability antecedent is the probability of its consequent.

Proof: let → be the connective expressed by an indicative conditional in some context, and suppose that Stalnaker's thesis holds throughout C. Then for any A and B and Cr ∈ C,

Cr(A → B) = Cr(A → B | B)Cr(B) + Cr(A → B | ¬B)Cr(¬B)

by probability theory. Applying Stalnaker's thesis (noting that conditioning on B and on ¬B keeps us within C), it follows that this

= Cr(B | A ∧ B)Cr(B) + Cr(B | A ∧ ¬B)Cr(¬B) = 1·Cr(B) + 0·Cr(¬B) = Cr(B).

3 Revising the theory

The conclusion: Stalnaker's thesis (or at least the above precisification of it) is false. But it wasn't particularly plausible from a contextualist perspective in the first place.

What to make of these problems? One radical response, often made in connection with the triviality results, is to take these highly theoretical arguments to undermine the original probability judgments 1-3. To my mind this response is excessive: triviality results do not cast doubt on particular probability judgments such as those reported in 1-3. They merely refute a general theory that predicts those judgments; the judgments themselves do not imply the refuted theory. Furthermore, if the answers I listed to 1-3 are not correct, then those who make the radical response owe us an

answer to the question: what are the correct answers to these particular questions? If they are not respectively 1/13, 1/2 and 1/4, then what on earth are they?

What we want is a better theory. One that (a) predicts the probability judgments, (b) is at least compatible with contextualism about conditional statements, and finally (c) is provably consistent (over a rich set of probability functions, with a plausible logic, et cetera).

In my view the contextualist has a very natural theory that satisfies all these constraints. Note that in order to determine the probability of an utterance of a conditional we need to know the following two things:

1. In order to know what proposition is expressed by the utterance we need to know what the contextually salient evidence, E, is. What proposition is expressed depends on the evidence.

2. In order to know what the probability of that proposition is (i.e. once we've determined E) we need to know how probable it is, which depends on (a) your priors, Pr, and crucially (b) your total evidence E.

Thus two pieces of evidence play a role in this theory. The first piece of evidence determines the proposition to evaluate; the second determines how probable that proposition is.

Hypothesis: in the context described in examples 1-3 above, the two pieces of evidence (the contextually salient evidence, and the agent's evidence) are identical.

The principle that would predict this: when the contextually salient evidence is E, and the agent's total evidence is also E, the probability of the proposition expressed by "if A then B" should be the probability of the proposition expressed by B conditional on the proposition expressed by A.

The proposition expressed: to represent the proposition expressed by a conditional sentence in a context where E is salient I shall use the notation A →_E B. In the null context, where E is tautologous, I shall simply write A → B; I'll call this the ur-conditional. (Note: a substantive principle states that A →_E B := (A ∧ E) → B. This is true in one of my models, but I will not assume it in what follows.)

The probability function: I shall assume Bayesianism: your current probability gets determined by your total evidence E and your ur-priors (the credences you'd have if you had no evidence at all), by conditioning the latter on the former. Thus your current probability is just Pr(· | E).

A few consequences of our notation:

Your current conditional credence in B given A is just Pr(B | A ∧ E).

Your current credence in A →_E B is just Pr(A →_E B | E).

Thus we can state our new principle as follows:

The New Thesis (CP): For any ur-prior Pr and possible evidence E, Pr(A →_E B | E) = Pr(B | A ∧ E).

Good things about CP (The New Thesis):

It predicts the data.

It handles the context sensitivity.

It is consistent (it has a possible-worlds semantics).

Notes about the tenability result:

There are actually two distinct models. They are both based on a selection-function semantics, a slight variant of Stalnaker's semantics (see the logic below). They provide a uniform interpretation of the conditional (other tenability results can be modified to give something a bit like the new thesis, at the cost of making the structure of the space of worlds depend on the context!).

There is no restriction on the way that conditionals iterate! This distinguishes the result from many other similar results.

However, in one of the models your total evidence cannot be hypothetical (although your evidence can properly include hypothetical propositions). In this model the ur-selection function satisfies Harper's condition: f_E(A, x) = f(A ∧ E, x). In the other model arbitrary propositions can be your total evidence, although it does not satisfy Harper's condition.

3.1 How does it avoid Lewis's result?

Technically speaking there are lots of different conditionals, A →_E B, one for each piece of evidence E. The ur-conditional, A → B, avoids the result because it only satisfies Stalnaker's thesis for ur-priors, and these are not closed under conditioning. The conditionals A →_E B only satisfy the thesis for probability functions of the form Pr(· | E), and these are not closed under conditioning.

Consequence: the probability of A →_E B and the conditional probability of B on A can come apart when you acquire new evidence. However, our intuitive probability judgments don't seem to track anything this strong. When you have acquired new evidence E, we only need the conditional-probability connection with A →_E B.

4 Static Triviality Results

Lewis's result is a dynamic triviality result. It shows that the probability-of-conditional/conditional-probability link cannot be preserved throughout a class of probability functions.
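The dynamic point can be made concrete in a four-world toy model: fix the proposition expressed by A → B via a selection function, check that the probability-conditional probability link holds for the uniform ur-prior, then condition and watch it break. The worlds, prior and selection function here are my own illustration, not the models from the tenability result:

```python
from fractions import Fraction

# Four equiprobable worlds; A = {1, 2}, B = {1}.
worlds = frozenset({1, 2, 3, 4})
A, B = frozenset({1, 2}), frozenset({1})

# A selection function fixes the proposition [A -> B] once and for all:
# f[w] is the A-world selected at w (A-worlds select themselves).
f = {1: 1, 2: 2, 3: 1, 4: 2}
A_to_B = frozenset(w for w in worlds if f[w] in B)   # = {1, 3}

def pr(X, given=worlds):
    """Probability of X under a uniform prior conditioned on `given`."""
    return Fraction(len(X & given), len(given))

# For the ur-prior the link holds: Pr(A -> B) = Pr(B | A) = 1/2.
assert pr(A_to_B) == pr(B & A, A) == Fraction(1, 2)

# Conditioning on evidence E = {1, 2, 3} breaks it: the proposition
# [A -> B] stays fixed, but the two sides now disagree.
E = frozenset({1, 2, 3})
print(pr(A_to_B, E))     # 2/3
print(pr(B & A, A & E))  # 1/2
```

The proposition expressed by the conditional is unchanged by the new evidence, but the conditional probability of B on A is not, which is exactly why the thesis can only be maintained relative to a fixed body of evidence.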
A static triviality result shows that the thesis cannot hold even for a single probability function. Note that the new thesis requires the connection to hold for each conditional →_E. Hájek and Hall, building on an argument by Stalnaker, show that any conditional satisfying:

CSO: A → B, B → A ⊢ (A → C) ↔ (B → C)

cannot support CP. (In terms of selection functions this just means that if f(A, x) ∈ B and f(B, x) ∈ A then f(A, x) = f(B, x).)

CSO entails limited forms of antecedent strengthening and transitivity:

CM: A → B, A → C ⊢ (A ∧ B) → C

RT: A → B, (A ∧ B) → C ⊢ A → C

Conclusion: we must give up these principles. Here are three sources of unease people have about this.

1. Direct intuitions about the validity of CSO (and related principles).

2. The principle is validated in semantics based on the idea of closeness.

3. The feeling that CSO is part of a standard logic of conditionals.

4.1 Direct intuitions

Firstly: it's a complicated principle; no-one really has good intuitions about it.

Secondly, direct intuitions overgenerate: CSO might seem valid because it is a weakening of the transitivity of the conditional: A → B, B → C ⊢ A → C. Transitivity, however, is highly controversial: it has several putative counterexamples and it straightforwardly entails antecedent strengthening.

Thirdly, there is actually a fairly non-trivial literature giving counterexamples to CSO (Tichý, Mårtensson, Tooley, Ahmed).

Finally, there are many ways to explain away the apparent validity of CSO:

1. In the model I provide, the above principles are valid when A, B and C are restricted to categorical propositions. All the failures (at least in this model) involve iterated conditionals.

2. The principle CSO (and the others mentioned above), when formulated using the connective A →_E B, is probabilistically valid in the sense that it gets probability one for every probability function of the form Pr(· | E). That is, any utterance of CSO will express a proposition you are fully confident in. For example, CSO for the ur-conditional gets probability one relative to all ur-priors (and thus, also, relative to all rational credences).

4.2 Closeness-based semantics

This is, in my view, the strongest case for CSO. Two questions:

Should closeness play a role in the semantics of indicative conditionals? (We can remain neutral on the case of subjunctives, although my view is no in both cases.)
Can we still model indicatives using selection functions, and if so, how should we interpret the functions?
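Since transitivity entails antecedent strengthening, a three-world selection-function model suffices to see strengthening fail even on a closeness semantics; the particular model below is my own illustration, not one from the handout:

```python
# Stalnaker-style selection via closeness orders: f(A, w) is the first
# A-world in w's closeness ranking (unique, since the ranking is total).
order = {1: [1, 2, 3], 2: [2, 1, 3], 3: [3, 2, 1]}

def f(A, w):
    """The closest A-world to w."""
    return next(v for v in order[w] if v in A)

def holds(A, C, w):
    """A -> C is true at w iff the closest A-world to w is a C-world."""
    return f(A, w) in C

A, B, C = {2, 3}, {3}, {2}
print(holds(A, C, 1))      # True: the closest A-world to 1 is 2, which is in C
print(holds(A & B, C, 1))  # False: the closest (A and B)-world to 1 is 3
```

Strengthening the antecedent from A to A ∧ B shifts which world gets selected, so A → C can be true at a world while (A ∧ B) → C is false there.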

An apparent conflict: Unlike in the case of subjunctives, conditional excluded middle seems to be uncontroversially true for simple past-tense indicatives. Consider: either the coin landed heads if it was flipped, or it landed tails if it was flipped.

On a closeness interpretation of the selection function, the only way to validate CEM is to stipulate that there is always a unique world closest to the actual world. This is well known to be controversial. Lewis's semantics for counterfactuals allows there to be multiple equally close worlds, but it comes at the cost of invalidating CEM. While this may not be obviously problematic for counterfactuals, where CEM is more controversial, it is not suitable for simple past-tense indicatives.

Edgington's counterexample: a car has some unknown amount of petrol in it, which will make it run out somewhere between 0 and 100 miles. In fact it runs out at 36 miles. By the closeness assumption: if the car ran for at least 50 miles, it ran for exactly 50 miles.

How should we understand the selection function without assuming a closeness-based semantics?

For counterfactuals: f(A, x) is the world that would have obtained (instead of x) had A obtained.

For indicatives: f(A, x) is the world that obtains if A obtains.

Stalnaker makes the substantive assumption that the world that would have obtained if A had obtained is just the closest world. Another story might say: if there is more than one closest world, one might be randomly selected from among them (compare Schulz [REF]). This would have the effect of validating CEM without requiring uniqueness.

4.3 Standardness

Finally, there's the feeling that CSO is somehow part of a standard logic of conditionals, and that a principle of epistemic conservativeness means we should have good reasons to revise it. I don't have much to say about this, except that my view is that we do have good enough reasons (but I'm also inclined to deny that it's a standard logic for indicatives in the first place).
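Edgington's petrol case above can be replayed numerically; reading closeness as numerical distance between stopping points is the assumption doing all the work:

```python
# Edgington's petrol case: index worlds by the mile at which the car stops;
# the actual world is 36, and closeness is numerical distance (an assumption).
actual = 36
worlds = range(101)
at_least_50 = [m for m in worlds if m >= 50]   # "the car ran for at least 50 miles"

# The unique closest antecedent-world on this metric:
closest = min(at_least_50, key=lambda m: abs(m - actual))
print(closest)  # 50: so "if it ran at least 50 miles, it ran exactly 50" comes out true
```

Any distance-like closeness metric selects the 50-mile world, which is what makes the closeness reading of the indicative selection function look wrong here.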
The logic you have instead is just the following (with → the indicative conditional and ⊃ the material conditional):

RCN: if ⊢ ψ then ⊢ φ → ψ
RCEA: if ⊢ φ ↔ ψ then ⊢ (φ → χ) ↔ (ψ → χ)
CK: (φ → (ψ ⊃ χ)) ⊃ ((φ → ψ) ⊃ (φ → χ))
ID: φ → φ
MP: (φ → ψ) ⊃ (φ ⊃ ψ)
C1: (φ → ψ) ⊃ ((ψ → ⊥) ⊃ (φ → ⊥))
CEM: (φ → ψ) ∨ (φ → ¬ψ)
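Several of the listed principles can be sanity-checked by brute force over every centred selection function on a small frame; the sketch below (my own encoding, reading → off a selection function as in Stalnaker's semantics) verifies ID, MP and CEM:

```python
from itertools import product

# Brute-force check of ID, MP and CEM over every centred Stalnaker
# selection function on a three-world frame. This is my own sanity
# check on the listed principles, not one of the tenability models.
W = (0, 1, 2)
props = [frozenset(w for w in W if (mask >> w) & 1) for mask in range(8)]
nonempty = [A for A in props if A]
keys = [(A, w) for A in nonempty for w in W]
# Centering forces f(A, w) = w whenever w is in A; otherwise any A-world.
options = [[w] if w in A else sorted(A) for (A, w) in keys]

def arrow(f, A, B):
    """The proposition [A -> B]: worlds whose selected A-world is in B."""
    return frozenset(w for w in W if f[(A, w)] in B)

ok = True
for combo in product(*options):
    f = dict(zip(keys, combo))
    for A in nonempty:
        ok = ok and arrow(f, A, A) == frozenset(W)                 # ID
        for B in props:
            ab = arrow(f, A, B)
            # MP: if A -> B and A hold at w, so does B (by centering).
            ok = ok and all((w not in ab) or (w not in A) or (w in B) for w in W)
            # CEM: every world verifies A -> B or A -> not-B.
            ok = ok and (ab | arrow(f, A, frozenset(W) - B)) == frozenset(W)
print(ok)  # True
```

ID and CEM fall out of the selection function always picking a single A-world; MP falls out of centering. CSO, by contrast, would require the further cross-antecedent constraint glossed in section 4 above, which these models need not satisfy.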