Cognitive Science (2013) 1-29
Copyright 2013 Cognitive Science Society, Inc. All rights reserved.

Inference and Explanation in Counterfactual Reasoning

Lance J. Rips, Brian J. Edwards
Psychology Department, Northwestern University

Received 10 February 2012; received in revised form 29 June 2012; accepted 9 July 2012

Abstract

This article reports results from two studies of how people answer counterfactual questions about simple machines. Participants learned about devices that have a specific configuration of components, and they answered questions of the form "If component X had not operated [failed], would component Y have operated?" The data from these studies indicate that participants were sensitive to the way in which the antecedent state is described: whether component X "had not operated" or "had failed." Answers also depended on whether the device is deterministic or probabilistic, that is, whether X's causal parents always or only usually cause X to operate. Participants' explanations of their answers often invoked non-operation of causally prior components or unreliability of prior connections. They less often mentioned independence from these causal elements.

Keywords: Counterfactual conditionals; Bayes nets; Explanation; Reasoning

Correspondence should be sent to Lance J. Rips, Psychology Department, Northwestern University, 2029 Sheridan Road, Evanston, IL. E-mail: rips@northwestern.edu

1. Introduction

Counterfactual thoughts are of interest to psychologists because people consider hypothetical situations as part of many different mental activities: planning, decision making, and problem solving, to name just a few. Troubleshooting, for example, often means thinking about what would have happened if a particular component had failed to work. At a more abstract level, an account of counterfactual thinking could lead to a general theory of how people understand the concepts of necessity and possibility (see Williamson, 2007). A statement is necessarily true if, whatever were the case, it would still be true. For example, a statement of arithmetic is necessarily true since, no matter what was the case, the arithmetic statement would still hold. But how are we able to envision situations that have not actually occurred?

The proposals we consider here are variations on an intuitively appealing idea: We often know about the causes and effects that happen in everyday life, and so we may be able to use this causal knowledge to simulate how things would turn out in hypothetical contexts. Although this appeal to causation probably does not help us with all counterfactuals, it may work with sufficiently many of them to provide a useful framework. To go beyond this vague idea, however, we need to specify the representations people use for causal systems and the way they alter them when they try to imagine counterfactual states. One of the big advances in Judea Pearl's (2000) book Causality is that it gives a clear proposal about causal representations that can handle counterfactual queries. Although the original proposal was a computational and statistical model, it also has implications for cognitive theories. This article compares rival Bayes-net models of counterfactual thinking against the data from experiments in which we ask people to consider possible states of simple mechanical devices.

1.1. Counterfactual asymmetry

In considering counterfactuals, we typically imagine that the event in the if-clause or antecedent sets in motion later changes that may depart substantially from those of the actual world. If Gregory had gone to Beta College instead of Alpha College, he might have majored in biology rather than astronomy, he might have received mostly B's rather than mostly A's, and he might have met Beth rather than Allie as his future partner. However, we usually take events prior to that of the antecedent to be roughly the same as in the actual world. Gregory's counterfactual college attendance would seem to leave him with the same parents he actually had, the same early childhood experiences, perhaps the same SAT scores, and so on. Counterfactuals are thus temporally asymmetric: Events in the counterfactual world prior to the antecedent would be more similar than would subsequent events to those of the actual world (e.g., Edgington, 2004; Lewis, 1979). This asymmetry may be related to the finding that people have an easier time imagining counterfactual alternatives to later events in a sequence than to earlier events (Miller & Gunasegaram, 1990). The earlier events seem fixed, the later events more mutable, at least when no further relations constrain them (see Byrne, 2005; Kahneman & Miller, 1986; Parker & Tetlock, 2006).

However, this temporal asymmetry is not complete. In a deterministic universe, if all the events prior to the antecedent event were the same as in the actual world, things would have happened exactly as they did. For example, if all events leading up to Gregory's attending college were the same as in the actual world, then he would have gone to Alpha College, just as he did in fact. Some events must change before the antecedent to initiate the counterfactual state. We certainly do not imagine the counterfactual situation as one in which Gregory decides on Alpha College, boards a plane or train for Alpha, but suddenly finds himself at Beta instead. The counterfactual course of events must diverge from the actual course sometime before the antecedent event and provide a not-too-abrupt transition to that event (as Bennett, 2003, has argued). The theories we examine in this article differ on how this transition takes place.

1.2. Bayes nets as theories of counterfactual conditionals

The theories we discuss here share a common background, and as an example of how they work, consider a simple machine with just four components, which we label A, B, C, and D, as shown at the top of Fig. 1. The arrows in this diagram represent direct causal connections: Component A, when it is operating, always causes both B and C to operate, and B and C always cause D to operate (and they do so separately). The nodes of the diagram represent the states of the components, and for the examples we will be describing here, the components have just two states: on or off. We will indicate the state of a component by using, for example, A = 1 to mean that component A is on, and A = 0 to mean that A is off. So according to Fig. 1a, all four components are currently operating.

Fig. 1. (a) A four-component device in which component A always causes components B and C to operate, and B and C each always causes component D to operate. Arrows stand for direct causal pathways, and nodes for components (with 1 indicating that the component is operating and 0 that it is not operating). (b) The counterfactual state in which B had not operated, according to pruning theory. (c) The counterfactual state in which B had not operated, according to minimal-network theory.

1.2.1. Pruning theory

One Bayes-net proposal for counterfactuals comes directly from Pearl (2000), and we will refer to it as pruning theory. To deal with a counterfactual, such as "If component B had not operated, would component D have operated?", pruning theory first updates the Bayes net to indicate the present state of the device. Since all components in Fig. 1a are currently working, the values of the components are set to 1, as they were before. We then focus on the antecedent of the counterfactual, "if component B had not operated."

To simulate this possibility, we pretend that the usual cause of B's operation is no longer controlling B, as if someone pruned the connection into B. We then directly intervene on B to make it stop, and we observe the effect of this intervention on the rest of the system. In the resulting state, shown in Fig. 1b, turning off B will not change the state of D. Component A still operates, and it will cause component C to operate, according to the principles that govern the device, mentioned earlier. Component C will in turn keep D operating. The answer to the counterfactual question, then, is that if B had not operated, D would still have operated. (For other ways to simulate interventions within a Bayes-net system, see Waldmann, Cheng, Hagmayer, & Blaisdell, 2008.)

Why prune the model in this way? One intuition is that in imagining a counterfactual state, we have control of how the antecedent comes about. The state no longer depends solely on the antecedent's usual causes. As mentioned earlier, in a deterministic system such as that of Fig. 1, something needs to change prior to B to prevent B from operating. But we are free to imagine B's being off by making the change at the last possible moment, just before B occurs, by severing incoming connections to B.

1.2.2. Minimal-network theory

Disagreement might exist, though, about whether the sort of change that pruning theory envisions is the most reasonable one. A second Bayes-net proposal from Hiddleston (2005) has a different way of determining minimal changes, and we will call this proposal minimal-network theory. Minimal-network theory, like pruning theory, proposes changes causally upstream to make the counterfactual's antecedent true. However, in doing so, the theory keeps in place the causal principles that govern the device. In particular, it avoids pruning causal connections that are an inherent part of how the system works. For example, if the device description includes the fact that component A's operating always causes component B to operate, then this principle must also hold in the counterfactual state of affairs. Pruning causally necessary connections would produce a system with a different causal structure from the original, so minimal-network theory avoids such changes.

In the case of the Fig. 1 example, only one state of the device exists that is both legal under the original description of how the device works (as given earlier) and in which the antecedent, "B had not operated," is true. (By "legal," we mean that the counterfactual state preserves all deterministic connections between components.) This state appears in Fig. 1c. In this state, the values of all four components have changed from on to off. Component A must be off because A's operating always causes B to operate, and the antecedent stipulates that B is off. But because A is off, C will be off as well. Finally, since both B and C are off, D too must be off. We can then determine the truth of the counterfactual from the resulting minimally different state. As D is not operating in this state, the answer to the question "If component B had not operated, would D have operated?" is no. In making these inferences, we are reasoning diagnostically from an effect (B's not operating) to its cause (A's not operating), a pattern of inferences we will call causal backtracking. The result of these inferences disagrees with the conclusion from pruning theory, and the disagreement makes it possible to test the two theories as descriptions of people's judgments about counterfactuals.
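The contrast between the two theories can be made concrete with a small simulation. The sketch below is our own illustration (not code from the studies reported here); it encodes the deterministic, separately caused device of Fig. 1 with the structural equations B = A, C = A, and D = B or C, and evaluates the counterfactual "If component B had not operated, would D have operated?" under each theory. The function names and the brute-force search over device states are our own simplifications of the two proposals.

```python
# Sketch of the two Bayes-net accounts on the device of Fig. 1 (all links deterministic,
# B and C separately sufficient for D). Structural equations: B = A, C = A, D = B or C.

ACTUAL = {"A": 1, "B": 1, "C": 1, "D": 1}  # all components are currently operating

def propagate(a, forced_b=None):
    """Apply the device's causal laws: B = A (unless B is forced), C = A, D = B or C."""
    b = a if forced_b is None else forced_b
    c = a
    d = int(b == 1 or c == 1)
    return {"A": a, "B": b, "C": c, "D": d}

def pruning(antecedent_b=0):
    """Pearl-style pruning: sever the link into B, force B to the antecedent value,
    keep A at its actual value, and propagate the intervention downstream."""
    return propagate(ACTUAL["A"], forced_b=antecedent_b)

def minimal_network(antecedent_b=0):
    """Hiddleston-style minimal network (simplified): among states that obey every
    deterministic law and make the antecedent true, return those that differ from the
    actual state in the fewest components. Backtracking from B to A is allowed."""
    legal = [propagate(a) for a in (0, 1)]
    legal = [s for s in legal if s["B"] == antecedent_b]
    def distance(state):
        return sum(state[k] != ACTUAL[k] for k in ACTUAL)
    best = min(distance(s) for s in legal)
    return [s for s in legal if distance(s) == best]

print(pruning())          # {'A': 1, 'B': 0, 'C': 1, 'D': 1}: D would still have operated (Fig. 1b)
print(minimal_network())  # [{'A': 0, 'B': 0, 'C': 0, 'D': 0}]: D would not have operated (Fig. 1c)
```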

1.3. The state of play

Previous experiments have found mixed evidence on pruning theory's predictions about counterfactuals. On the one hand, an initial study by Sloman and Lagnado (2005) found that people believe that the causes of the variable mentioned in a counterfactual's antecedent do not change their values in the counterfactual state, Sloman and Lagnado's "undoing" effect. In the example of Fig. 1, undoing means answering "yes" to the question in (1a). As Figs. 1b, c illustrate, this answer is consistent with pruning theory but inconsistent with minimal-network theory. On the other hand, Sloman and Lagnado's (2005) data also indicate that the percentage of "yes" responses to questions like (1a) is smaller than to those about direct interventions, for example, (1b):

(1) a. If component B had not operated, would component A have operated?
    b. If someone prevented component B from operating, would component A operate?

Pruning theory predicts the answer to both questions should be "yes," since the same operation on the graph mediates these responses, according to this theory. More recent studies (Dehghani, Iliev, & Kaufmann, 2012; Rips, 2010) have found cases in which the majority of responses to questions such as (1a) is "no." In such situations, people seem to do causal backtracking. For example, when participants learn that A's operating always causes B to operate and are then asked (1a), they tend to respond negatively (Rips, 2010). Figs. 1b, c show that this answer accords with minimal-network theory but not pruning theory.

The present experiments reexamine the predictions of pruning and minimal-network theories but enlarge the scope of the investigation in several ways. First, we aim to identify some boundary conditions in applying these two theories. In considering what would have happened if a component had not operated, people may have a range of options in imagining the counterfactual state. For the example of Fig. 1, in thinking about why component B might not have operated, we could imagine the problem as local to B, perhaps some failure of the component itself or interference from an external source. If so, we might reasonably agree that component A would still be operating in the counterfactual state of affairs, consistent with pruning theory (see Fig. 1b). However, we could also see the problem with B as the result of some prior difficulty, in this case, one associated with A. We would then believe it likely that A would not be operating in the counterfactual state, consistent with minimal-network theory (see Fig. 1c). In the present experiments, we compare two ways of wording the counterfactual question to see whether we can alter participants' assumptions about the local or non-local origin of the changed state of affairs. Participants in one condition read the questions phrased in terms of the component not operating, as in (2a), whereas participants in a second condition read the questions phrased in terms of the component having failed, as in (2b):

(2) a. If component B had not operated, would component A have operated, have not operated, or might or might not have operated?
    b. If component B had failed, would component A have operated, have not operated, or might or might not have operated?

We note that this same prediction also applies to the other components in the Fig. 1 device.

A second innovation in these experiments is that we look at the explanations people give for their decisions about counterfactual states. The predictions just mentioned assume that people have different ways of conceiving a counterfactual state in which, for example, a particular component is not working. People may ask themselves why the antecedent might have occurred and then use the most likely reason to project further information about this counterfactual situation (Rips, 2011). To examine this possibility, we asked participants in Experiments 1 and 2 to provide brief explanations of their decisions at the end of each trial. Experiments in developmental psychology suggest a relation between 3- and 4-year-old children's ability to explain why an action could or could not be performed and their ability to state whether counterfactual alternatives to the action exist (Sobel, 2004). The relation, however, was relatively weak, perhaps because of task demands (as Sobel suggests). More robust relations between explanations and counterfactual judgments may surface for adult participants, who may be less susceptible to task demands.

Third, these experiments use a novel arrangement of causal components (variations on the diamond-shaped structure of Fig. 1a) that helps eliminate an alternative interpretation of our previous findings (see Section 4.3). This structure allows us to examine both direct and indirect effects of the antecedent on the consequent. For example, if component B in Fig. 1a had not operated, then this could have affected component D in two ways: directly through the B-to-D link and indirectly through the B-to-A-to-C-to-D path. Minimal-network theory, but not pruning theory, permits the use of the indirect, back-door path in assessing counterfactuals. The diamond structure thus gives us a test of the theories in a situation in which the counterfactual itself does not force participants to consider the antecedent's causes. B's cause, component A, is not named in the question just mentioned. In this setting, then, potential evidence favoring minimal-network theory is less likely to be due to the question calling attention to the indirect path.

Fourth, we allowed participants in these studies to say that the other components "might or might not have operated," as in (2). Previous studies have provided only two response options, "yes" (component A would have operated) and "no" (component A would not have operated). The absence of a "maybe" response (component A might or might not have operated) makes it difficult to determine whether participants who gave "no" responses believed that A definitely would not have operated or that A might or might not have operated. Since minimal-network theory often predicts "maybe" responses when causal links are probabilistic (e.g., A sometimes causes B), the addition of this third response option enables a more sensitive test of minimal networks.

2. Experiment 1: Alternative counterfactual possibilities

The aim of this study is to find out how people's causal interpretation of a device affects their answers to counterfactual questions. As we have just noted, one aspect of this interpretation is the locus of the counterfactual change. Counterfactuals beginning "If component B had failed" seem to imply a change that is local to B, in line with the predictions of pruning theory. However, counterfactuals beginning "If component B had not operated" allow for changes brought about by earlier events, in line with the predictions of minimal-network theory. For this reason, we asked one group of participants questions with the "failed" wording and a second group questions with the "not operated" wording [see (2a and b)].

We also varied the causal structure of the devices to examine detailed predictions from pruning and minimal-network theory. Participants saw a series of eight different devices, all with the diamond shape of Fig. 1. Previous research has found that participants can trace the effects of interventions on the components of such systems (Meder, Hagmayer, & Waldmann, 2009). This makes the system suitable for a test of pruning theory, which models counterfactuals as interventions (Pearl, 2000, chap. 7). In four of these devices, components B and C independently controlled D, as in the earlier example. In four others, however, components B and C had to work together in order to cause D to operate, according to the descriptions we supplied participants. Figs. 2a-d show the former separately caused devices, and Figs. 2e-h the latter jointly caused devices. (The arc connecting the arrows from B to D and from C to D in the figure indicates joint operation.) The devices also varied in whether the connections between components were deterministic or probabilistic. In half the devices, the links between A and B and between A and C were deterministic (as represented by solid lines in Fig. 2), and in the remaining devices, probabilistic (dashed lines). Independently, the links between B and D and C and D could also be deterministic or probabilistic. For the deterministic links, we told participants that the cause always produced the effect (e.g., "Component A's operating always causes component B to operate"), and for the probabilistic links, we told participants that the cause usually produced the effect (e.g., "Component A's operating usually causes component B to operate"). The combination of these factors produced the eight devices in Fig. 2.

Fig. 2. Mean responses that a component (shown on the x-axis) would operate in a counterfactual state, Experiment 1. Graphs in the same column refer to the device shown immediately above. Solid lines and circles represent counterfactuals in which a component "had failed"; dashed lines and squares represent counterfactuals in which a component "had not operated." The top row of graphs corresponds to counterfactuals in which A had not operated [failed], the middle row to counterfactuals in which B had not operated [failed], and the bottom row to counterfactuals in which D had not operated [failed].

For each device, we asked separate counterfactual questions based on what would have happened if component A had not operated [failed], if component B had not operated [failed], and if component D had not operated [failed]. As all the devices were causally symmetric with respect to components B and C, we did not ask a separate counterfactual question about C. For each of the three antecedents, participants decided the states of the remaining three components.

Predictions from pruning theory are straightforward. If component A had not operated, none of the other components could operate, since A is their root cause. Participants should therefore answer "would not have operated" for all the remaining components. If component D had not operated, pruning theory cuts the incoming links to D but otherwise leaves the state of the device unchanged. Thus, participants should answer that the other components would have operated.

Finally, if component B had not operated, the answers for the separately caused devices should follow the pattern shown in Fig. 1b: A, C, and D would all have operated (see Figs. 2a-d). However, when both B and C are necessary for D to operate (the jointly caused devices in Figs. 2e-h), then if B had not operated, A and C would have operated but D would not have operated.

Predictions for minimal-network theory are more complex because some of the devices have more than one model in which the antecedent is true and which are minimally different from the device's current state. In general, though, minimal-network theory predicts either a "no" answer (as in the example in Fig. 1c) or a "might or might not" answer. Minimal-network theory never predicts a "would have operated" answer, whereas pruning theory predicts many such answers.

2.1. Method

Participants in this experiment read descriptions of each of the eight devices shown in Fig. 2. For each device, participants then answered counterfactual questions about the operating states of the components.

2.1.1. Procedure and materials

Participants received a booklet containing three pages of instructions followed by 24 pages of questions. The first page of instructions explained the task to participants, saying that they would read about a series of hypothetical devices and then make some decisions about the way the devices would work under certain conditions. The second and third pages contained instructions for interpreting the Fig. 2 diagrams, showing how each device operated. We asked participants to go through the question pages in the order in which they appeared in the booklet and not return to a page after they had completed it. After reading the instructions, participants proceeded through the booklets at their own pace.

Each of the following 24 pages contained a written description of one of the devices in Fig. 2, which was accompanied by the corresponding diagram. For example, the description of the device in Fig. 2e stated:

Professor McNutt of the Department of Engineering has designed a device called a glux. The glux has only four components, labeled A, B, C, and D. The device works in the following way. Component A's operating always causes component B to operate. Component A's operating always causes component C to operate. Component B's operating and component C's operating together always causes component D to operate. Component B's operating alone never causes component D to operate. Component C's operating alone never causes component D to operate.

For the devices with probabilistic connections, the descriptions substituted "usually" for "always."

For the separately caused devices, the last two sentences in the above description were omitted, and the third sentence was replaced with the two sentences "Component B's operating always [usually] causes component D to operate" and "Component C's operating always [usually] causes component D to operate."

After reading this description, participants were told that at present, components A, B, C, and D are all operating. Next, participants were asked counterfactual questions about the device, such as "If component A had not operated, would components B, C, and D have operated?" In what follows, we will refer to the component mentioned in the antecedent (e.g., A in the example just given) as the antecedent component, and the component mentioned in the consequent, as the consequent component.

To gain insight into the mental process participants were using, we presented the problems in the form shown in Fig. 3. We labeled one of the nodes of the diagram (B in Fig. 3) "had failed" or "had not operated," depending on condition. Each of the remaining nodes was adjacent to a text box giving participants the choice of "had operated," "had not operated," or "might or might not have operated," and a blank labeled "Number." Participants were free to consider the components in any order, but they were to record this sequence (1-3) in the number blanks. After reasoning about all three non-antecedent components, participants were asked to provide freeform responses to the prompt "please explain why you answered in the way you did."

For each device, participants answered three counterfactual questions. These questions varied in whether component A, B, or D was the antecedent component. The three questions appeared separately on consecutive booklet pages, with the order of the antecedent components being ABD or DBA for each device. The order of the questions was balanced across participants. We manipulated the wording of the counterfactual question across two groups of participants. We told participants in the not operated condition that the antecedent component "had not operated," and participants in the failed condition that the antecedent component "had failed," as in (2).

Fig. 3. Sample response format for the counterfactual "If component B had not operated [failed]," Experiments 1 and 2.
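For reference, the 2 x 2 x 2 design that generated the eight devices of Fig. 2 can be summarized as below. This is a hypothetical sketch of our own; the field names are not part of the original materials.

```python
# Hypothetical summary of the 2 x 2 x 2 device design behind Fig. 2.
from dataclasses import dataclass
from itertools import product

@dataclass
class Device:
    links_into_b_and_c: str   # "always" (deterministic) or "usually" (probabilistic)
    links_into_d: str         # "always" or "usually"
    causation_of_d: str       # "separate" (B or C suffices) or "joint" (B and C required)

DEVICES = [Device(upper, lower, joint)
           for upper, lower, joint in product(("always", "usually"),
                                              ("always", "usually"),
                                              ("separate", "joint"))]

assert len(DEVICES) == 8   # one device per column of Fig. 2
```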

We constructed 16 distinct questionnaire booklets: eight different random orders of the devices (based on a Latin square), crossed with the two wording conditions (not operated, failed).

2.1.2. Participants

The booklets were randomly assigned to 34 participants. All participants were undergraduate students at Northwestern University, and they received credit in their introductory psychology course for taking part in the experiment. Participants took approximately 30 min to complete this study. During the same experimental session, participants also completed a number of unrelated studies. The entire session lasted approximately 50 min.

2.2. Results and discussion

In examining the results, we first look at participants' answers to the counterfactual questions (e.g., "If component B had not operated, would component A [C, D] have operated?") and the consistency of these answers. We then check the order in which participants considered the components for a given antecedent (i.e., the order in which they decided about components A, C, and D in the example just mentioned) and the explanations they gave for their decisions.

2.2.1. Counterfactual operating states

When a component is described as having failed in a counterfactual state, the description implies a fault local to the component itself. However, when a component is described as having not operated, the reason for the fault is more open-ended, possibly traceable to problems with causally preceding components. We gave these two kinds of wording to separate groups of participants in this experiment, and the difference between their responses appears quite clearly in Fig. 2.

Participants answered each counterfactual question (e.g., "If component B had not operated, would component A have operated?") by circling either "would have operated," "would NOT have operated," or "might or might not have operated," and we coded these responses by giving a score of +1 to "would," -1 to "would not," and 0 to "might or might not." The graphs in Fig. 2 plot the means of these responses. The columns in this figure correspond to the device at the column's top. The rows of graphs indicate the antecedent of the counterfactual: the first row for "If component A had not operated [failed]," the second for "If component B had not operated [failed]," and the third for "If component D had not operated [failed]."

When the counterfactual's antecedent stated that component A had not operated [failed], these two types of wording produced similar results. But for the other antecedents, the "failed" wording (circles and solid lines in Fig. 2) typically produced higher scores than the "not operated" wording (squares and dashed lines). If a downstream component fails, then other components may still be operating. If a downstream component is not operating, however, then that may be because causally prior components are not operating.

The latter state of affairs tended to lower the scores for these prior components and for some of their effects. Overall, the mean score for the "failed" wording was -0.14 and the mean for the "not operated" wording was -0.43, F(1, 32) = 7.07, MSE = 7.29, p = .01. Note that both scores were negative: Participants were more likely to decide that the consequent component would not have operated than that it would have operated. The effect of wording reflected a shift toward more "would have operated" responses (and fewer "would not" or "might or might not" responses) in the failed condition. For the not operated condition, 6.2% of responses were "would have operated," 44.7% were "might or might not," and 49.1% were "would not." In the failed condition, however, 23.9% of responses were "would," 38.4% were "might or might not," and 37.7% were "would not."

The effect of wording also depended on the nature of the device and the specific counterfactual question. Differences tended to be biggest when the antecedent component was B or D rather than A, as just mentioned. This made for a significant interaction between wording (failed vs. not operated) and antecedent component, F(2, 64) = 4.91, MSE = 0.99, p = .01. Moreover, when B was the antecedent component, wording differences depended on B's connection to other components. When the connection between A and B was deterministic (Figs. 2a-b, e-f), the difference between B's failing and B's not operating mattered. But when the connection between A and B was probabilistic (Figs. 2c-d, g-h), the effect of wording was reduced. In the probabilistic case, participants may have thought that A might be on regardless of whether B is said to have failed or to have not operated, because component A only usually causes B. These complex dependencies produced an interaction among wording, device type, the antecedent component, and the consequent component, F(21, 662) = 2.40, MSE = 0.07, p < .001.

Pruning theory and minimal-network theory agree in their predictions in just two situations. One of these is the case in which A is the antecedent component ("If A had not operated/failed"), as we mentioned earlier. The other case of agreement occurs for the jointly caused devices (Figs. 2e-h): When B is the antecedent component, then both theories predict that D would not have operated. For all other cases, pruning theory predicts that the answer to the counterfactual question is that the component in question would have operated, but minimal-network theory predicts that the component would not have operated or might or might not have operated. As a test of these theories, we can therefore check the mean response for the counterfactual questions on which they differ. For the not operated wording, the mean for these conditions was -0.27, in line with the predictions of minimal-network theory. For the failed wording, the comparable mean was 0.18, in the direction predicted by pruning theory. The difference due to wording was significant here, as it was in the full data set, F(1, 32) = 11.96, MSE = 6.27, p = .002. However, only the value for the not operated wording is significantly different from 0 (t(17) = 4.50, p < .001 for the not operated condition and t(17) = 1.68, p = .110 for the failed condition). Minimal-network theory appears to deliver better predictions for the not operated wording (as in Rips, 2010), but not for the failed wording.
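The scoring behind these means is simply an average of coded responses; the snippet below is a minimal sketch of our own (the labels are ours), included only to make the arithmetic explicit.

```python
# Hypothetical sketch of the response coding: +1 = "would have operated",
# 0 = "might or might not have operated", -1 = "would not have operated";
# condition means are averages of these codes.
SCORE = {"would": 1, "might or might not": 0, "would not": -1}

def mean_score(responses):
    """Average the coded answers for one wording condition."""
    return sum(SCORE[r] for r in responses) / len(responses)

# Mostly "would not" answers give a negative mean, as in the not operated condition.
print(mean_score(["would not", "would not", "might or might not", "would"]))  # -0.25
```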

2.2.2. Response consistency

We can assess the participants' responses to the counterfactual questions in terms of how accurately they reflected the logic of the devices. If component A is off, for example, then all other components must also be off, according to both theories. So participants who stated that A would not be operating but another component would be operating (or might or might not be operating) are inconsistent in this sense. Further inconsistencies can arise from the relations between components B and D or between components C and D.

Most participants (27 of 34) made one or more causally inconsistent responses, with about the same number in the not operated condition (13) as in the failed condition (14). Some of these responses may have been due to low-level response errors or to caution (e.g., responding "might or might not" in situations warranting stronger "would" or "would not" answers). The most common errors, however, were cases in which the counterfactual antecedent was "If component A had not operated [failed]" and the links from A were probabilistic (see Figs. 2c, d, g, h). Participants sometimes said that components B and C might or might not have operated in this situation. These participants may have interpreted the statement that A usually causes B and C to imply an unobserved (exogenous) cause for B and C.

We can also examine the responses in terms of their consistency with pruning and minimal-network theories. Fifteen of our 34 participants gave the responses predicted by minimal-network theory on 75% or more of the questions. Only two participants answered in accord with pruning theory on 75% or more of the questions. Another two participants, however, responded in a way that resembled pruning theory. These participants said that an upstream component would have operated under the counterfactual, but sometimes also said that a downstream component might or might not have operated in the same circumstances. Consider, for example, the device in Fig. 2b (which has deterministic links into components B and C, but probabilistic links into D) and a counterfactual beginning "If component B had not operated..." Both participants stated that A and C would have operated in this case and that D might or might not have operated. Pruning theory predicts that D would be operating here, since its value is fixed to reflect its operation in the actual state of affairs. These participants, however, apparently took the probabilistic connection from C to D to imply that D would not necessarily be operating. This strategy and the one mentioned in the previous paragraph hint at ways of dealing with probabilistic connections that go beyond the two models we are comparing.

2.2.3. Processing order of components

Participants noted the order in which they considered the components by filling in the number blanks in their booklet (see Fig. 3). In general, participants answered the counterfactual questions in order from left to right. But some modulation in this pattern occurred that depended on the type of device. For the jointly caused devices (Figs. 2e-h), if component B is not operating, then component D must not be operating, since D only operates when both B and C operate. Thus, participants could proceed to D before determining the status of C.

However, for the separately caused devices (Figs. 2a-d), if B is not operating, D might still be operating, since C is sufficient to turn it on. So participants needed to check the status of C before evaluating D. To examine the effect of this difference among the devices, we used the serial order (1, 2, or 3) with which participants responded to each component. Differences in ordering were clearest when B was the antecedent component, since participants could move either backward or forward from the B position. For the jointly caused devices, the mean serial positions for the remaining components were as follows: 1.58 for A, 2.25 for C, and 2.14 for D. For the separately caused devices, the corresponding positions were as follows: 1.30 for A, 2.17 for C, and 2.50 for D. The reversal in position for C and D was in line with the difference in logic just described, and it produced a significant interaction between consequent component (C vs. D) and device, F(7, 214) = 3.91, MSE = 0.43, p < .001. However, wording (failed vs. not operated) had no significant effect on serial position.

2.2.4. Freeform explanations

We coded participants' explanations of how they reasoned about the counterfactual questions based on whether they were consistent with pruning theory or minimal-network theory. Specifically, we coded whether the explanations: (a) showed evidence of causal backtracking or (b) suggested that the states of causes are causally independent of the states of their effects. An individual who was unfamiliar with the experimental hypotheses coded the data, and then the data from 25% of participants were independently coded by a second individual. Intercoder reliability was 90%.

We coded explanations as backtracking if they involved reasoning backwards from the state of an effect component to the operating states of cause components. A backtracking explanation is consistent with minimal-network theory. An example of a causal backtracking explanation from one participant is "If B was not operating that would mean A was not working, since A always causes B." We coded explanations as "causes are independent of effects" if they suggested that changing the state of an effect component would not change the state of the cause component, an idea consistent with pruning theory. An example of a causes-independent-of-effects explanation is: "Neither A, B, nor C is dependent on D so they all will have operated." These types of explanation are only applicable when components exist that are upstream of the antecedent component. Therefore, the following analyses only include participants' explanations when component B or D was the antecedent.

Participants' explanations provided further evidence that the "not operated" versus "failed" wording influenced their reasoning. Overall, participants offered more backtracking than causes-are-independent-of-effects explanations, F(1, 32) = 17.32, MSE = 19.60, p < .001. However, this difference was greater in the not operated condition than in the failed condition, F(1, 32) = 9.21, MSE = 19.60, p = .005. With not-operated wording, participants produced an average of 8.9 backtracking explanations (out of a maximum of 16) but only 1.2 causes-independent explanations. With failed wording, however, the means were 4.2 for backtracking and 2.9 for causes-independent.

3. Experiment 2: Explanations of counterfactual states

In thinking about counterfactual states, participants in our first experiment based their judgments on the conditional's framing. Antecedents of the form "If component X had not operated" invited them to explore prior causes of X's non-operation. Similarly, participants' explanations of their answers suggested that they were reasoning diagnostically from X's not operating to the state of X's causes. However, for conditional antecedents of the form "If component X had failed," participants were more likely to think that prior causes had operated, and their explanations more often appealed to the idea that these causes were independent of X. These results suggest that in answering counterfactual questions, people try to find a plausible explanation of the counterfactual state and let the explanation dictate their decisions.

The present experiment explores this connection between explanations and counterfactual states in a more systematic way. In one part of the experiment, participants read about the Fig. 2 devices, and for each device, they decided which of a set of possible explanations best accounted for the states mentioned in the antecedents. The question had the form in (3):

(3) If component A [B, D] had not operated [failed], which of the following would best explain why?

We chose these explanations to span the range of possible ways in which the state could come about, and Table 1 lists the explanations we employed. Note that this is a different type of explanation from the one participants provided in Experiment 1. In that experiment, participants explained (in their own words) why the non-antecedent components would or would not have operated in the counterfactual state. In the present experiment, participants explain why the antecedent component had not operated by choosing an explanation from a fixed list. In a second part of the experiment, participants carried out the same reasoning task we had used in Experiment 1.

The explanations in Table 1 provide an additional test of pruning and minimal-network theories. Explanations such as "component B was internally broken," "factors external to the device prevented component B from operating," or "component A operated, but the connection between component A and component B was broken" provide reasons that are consistent with pruning theory. But explanations such as "component A did not operate, which in turn caused component B not to operate" are compatible with minimal-network theory, at least when the connection between A and B is deterministic. (When the A-to-B connection is probabilistic, minimal-network theory is also consistent with "component A operated, but component B just did not operate this time because the connection between component A and component B is unreliable.") We also expect that variations in wording will affect the explanation task in the same way as the inference task. If a component had failed, explanations should emphasize faults local to that component. However, if a component had not operated, explanations should emphasize problems with earlier components or links (e.g., "Component A did not operate, which in turn caused component B not to operate").

Table 1
Answer options for the question "If component X had not operated [failed], which of the following would best explain why?" (Experiment 2)

Antecedent: If component A had not operated [failed]
1. Component A was internally broken.
2. Factors external to the device prevented component A from operating.
3. Component A operates unreliably, and component A just did not operate this time.

Antecedent: If component B had not operated [failed]
1. Component B was internally broken.
2. Factors external to the device prevented component B from operating.
3. Component B operates unreliably, and component B just did not operate this time.
4. Component A did not operate, which in turn caused component B not to operate.
5. Component A operated, but component B just did not operate this time because the connection between component A and component B is unreliable.
6. Component A operated, but the connection between component A and component B was broken.

Antecedent: If component D had not operated [failed]
1. Component D was internally broken.
2. Factors external to the device prevented component D from operating.
3. Component D operates unreliably, and component D just did not operate this time.
4. Component B or [and] component C did not operate, which in turn caused component D not to operate. (a)
5. Component B and component C both operated [component B or component C operated], but component D just did not operate this time because the connection between components B and C and component D [between component B and component D or between component C and component D] is unreliable. (a)
6. Component B and component C both operated [component B or component C operated], but the connection between components B and C and component D [between component B and component D or between component C and component D] was broken. (a)

(a) When component D was the antecedent component, the wording varied slightly based on whether component B and component C must both be operating in order for component D to operate or whether component B or component C operating alone is sufficient for component D to operate, as noted by the brackets.

3.1. Method

Participants performed a counterfactual inference task similar to that in Experiment 1. In addition, they performed a separate explanation task in which they selected an explanation for why the antecedent component had not operated (or had failed to operate) from the list in Table 1.

3.1.1. Procedure and materials

Half the participants completed the inference task followed by the explanation task, and half completed these two tasks in the reverse order.

Inference task: The inference task was identical to that in Experiment 1, except that participants were not asked to justify their inferences on the same booklet page. As in Experiment 1, the causal devices were the eight machines in Fig. 2.

Explanation task: In the explanation task, we asked participants to explain why the antecedent component had not operated (not operated condition) or failed to operate (failed condition). Each participant received a 27-page booklet containing three pages of instructions followed by 24 pages of questions. The instructions were similar to those for the inference task of this experiment. Each of the following 24 pages contained a written description of one of the devices accompanied by the corresponding diagram from Fig. 2. After reading this description, participants were told the present state of the device, which was always that all four components are operating. Next, participants in the not operated condition answered a question of the form "If component X had not operated, which of the following would best explain why?" Participants in the failed condition received a question that was identical except that "failed" substituted for "not operated." Separate consecutive pages asked questions for the components A, B, and D. As in Experiment 1, the order of the antecedent components for each device was either ABD or DBA. For each participant, the order of the devices and of the antecedent components were the same as in the inference task.

Participants selected one explanation from the list in Table 1. The set of choices varied depending on which component (A, B, or D) was the antecedent component and on whether the device was separately caused or jointly caused, as the Table indicates. Half the participants received the explanations in the order shown in Table 1, and the other half received the explanations in the reverse order. After selecting an explanation, participants rated their confidence in their response on a 0-9 scale, where 0 = not at all confident and 9 = extremely confident. There were 16 distinct explanation booklets: eight different random orders of the devices (based on a Latin square), crossed with the two wording conditions (not operated, failed).

3.1.2. Participants

The 32 participants were from the same pool as those in Experiment 1 but had not participated in the earlier study. We assigned participants randomly to the not operated or failed condition. The inference task and the explanation task each took approximately 20 min to complete.

3.2. Results and discussion

The goal of the present experiment was to examine two aspects of how people think about counterfactual states. We look first at direct answers to questions such as "If component B had not operated, would component D have operated?" and then at explanations for why component B had not operated.

3.2.1. Inferences about counterfactual states

Participants' decisions about the status of the components closely matched those of Experiment 1. We scored the answers as we had in Experiment 1, using +1 if the participant believed the component would have operated, -1 if the component would not have operated, and 0 if the component might or might not have operated. Fig. 4 displays these results in the same format as in Fig. 2 to bring out the similarity between them.

As in the earlier study, when the antecedent component was described as having not operated, scores for the remaining components tended to be more negative (M = -0.48) than when the antecedent component was described as having failed (M = -0.16). These means were quite close to those of Experiment 1 (-0.43 and -0.14, respectively). The overall difference due to wording was again significant, F(1, 30) = 14.47, MSE = 4.14, p < .001. As in Experiment 1, this difference was due to more "would have operated" responses and fewer "would not" or "might or might not" responses in the failed condition. In the not operated condition, 4.3% of responses were "would have operated," 43.2% were "might or might not," and 52.5% were "would not." In the failed condition, 27.3% of responses were "would," 29.2% were "might or might not," and 43.5% were "would not."

As Fig. 4 suggests, the wording differences were again larger when the antecedent component was B or D than when it was A, F(2, 60) = 5.35, MSE = 1.01, p = .007. Since A is the initial cause, if A had failed or had not operated, the other components must not have operated. However, if components B or D had not operated, upstream components are not necessarily on or off, allowing for the effects of wording. Fig. 4 also shows that wording interacted with the structure of the device. If a downstream component had not operated, then whether an upstream component would have operated depended on the connections between them. (Deterministic connections suggested that the upstream component would not have operated, whereas probabilistic connections made the status of the upstream component more uncertain.) However, if a downstream component had failed, the implications for upstream components were less marked. As in Experiment 1, this made for a significant interaction of wording with the specific device, antecedent component, and consequent component, F(21, 616) = 2.36, MSE = 0.06, p < .001.

We also tested predictions of pruning and minimal-network theories in those situations in which they diverge, following the procedure of Experiment 1 (see Section 2.2.1). In the not operated condition, the mean response was -0.25 for these counterfactuals, favoring minimal-network predictions. But in the failed condition, the mean was 0.27, favoring pruning theory. The difference between these means was significant (F(1, 28) = 18.43, MSE = 0.12, p < .001), and both means differed from 0 (t(16) = 6.59, p < .001 for the not operated condition and t(16) = 2.58, p = .020 for the failed condition). This effect did not depend on whether participants completed the inference task before the explanation task or completed the tasks in the opposite order, F(1, 28) < 1 for the interaction.

Logical consistency was higher in this experiment than in Experiment 1, with 15 of 32 participants making no errors of this sort (see Section 2.2.2 for a description of these). But as in the previous study, the failed and not operated conditions held about equal


More information

CMPT Machine Learning. Bayesian Learning Lecture Scribe for Week 4 Jan 30th & Feb 4th

CMPT Machine Learning. Bayesian Learning Lecture Scribe for Week 4 Jan 30th & Feb 4th CMPT 882 - Machine Learning Bayesian Learning Lecture Scribe for Week 4 Jan 30th & Feb 4th Stephen Fagan sfagan@sfu.ca Overview: Introduction - Who was Bayes? - Bayesian Statistics Versus Classical Statistics

More information

B. Weaver (24-Mar-2005) Multiple Regression Chapter 5: Multiple Regression Y ) (5.1) Deviation score = (Y i

B. Weaver (24-Mar-2005) Multiple Regression Chapter 5: Multiple Regression Y ) (5.1) Deviation score = (Y i B. Weaver (24-Mar-2005) Multiple Regression... 1 Chapter 5: Multiple Regression 5.1 Partial and semi-partial correlation Before starting on multiple regression per se, we need to consider the concepts

More information

CS 5522: Artificial Intelligence II

CS 5522: Artificial Intelligence II CS 5522: Artificial Intelligence II Bayes Nets: Independence Instructor: Alan Ritter Ohio State University [These slides were adapted from CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]

More information

What if Every "If Only" Statement Were True?: The Logic of Counterfactuals

What if Every If Only Statement Were True?: The Logic of Counterfactuals Michigan State University College of Law Digital Commons at Michigan State University College of Law Faculty Publications 1-1-2008 What if Every "If Only" Statement Were True?: The Logic of Counterfactuals

More information

Hardy s Paradox. Chapter Introduction

Hardy s Paradox. Chapter Introduction Chapter 25 Hardy s Paradox 25.1 Introduction Hardy s paradox resembles the Bohm version of the Einstein-Podolsky-Rosen paradox, discussed in Chs. 23 and 24, in that it involves two correlated particles,

More information

Causal Models Guide Analogical Inference

Causal Models Guide Analogical Inference Lee, H. S., & Holyoak, K. J. (2007). Causal models guide analogical inference. In D. S. McNamara & G. Trafton (Eds.), Proceedings of the Twenty-ninth Annual Conference of the Cognitive Science Society

More information

Name: UW CSE 473 Final Exam, Fall 2014

Name: UW CSE 473 Final Exam, Fall 2014 P1 P6 Instructions Please answer clearly and succinctly. If an explanation is requested, think carefully before writing. Points may be removed for rambling answers. If a question is unclear or ambiguous,

More information

Answering Causal Queries about Singular Cases

Answering Causal Queries about Singular Cases Answering Causal Queries about Singular Cases Simon Stephan (simon.stephan@psych.uni-goettingen.de) Michael R. Waldmann (michael.waldmann@bio.uni-goettingen.de) Department of Psychology, University of

More information

Project Management Prof. Raghunandan Sengupta Department of Industrial and Management Engineering Indian Institute of Technology Kanpur

Project Management Prof. Raghunandan Sengupta Department of Industrial and Management Engineering Indian Institute of Technology Kanpur Project Management Prof. Raghunandan Sengupta Department of Industrial and Management Engineering Indian Institute of Technology Kanpur Module No # 07 Lecture No # 35 Introduction to Graphical Evaluation

More information

Hestenes lectures, Part 5. Summer 1997 at ASU to 50 teachers in their 3 rd Modeling Workshop

Hestenes lectures, Part 5. Summer 1997 at ASU to 50 teachers in their 3 rd Modeling Workshop Hestenes lectures, Part 5. Summer 1997 at ASU to 50 teachers in their 3 rd Modeling Workshop WHAT DO WE TEACH? The question What do we teach? has to do with What do we want to learn? A common instructional

More information

Delayed Choice Paradox

Delayed Choice Paradox Chapter 20 Delayed Choice Paradox 20.1 Statement of the Paradox Consider the Mach-Zehnder interferometer shown in Fig. 20.1. The second beam splitter can either be at its regular position B in where the

More information

Counterfactual Undoing in Deterministic Causal Reasoning

Counterfactual Undoing in Deterministic Causal Reasoning Counterfactual Undoing in Deterministic Causal Reasoning Steven A. Sloman (Steven_Sloman@brown.edu) Department of Cognitive & Linguistic Sciences, ox 1978 rown University, Providence, RI 02912 USA David

More information

GRADE 7 MATH LEARNING GUIDE. Lesson 26: Solving Linear Equations and Inequalities in One Variable Using

GRADE 7 MATH LEARNING GUIDE. Lesson 26: Solving Linear Equations and Inequalities in One Variable Using GRADE 7 MATH LEARNING GUIDE Lesson 26: Solving Linear Equations and Inequalities in One Variable Using Guess and Check Time: 1 hour Prerequisite Concepts: Evaluation of algebraic expressions given values

More information

30. TRANSFORMING TOOL #1 (the Addition Property of Equality)

30. TRANSFORMING TOOL #1 (the Addition Property of Equality) 30 TRANSFORMING TOOL #1 (the Addition Property of Equality) sentences that look different, but always have the same truth values What can you DO to a sentence that will make it LOOK different, but not

More information

On the teaching and learning of logic in mathematical contents. Kyeong Hah Roh Arizona State University

On the teaching and learning of logic in mathematical contents. Kyeong Hah Roh Arizona State University On the teaching and learning of logic in mathematical contents Kyeong Hah Roh Arizona State University khroh@asu.edu Students understanding of the formal definitions of limit teaching and learning of logic

More information

4 Derivations in the Propositional Calculus

4 Derivations in the Propositional Calculus 4 Derivations in the Propositional Calculus 1. Arguments Expressed in the Propositional Calculus We have seen that we can symbolize a wide variety of statement forms using formulas of the propositional

More information

Paradoxes of special relativity

Paradoxes of special relativity Paradoxes of special relativity Today we are turning from metaphysics to physics. As we ll see, certain paradoxes about the nature of space and time result not from philosophical speculation, but from

More information

Math 38: Graph Theory Spring 2004 Dartmouth College. On Writing Proofs. 1 Introduction. 2 Finding A Solution

Math 38: Graph Theory Spring 2004 Dartmouth College. On Writing Proofs. 1 Introduction. 2 Finding A Solution Math 38: Graph Theory Spring 2004 Dartmouth College 1 Introduction On Writing Proofs What constitutes a well-written proof? A simple but rather vague answer is that a well-written proof is both clear and

More information

Modeling the Role of Unobserved Causes in Causal Learning

Modeling the Role of Unobserved Causes in Causal Learning Modeling the Role of Unobserved Causes in Causal Learning Christian C. Luhmann (christian.luhmann@vanderbilt.edu) Department of Psychology, 2 Hillhouse Ave New Haven, CT 06511 USA Woo-koung Ahn (woo-kyoung.ahn@yale.edu)

More information

DISCUSSION CENSORED VISION. Bruce Le Catt

DISCUSSION CENSORED VISION. Bruce Le Catt Australasian Journal of Philosophy Vol. 60, No. 2; June 1982 DISCUSSION CENSORED VISION Bruce Le Catt When we see in the normal way, the scene before the eyes causes matching visual experience. And it

More information

Chapter 24. Comparing Means

Chapter 24. Comparing Means Chapter 4 Comparing Means!1 /34 Homework p579, 5, 7, 8, 10, 11, 17, 31, 3! /34 !3 /34 Objective Students test null and alternate hypothesis about two!4 /34 Plot the Data The intuitive display for comparing

More information

Social Science Counterfactuals. Julian Reiss, Durham University

Social Science Counterfactuals. Julian Reiss, Durham University Social Science Counterfactuals Julian Reiss, Durham University Social Science Counterfactuals Julian Reiss, Durham University Counterfactuals in Social Science Stand-ins for causal claims about singular

More information

Midterm 2 V1. Introduction to Artificial Intelligence. CS 188 Spring 2015

Midterm 2 V1. Introduction to Artificial Intelligence. CS 188 Spring 2015 S 88 Spring 205 Introduction to rtificial Intelligence Midterm 2 V ˆ You have approximately 2 hours and 50 minutes. ˆ The exam is closed book, closed calculator, and closed notes except your one-page crib

More information

31. TRANSFORMING TOOL #2 (the Multiplication Property of Equality)

31. TRANSFORMING TOOL #2 (the Multiplication Property of Equality) 3 TRANSFORMING TOOL # (the Multiplication Property of Equality) a second transforming tool THEOREM Multiplication Property of Equality In the previous section, we learned that adding/subtracting the same

More information

Probabilistic Models. Models describe how (a portion of) the world works

Probabilistic Models. Models describe how (a portion of) the world works Probabilistic Models Models describe how (a portion of) the world works Models are always simplifications May not account for every variable May not account for all interactions between variables All models

More information

Essential Question: What is a complex number, and how can you add, subtract, and multiply complex numbers? Explore Exploring Operations Involving

Essential Question: What is a complex number, and how can you add, subtract, and multiply complex numbers? Explore Exploring Operations Involving Locker LESSON 3. Complex Numbers Name Class Date 3. Complex Numbers Common Core Math Standards The student is expected to: N-CN. Use the relation i = 1 and the commutative, associative, and distributive

More information

1 Computational problems

1 Computational problems 80240233: Computational Complexity Lecture 1 ITCS, Tsinghua Univesity, Fall 2007 9 October 2007 Instructor: Andrej Bogdanov Notes by: Andrej Bogdanov The aim of computational complexity theory is to study

More information

Indicative conditionals

Indicative conditionals Indicative conditionals PHIL 43916 November 14, 2012 1. Three types of conditionals... 1 2. Material conditionals... 1 3. Indicatives and possible worlds... 4 4. Conditionals and adverbs of quantification...

More information

Midterm II. Introduction to Artificial Intelligence. CS 188 Spring ˆ You have approximately 1 hour and 50 minutes.

Midterm II. Introduction to Artificial Intelligence. CS 188 Spring ˆ You have approximately 1 hour and 50 minutes. CS 188 Spring 2013 Introduction to Artificial Intelligence Midterm II ˆ You have approximately 1 hour and 50 minutes. ˆ The exam is closed book, closed notes except a one-page crib sheet. ˆ Please use

More information

THE SIMPLE PROOF OF GOLDBACH'S CONJECTURE. by Miles Mathis

THE SIMPLE PROOF OF GOLDBACH'S CONJECTURE. by Miles Mathis THE SIMPLE PROOF OF GOLDBACH'S CONJECTURE by Miles Mathis miles@mileswmathis.com Abstract Here I solve Goldbach's Conjecture by the simplest method possible. I do this by first calculating probabilites

More information

The exam is closed book, closed calculator, and closed notes except your one-page crib sheet.

The exam is closed book, closed calculator, and closed notes except your one-page crib sheet. CS 188 Fall 2015 Introduction to Artificial Intelligence Final You have approximately 2 hours and 50 minutes. The exam is closed book, closed calculator, and closed notes except your one-page crib sheet.

More information

Math 147 Lecture Notes: Lecture 12

Math 147 Lecture Notes: Lecture 12 Math 147 Lecture Notes: Lecture 12 Walter Carlip February, 2018 All generalizations are false, including this one.. Samuel Clemens (aka Mark Twain) (1835-1910) Figures don t lie, but liars do figure. Samuel

More information

Propositional Logic. Fall () Propositional Logic Fall / 30

Propositional Logic. Fall () Propositional Logic Fall / 30 Propositional Logic Fall 2013 () Propositional Logic Fall 2013 1 / 30 1 Introduction Learning Outcomes for this Presentation 2 Definitions Statements Logical connectives Interpretations, contexts,... Logically

More information

ACTIVITY 5: Changing Force-Strength and Mass

ACTIVITY 5: Changing Force-Strength and Mass UNIT FM Developing Ideas ACTIVITY 5: Changing Force-Strength and Mass Purpose In the previous activities of this unit you have seen that during a contact push/pull interaction, when a single force acts

More information

Solving Equations. Lesson Fifteen. Aims. Context. The aim of this lesson is to enable you to: solve linear equations

Solving Equations. Lesson Fifteen. Aims. Context. The aim of this lesson is to enable you to: solve linear equations Mathematics GCSE Module Four: Basic Algebra Lesson Fifteen Aims The aim of this lesson is to enable you to: solve linear equations solve linear equations from their graph solve simultaneous equations from

More information

Introduction to Metalogic

Introduction to Metalogic Philosophy 135 Spring 2008 Tony Martin Introduction to Metalogic 1 The semantics of sentential logic. The language L of sentential logic. Symbols of L: Remarks: (i) sentence letters p 0, p 1, p 2,... (ii)

More information

Causal Explanation and Fact Mutability in Counterfactual Reasoning

Causal Explanation and Fact Mutability in Counterfactual Reasoning Causal Explanation and Fact Mutability in Counterfactual Reasoning MORTEZA DEHGHANI, RUMEN ILIEV AND STEFAN KAUFMANN Abstract: Recent work on the interpretation of counterfactual conditionals has paid

More information

CHAPTER 3. THE IMPERFECT CUMULATIVE SCALE

CHAPTER 3. THE IMPERFECT CUMULATIVE SCALE CHAPTER 3. THE IMPERFECT CUMULATIVE SCALE 3.1 Model Violations If a set of items does not form a perfect Guttman scale but contains a few wrong responses, we do not necessarily need to discard it. A wrong

More information

CS 188: Artificial Intelligence Fall 2008

CS 188: Artificial Intelligence Fall 2008 CS 188: Artificial Intelligence Fall 2008 Lecture 14: Bayes Nets 10/14/2008 Dan Klein UC Berkeley 1 1 Announcements Midterm 10/21! One page note sheet Review sessions Friday and Sunday (similar) OHs on

More information

Philosophy 5340 Epistemology. Topic 3: Analysis, Analytically Basic Concepts, Direct Acquaintance, and Theoretical Terms. Part 2: Theoretical Terms

Philosophy 5340 Epistemology. Topic 3: Analysis, Analytically Basic Concepts, Direct Acquaintance, and Theoretical Terms. Part 2: Theoretical Terms Philosophy 5340 Epistemology Topic 3: Analysis, Analytically Basic Concepts, Direct Acquaintance, and Theoretical Terms Part 2: Theoretical Terms 1. What Apparatus Is Available for Carrying out Analyses?

More information

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI Introduction of Data Analytics Prof. Nandan Sudarsanam and Prof. B Ravindran Department of Management Studies and Department of Computer Science and Engineering Indian Institute of Technology, Madras Module

More information

A Brief Introduction to Proofs

A Brief Introduction to Proofs A Brief Introduction to Proofs William J. Turner October, 010 1 Introduction Proofs are perhaps the very heart of mathematics. Unlike the other sciences, mathematics adds a final step to the familiar scientific

More information

Midterm II. Introduction to Artificial Intelligence. CS 188 Spring ˆ You have approximately 1 hour and 50 minutes.

Midterm II. Introduction to Artificial Intelligence. CS 188 Spring ˆ You have approximately 1 hour and 50 minutes. CS 188 Spring 2013 Introduction to Artificial Intelligence Midterm II ˆ You have approximately 1 hour and 50 minutes. ˆ The exam is closed book, closed notes except a one-page crib sheet. ˆ Please use

More information

Class Note #20. In today s class, the following four concepts were introduced: decision

Class Note #20. In today s class, the following four concepts were introduced: decision Class Note #20 Date: 03/29/2006 [Overall Information] In today s class, the following four concepts were introduced: decision version of a problem, formal language, P and NP. We also discussed the relationship

More information

Recall from last time: Conditional probabilities. Lecture 2: Belief (Bayesian) networks. Bayes ball. Example (continued) Example: Inference problem

Recall from last time: Conditional probabilities. Lecture 2: Belief (Bayesian) networks. Bayes ball. Example (continued) Example: Inference problem Recall from last time: Conditional probabilities Our probabilistic models will compute and manipulate conditional probabilities. Given two random variables X, Y, we denote by Lecture 2: Belief (Bayesian)

More information

Using Algebra Fact Families to Solve Equations

Using Algebra Fact Families to Solve Equations Using Algebra Fact Families to Solve Equations Martin V. Bonsangue, California State University, Fullerton Gerald E. Gannon, California State University, Fullerton

More information

Commentary on Guarini

Commentary on Guarini University of Windsor Scholarship at UWindsor OSSA Conference Archive OSSA 5 May 14th, 9:00 AM - May 17th, 5:00 PM Commentary on Guarini Andrew Bailey Follow this and additional works at: http://scholar.uwindsor.ca/ossaarchive

More information

8. TRANSFORMING TOOL #1 (the Addition Property of Equality)

8. TRANSFORMING TOOL #1 (the Addition Property of Equality) 8 TRANSFORMING TOOL #1 (the Addition Property of Equality) sentences that look different, but always have the same truth values What can you DO to a sentence that will make it LOOK different, but not change

More information

Chapter 14: Finding the Equilibrium Solution and Exploring the Nature of the Equilibration Process

Chapter 14: Finding the Equilibrium Solution and Exploring the Nature of the Equilibration Process Chapter 14: Finding the Equilibrium Solution and Exploring the Nature of the Equilibration Process Taking Stock: In the last chapter, we learned that equilibrium problems have an interesting dimension

More information

27. THESE SENTENCES CERTAINLY LOOK DIFFERENT

27. THESE SENTENCES CERTAINLY LOOK DIFFERENT 27 HESE SENENCES CERAINLY LOOK DIEREN comparing expressions versus comparing sentences a motivating example: sentences that LOOK different; but, in a very important way, are the same Whereas the = sign

More information

An Introduction to Mplus and Path Analysis

An Introduction to Mplus and Path Analysis An Introduction to Mplus and Path Analysis PSYC 943: Fundamentals of Multivariate Modeling Lecture 10: October 30, 2013 PSYC 943: Lecture 10 Today s Lecture Path analysis starting with multivariate regression

More information

Scientific Explanation- Causation and Unification

Scientific Explanation- Causation and Unification Scientific Explanation- Causation and Unification By Wesley Salmon Analysis by Margarita Georgieva, PSTS student, number 0102458 Van Lochemstraat 9-17 7511 EG Enschede Final Paper for Philosophy of Science

More information

12. Vagueness, Uncertainty and Degrees of Belief

12. Vagueness, Uncertainty and Degrees of Belief 12. Vagueness, Uncertainty and Degrees of Belief KR & R Brachman & Levesque 2005 202 Noncategorical statements Ordinary commonsense knowledge quickly moves away from categorical statements like a P is

More information

Basic Thermodynamics. Prof. S. K. Som. Department of Mechanical Engineering. Indian Institute of Technology, Kharagpur.

Basic Thermodynamics. Prof. S. K. Som. Department of Mechanical Engineering. Indian Institute of Technology, Kharagpur. Basic Thermodynamics Prof. S. K. Som Department of Mechanical Engineering Indian Institute of Technology, Kharagpur Lecture - 06 Second Law and its Corollaries I Good afternoon, I welcome you all to this

More information

ANALYTIC COMPARISON. Pearl and Rubin CAUSAL FRAMEWORKS

ANALYTIC COMPARISON. Pearl and Rubin CAUSAL FRAMEWORKS ANALYTIC COMPARISON of Pearl and Rubin CAUSAL FRAMEWORKS Content Page Part I. General Considerations Chapter 1. What is the question? 16 Introduction 16 1. Randomization 17 1.1 An Example of Randomization

More information

In Newcomb s problem, an agent is faced with a choice between acts that

In Newcomb s problem, an agent is faced with a choice between acts that Aporia vol. 23 no. 2 2013 Counterfactuals and Causal Decision Theory Kevin Dorst In Newcomb s problem, an agent is faced with a choice between acts that are highly correlated with certain outcomes, but

More information

Lecture Notes on Inductive Definitions

Lecture Notes on Inductive Definitions Lecture Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 September 2, 2004 These supplementary notes review the notion of an inductive definition and

More information

Incompatibility Paradoxes

Incompatibility Paradoxes Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of

More information

Path Analysis. PRE 906: Structural Equation Modeling Lecture #5 February 18, PRE 906, SEM: Lecture 5 - Path Analysis

Path Analysis. PRE 906: Structural Equation Modeling Lecture #5 February 18, PRE 906, SEM: Lecture 5 - Path Analysis Path Analysis PRE 906: Structural Equation Modeling Lecture #5 February 18, 2015 PRE 906, SEM: Lecture 5 - Path Analysis Key Questions for Today s Lecture What distinguishes path models from multivariate

More information

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1 Regression diagnostics As is true of all statistical methodologies, linear regression analysis can be a very effective way to model data, as along as the assumptions being made are true. For the regression

More information

Proving Completeness for Nested Sequent Calculi 1

Proving Completeness for Nested Sequent Calculi 1 Proving Completeness for Nested Sequent Calculi 1 Melvin Fitting abstract. Proving the completeness of classical propositional logic by using maximal consistent sets is perhaps the most common method there

More information

9. TRANSFORMING TOOL #2 (the Multiplication Property of Equality)

9. TRANSFORMING TOOL #2 (the Multiplication Property of Equality) 9 TRANSFORMING TOOL # (the Multiplication Property of Equality) a second transforming tool THEOREM Multiplication Property of Equality In the previous section, we learned that adding/subtracting the same

More information

1.1 The Language of Mathematics Expressions versus Sentences

1.1 The Language of Mathematics Expressions versus Sentences The Language of Mathematics Expressions versus Sentences a hypothetical situation the importance of language Study Strategies for Students of Mathematics characteristics of the language of mathematics

More information

Chapter 1 Statistical Inference

Chapter 1 Statistical Inference Chapter 1 Statistical Inference causal inference To infer causality, you need a randomized experiment (or a huge observational study and lots of outside information). inference to populations Generalizations

More information

Relevant Logic. Daniel Bonevac. March 20, 2013

Relevant Logic. Daniel Bonevac. March 20, 2013 March 20, 2013 The earliest attempts to devise a relevance logic that avoided the problem of explosion centered on the conditional. FDE, however, has no conditional operator, or a very weak one. If we

More information

The Causal Sampler: A Sampling Approach to Causal Representation, Reasoning and Learning

The Causal Sampler: A Sampling Approach to Causal Representation, Reasoning and Learning The Causal Sampler: A Sampling Approach to Causal Representation, Reasoning and Learning Zachary J. Davis (zach.davis@nyu.edu) Bob Rehder (bob.rehder@nyu.edu) Department of Psychology, New York University

More information

33. SOLVING LINEAR INEQUALITIES IN ONE VARIABLE

33. SOLVING LINEAR INEQUALITIES IN ONE VARIABLE get the complete book: http://wwwonemathematicalcatorg/getfulltextfullbookhtm 33 SOLVING LINEAR INEQUALITIES IN ONE VARIABLE linear inequalities in one variable DEFINITION linear inequality in one variable

More information

DIFFERENT APPROACHES TO STATISTICAL INFERENCE: HYPOTHESIS TESTING VERSUS BAYESIAN ANALYSIS

DIFFERENT APPROACHES TO STATISTICAL INFERENCE: HYPOTHESIS TESTING VERSUS BAYESIAN ANALYSIS DIFFERENT APPROACHES TO STATISTICAL INFERENCE: HYPOTHESIS TESTING VERSUS BAYESIAN ANALYSIS THUY ANH NGO 1. Introduction Statistics are easily come across in our daily life. Statements such as the average

More information

An Introduction to Path Analysis

An Introduction to Path Analysis An Introduction to Path Analysis PRE 905: Multivariate Analysis Lecture 10: April 15, 2014 PRE 905: Lecture 10 Path Analysis Today s Lecture Path analysis starting with multivariate regression then arriving

More information

Testing Hypotheses about Mechanical Devices

Testing Hypotheses about Mechanical Devices Testing Hypotheses about Mechanical Devices Aidan Feeney Department of Psychology University of Durham Science Laboratories South Road Durham DH1 3LE United Kingdom aidan.feeney@durham.ac.uk Simon J. Handley

More information

What Causality Is (stats for mathematicians)

What Causality Is (stats for mathematicians) What Causality Is (stats for mathematicians) Andrew Critch UC Berkeley August 31, 2011 Introduction Foreword: The value of examples With any hard question, it helps to start with simple, concrete versions

More information

Limiting Reactants An analogy and learning cycle approach

Limiting Reactants An analogy and learning cycle approach Limiting Reactants An analogy and learning cycle approach Introduction This lab builds on the previous one on conservation of mass by looking at a chemical reaction in which there is a limiting reactant.

More information

This is logically equivalent to the conjunction of the positive assertion Minimal Arithmetic and Representability

This is logically equivalent to the conjunction of the positive assertion Minimal Arithmetic and Representability 16.2. MINIMAL ARITHMETIC AND REPRESENTABILITY 207 If T is a consistent theory in the language of arithmetic, we say a set S is defined in T by D(x) if for all n, if n is in S, then D(n) is a theorem of

More information

INTRODUCTION TO ANALYSIS OF VARIANCE

INTRODUCTION TO ANALYSIS OF VARIANCE CHAPTER 22 INTRODUCTION TO ANALYSIS OF VARIANCE Chapter 18 on inferences about population means illustrated two hypothesis testing situations: for one population mean and for the difference between two

More information

P (E) = P (A 1 )P (A 2 )... P (A n ).

P (E) = P (A 1 )P (A 2 )... P (A n ). Lecture 9: Conditional probability II: breaking complex events into smaller events, methods to solve probability problems, Bayes rule, law of total probability, Bayes theorem Discrete Structures II (Summer

More information