Results 1 - 20 of 34

1.
Mem Cognit ; 52(1): 182-196, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37787932

ABSTRACT

People can think about hypothetical impossibilities, and a curious observation is that some impossible conditionals seem true and others do not. Four experiments test the proposal that people think about impossibilities just as they do possibilities, by attempting to construct a consistent simulation of the impossible conjecture with its suggested outcome, informed by their knowledge of the real world. The results show that participants judge some impossible conditionals true with one outcome, for example, "if people were made of steel, they would not bruise easily", and false with the opposite outcome, "if people were made of steel, they would bruise easily", and others false with either outcome, for example, "if houses were made of spaghetti, their engines would (not) be noisy". However, they can sometimes judge impossible conditionals true with either outcome, for example, "if Plato were identical to Socrates, he would (not) have a small nose", or "if sheep and wolves were alike, they would (not) eat grass". The results were observed for judgments about what could be true (Experiments 1 and 4), judgments of degrees of truth (Experiment 2), and judgments of what is true (Experiment 3). The results rule out the idea that people evaluate the truth of a hypothetical impossibility by comparing the probability of each conditional to the probability of its counterpart with the opposite outcome.


Subject(s)
Judgment; Steel; Male; Humans; Animals; Sheep; Probability
2.
Proc Natl Acad Sci U S A ; 120(40): e2310488120, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37748054

ABSTRACT

Cognitive scientists treat verification as a computation in which descriptions that match the relevant situation are true, but otherwise false. The claim is controversial: The logician Gödel and the physicist Penrose have argued that human verifications are not computable. In contrast, the theory of mental models treats verification as computable, but the two truth values of standard logics, true and false, as insufficient. Three online experiments (n = 208) examined participants' verifications of disjunctive assertions about a location of an individual or a journey, such as: 'You arrived at Exeter or Perth'. The results showed that their verifications depended not only on observation of a match with one of the locations but also on the status of other locations (Experiment 1). Likewise, when they reached one destination and the alternative one was impossible, their use of the truth value 'could be true and could be false' increased (Experiment 2). And when they reached one destination and the only alternative one was possible, they used the truth value 'true and it couldn't have been false', and when the alternative one was impossible, they used the truth value 'true but it could have been false' (Experiment 3). These truth values and those for falsity embody counterfactuals. We implemented a computer program that constructs models of disjunctions, represents possible destinations, and verifies the disjunctions using the truth values in our experiments. Whether an awareness of a verification's outcome is computable remains an open question.
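
The verification scheme described here invites a small computational illustration. The Python sketch below is a rough reconstruction, not the authors' program: it assigns one of the modal truth values from Experiment 3 by combining the destination reached with the destinations that could have been reached instead. The function name, the encoding of possibilities, and the labels for the false cases are assumptions made for illustration.

# Rough illustration (not the published implementation): verify "You arrived at
# Exeter or Perth" against the destination reached and the destinations that
# could have been reached instead, using truth values that embody counterfactuals.

def verify_disjunction(named, arrived_at, could_have_reached):
    """named: locations in the disjunction, e.g. {"Exeter", "Perth"}.
    arrived_at: the destination actually reached.
    could_have_reached: destinations that were possible instead."""
    actually_true = arrived_at in named
    # Truth of the disjunction in each counterfactual outcome of the journey.
    counterfactual_truths = {place in named for place in could_have_reached}
    if actually_true:
        if False not in counterfactual_truths:
            return "true and it couldn't have been false"
        return "true but it could have been false"
    if True in counterfactual_truths:
        return "false but it could have been true"
    return "false and it couldn't have been true"

# Reached Exeter; Perth was the only possible alternative (Experiment 3).
print(verify_disjunction({"Exeter", "Perth"}, "Exeter", {"Perth"}))
# Reached Exeter; Perth was impossible, only an unnamed town was reachable instead.
print(verify_disjunction({"Exeter", "Perth"}, "Exeter", {"Taunton"}))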


Subject(s)
Physicians; Humans; Software
3.
Mem Cognit ; 51(7): 1481-1496, 2023 10.
Article in English | MEDLINE | ID: mdl-36964302

ABSTRACT

Few empirical studies have examined how people understand counterfactual explanations for other people's decisions, for example, "if you had asked for a lower amount, your loan application would have been approved". Yet many current Artificial Intelligence (AI) decision support systems rely on counterfactual explanations to improve human understanding and trust. We compared counterfactual explanations to causal ones, i.e., "because you asked for a high amount, your loan application was not approved", for an AI's decisions in a familiar domain (alcohol and driving) and an unfamiliar one (chemical safety) in four experiments (n = 731). Participants were shown inputs to an AI system, its decisions, and an explanation for each decision; they attempted to predict the AI's decisions, or to make their own decisions. Participants judged counterfactual explanations more helpful than causal ones, but counterfactuals did not improve the accuracy of their predictions of the AI's decisions more than causals (Experiment 1). However, counterfactuals improved the accuracy of participants' own decisions more than causals (Experiment 2). When the AI's decisions were correct (Experiments 1 and 2), participants considered explanations more helpful and made more accurate judgments in the familiar domain than in the unfamiliar one; but when the AI's decisions were incorrect, they considered explanations less helpful and made fewer accurate judgments in the familiar domain than in the unfamiliar one, whether they predicted the AI's decisions (Experiment 3a) or made their own decisions (Experiment 3b). The results corroborate the proposal that counterfactuals provide richer information than causals, because their mental representation includes more possibilities.
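
To make the contrast between the two explanation styles concrete, here is a minimal Python sketch of a threshold-based loan decision with a causal and a counterfactual explanation for a refusal; the threshold, wording, and function names are invented for illustration and are not the materials used in these experiments.

# Toy illustration (not the study's materials): a threshold-based decision with a
# causal and a counterfactual explanation for a refusal.

LOAN_LIMIT = 20_000  # hypothetical approval threshold

def decide(amount):
    return "approved" if amount <= LOAN_LIMIT else "not approved"

def causal_explanation(amount):
    # Cites the factor as it actually was.
    return (f"Because you asked for {amount:,}, which is above {LOAN_LIMIT:,}, "
            "your loan application was not approved.")

def counterfactual_explanation(amount):
    # Cites an alternative possibility in which the outcome would have differed.
    return (f"If you had asked for {LOAN_LIMIT:,} or less instead of {amount:,}, "
            "your loan application would have been approved.")

requested = 25_000
print(decide(requested))
print(causal_explanation(requested))
print(counterfactual_explanation(requested))

On this toy account, the counterfactual explanation conveys both the factual case and an alternative in which the decision flips, which is one way to see why it carried more information for participants' own decisions.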


Subject(s)
Artificial Intelligence; Judgment; Humans; Ethanol; Trust
4.
Mem Cognit ; 50(5): 1103-1123, 2022 07.
Article in English | MEDLINE | ID: mdl-35532831

ABSTRACT

How do people come to consider a morally unacceptable action, such as "a passenger in an airplane does not want to sit next to a Muslim passenger and so he tells the stewardess the passenger must be moved to another seat", to be less unacceptable? We propose they tend to imagine counterfactual alternatives about how things could have been different that transform the unacceptable action to be less unacceptable. Five experiments identify the cognitive processes underlying this imaginative moral shift: an action is judged less unacceptable when people imagine circumstances in which it would have been moral. The effect occurs for immediate counterfactuals and reflective ones, but is greater when participants create an immediate counterfactual first, and diminished when they create a reflective one first. The effect also occurs for unreasonable actions. We discuss the implications for alternative theories of the mental representations and cognitive processes underlying moral judgments.


Subject(s)
Judgment; Morals; Humans; Imagination; Male
5.
J Exp Psychol Learn Mem Cogn ; 47(4): 547-570, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33090843

ABSTRACT

When people understand a counterfactual such as "if it had been a good year, there would have been roses," they simulate the imagined alternative to reality, for example, "there were roses," and the actual reality, as known or presupposed, for example, "there were no roses." Seven experiments examined how people keep track of the epistemic status of these possibilities, by priming participants to anticipate a story would continue about one or the other. When participants anticipated the story would continue about how the current reality related to the past presupposed reality, they read a target description about reality more rapidly than one about the imagined alternative, indicating they had prioritized access to their mental representation of reality; but when they anticipated the story would continue about how the current reality related to the imagined alternative to reality, they read a target description about the imagined alternative and one about reality equally rapidly, indicating they had maintained access to both (Experiment 1), unlike for stories with no counterfactuals (Experiments 2 and 3). The tendency is not invariant: it appears immune to remote experience (Experiments 4 and 5), but it is influenced by immediate experience (Experiments 6 and 7). The results have implications for theories of reality monitoring, reasoning, and imagination.


Subject(s)
Comprehension; Cues; Imagination; Adult; Humans; Young Adult
6.
Mem Cognit ; 48(7): 1263-1280, 2020 10.
Article in English | MEDLINE | ID: mdl-32495318

ABSTRACT

The mental model theory postulates that the meanings of conditionals are based on possibilities. Indicative conditionals, such as "If he is injured tomorrow, then he will take some leave," have a factual interpretation that can be paraphrased as: it is possible, and remains so, that he is injured tomorrow, and in that case certain that he takes some leave. Subjunctive conditionals, such as "If he were injured tomorrow, then he would take some leave," have a prefactual interpretation with the same paraphrase. But when context makes clear that his injury will not occur, the subjunctive has a counterfactual paraphrase, whose first clause is: it was once possible, but does not remain so, that he will be injured tomorrow. Three experiments corroborated these predictions for participants' selections of paraphrases in their native Spanish, for epistemic and deontic conditionals, for those referring to past and to future events, and for those with then-clauses referring to what may or must happen. These results are contrary to normal modal logics. They are also contrary to theories based on probabilities, which are inapplicable to deontic conditionals such as "If you have a ticket, then you must enter the show."
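
The predicted paraphrases can be captured in a few lines. The sketch below is an illustrative encoding, not the authors' materials or model: it returns the factual, prefactual, or counterfactual paraphrase depending on the conditional's mood and on whether context rules the injury out; the function and parameter names are assumptions.

# Illustrative sketch (not the authors' materials): which paraphrase the model
# theory predicts for "if he is/were injured tomorrow, he will/would take some leave".

def predicted_paraphrase(mood, injury_ruled_out_by_context):
    """mood: "indicative" or "subjunctive"."""
    if mood == "subjunctive" and injury_ruled_out_by_context:
        return ("counterfactual: it was once possible, but does not remain so, that he "
                "is injured tomorrow, and in that case certain that he takes some leave")
    return ("factual or prefactual: it is possible, and remains so, that he is injured "
            "tomorrow, and in that case certain that he takes some leave")

print(predicted_paraphrase("indicative", injury_ruled_out_by_context=False))
print(predicted_paraphrase("subjunctive", injury_ruled_out_by_context=False))
print(predicted_paraphrase("subjunctive", injury_ruled_out_by_context=True))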


Subject(s)
Logic; Models, Psychological; Humans; Male; Probability
7.
Cogn Sci ; 44(4): e12827, 2020 04.
Article in English | MEDLINE | ID: mdl-32291803

ABSTRACT

We examine two competing effects of beliefs on conditional inferences. The suppression effect occurs for conditionals, for example, "if she watered the plants they bloomed," when beliefs about additional background conditions, for example, "if the sun shone they bloomed," decrease the frequency of inferences such as modus tollens (from "the plants did not bloom" to "therefore she did not water them"). In contrast, the counterfactual elevation effect occurs for counterfactual conditionals, for example, "if she had watered the plants they would have bloomed," when beliefs about the known or presupposed facts, "she did not water the plants and they did not bloom," increase the frequency of inferences such as modus tollens. We report six experiments that show that beliefs about additional conditions take precedence over beliefs about presupposed facts for counterfactuals. The modus tollens inference is suppressed for counterfactuals that contain additional conditions (Experiments 1a and 1b). The denial of the antecedent inference (from "she did not water the plants" to "therefore they did not bloom") is suppressed for counterfactuals that contain alternatives (Experiments 2a and 2b). We report a new "switched-suppression" effect for conditionals with negated components, for example, "if she had not watered the plants they would not have bloomed": modus tollens is suppressed by alternatives and denial of the antecedent by additional conditions, rather than vice versa (Experiments 3a and 3b). We discuss the implications of the results for alternative theories of conditional reasoning.
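
The basic suppression mechanism can be illustrated by treating the outcome as requiring all of its salient background conditions. The sketch below is a simplified formalisation, not the authors' model, and it covers only the suppression of modus tollens by an additional condition, not the counterfactual elevation or switched-suppression effects; the encoding of worlds and the function names are assumptions.

# Minimal sketch (illustrative, not the authors' model) of how an additional
# background condition ("if the sun shone they bloomed") suppresses modus tollens.
from itertools import product

def consistent_worlds(requires_sun):
    """Worlds are (watered, sun, bloomed) triples. Blooming requires watering,
    and also sunshine when the additional condition is salient."""
    worlds = []
    for watered, sun, bloomed in product([True, False], repeat=3):
        blooms = watered and (sun or not requires_sun)
        if bloomed == blooms:
            worlds.append((watered, sun, bloomed))
    return worlds

def modus_tollens_holds(requires_sun):
    # Premise: the plants did not bloom. Conclusion: she did not water them.
    relevant = [w for w in consistent_worlds(requires_sun) if not w[2]]
    return all(not watered for watered, _, _ in relevant)

print(modus_tollens_holds(requires_sun=False))  # True: modus tollens follows
print(modus_tollens_holds(requires_sun=True))   # False: the inference is suppressed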


Subject(s)
Logic; Thinking; Adolescent; Female; Humans; Male; Young Adult
8.
J Exp Psychol Learn Mem Cogn ; 46(4): 760-780, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31647286

ABSTRACT

The theory of mental models postulates that conditionals and disjunctions refer to possibilities, real or counterfactual. Factual conditionals, for example, "If there's an apple, there's a pear," parallel counterfactual ones, for example, "If there had been an apple, there would have been a pear." A similar parallel underlies disjunctions. Individuals estimate the probabilities of conditionals by adjusting the probability of their then-clauses according to the effects of their if-clauses, and the probabilities of disjunctions by a rough average of the probabilities of their disjuncts. Hence, the theory predicts that estimates of the joint probabilities of these assertions with each of the four cases in their partitions will be grossly subadditive, summing to over 100%. Five experiments corroborated these predictions. Factual conditionals and disjunctions were judged true in the same cases as their counterfactual equivalents, and the sum of their joint probabilities with cases in the partition ranged from 240% to 270% (Experiments 1a, 1b). When participants were told these probabilities should not sum to more than 100%, estimates of the probability of A and C, as the model theory predicts, were higher for factual than counterfactual conditionals, whereas estimates of the probability of not-A and not-C had the opposite difference (Experiment 1c). Judgments of truth or falsity distinguished between conditionals that were certain and those that might have counterexamples (Experiment 2a), whereas judgments of the likelihood of truth reflected the probabilities of counterexamples (Experiment 2b). We discuss implications for alternative theories based on standard logic, suppositions, probabilistic logic, and causal Bayes networks.
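
A worked numeric example shows how the predicted subadditivity can arise; the figures below are invented for illustration and are not data from these experiments.

# Invented figures (not data from these experiments): if each joint estimate is
# anchored on the conditional "If there's an apple, there's a pear" rather than
# on a coherent division of 100% across the partition, the four estimates sum to
# far more than 100%.
estimates = {
    "apple and pear": 0.75,
    "apple and no pear": 0.40,
    "no apple and pear": 0.55,
    "no apple and no pear": 0.70,
}
print(round(sum(estimates.values()) * 100))  # 240, i.e. a grossly subadditive 240%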


Subject(s)
Logic; Models, Psychological; Probability; Thinking/physiology; Adult; Humans; Judgment/physiology
9.
Front Psychol ; 10: 1172, 2019.
Article in English | MEDLINE | ID: mdl-31258498

ABSTRACT

Three experiments tracked participants' eye-movements to examine the time course of comprehension of the dual meaning of counterfactuals, such as "if there had been oranges then there would have been pears." Participants listened to conditionals while looking at images in the visual world paradigm, including an image of oranges and pears that corresponds to the counterfactual's conjecture, and one of no oranges and no pears that corresponds to its presumed facts, to establish at what point in time they consider each one. The results revealed striking individual differences: some participants looked at the negative image and the affirmative one, and some only at the affirmative image. The first experiment showed that participants who looked at the negative image increased their fixation on it within half a second. The second experiment showed they do so even without explicit instructions, and the third showed they do so even for printed words.

10.
Cogn Sci ; 42(8): 2459-2501, 2018 11.
Article in English | MEDLINE | ID: mdl-30240030

ABSTRACT

When people understand a counterfactual such as "if the flowers had been roses, the trees would have been orange trees," they think about the conjecture, "there were roses and orange trees," and they also think about its opposite, the presupposed facts. We test whether people think about the opposite by representing alternates, for example, "poppies and apple trees," or whether models can contain symbols, for example, "no roses and no orange trees." We report the discovery of an inference-to-alternates effect: a tendency to make an affirmative inference that refers to an alternate even from a negative minor premise, for example, "there were no orange trees, therefore there were poppies." Nine experiments show the inference-to-alternates effect occurs in a binary context, but not a multiple context, and for direct and indirect reference; it can be induced and reduced by prior experience with similar inferences, and it also occurs for indicative conditionals. The results have implications for theories of counterfactual conditionals, and of negation.
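
The inference-to-alternates effect can be stated compactly: in a binary context the presupposed facts are represented by the alternates rather than by negations, so a negative minor premise licenses an affirmative conclusion about the alternate. The Python sketch below is an illustrative encoding, not the authors' model; the context, category names, and the fallback to negation are assumptions.

# Illustrative sketch of the inference-to-alternates effect (encoding assumed,
# not the authors' model). Counterfactual: "if the flowers had been roses, the
# trees would have been orange trees."

BINARY_CONTEXT = {"flowers": {"roses", "poppies"}, "trees": {"orange trees", "apple trees"}}

def presupposed_facts(context, conjecture):
    """In a binary context, represent the facts as the alternates to the
    conjecture; otherwise fall back on negations ("no roses", "no orange trees")."""
    facts = {}
    for category, value in conjecture.items():
        options = context.get(category, set()) - {value}
        facts[category] = options.pop() if len(options) == 1 else f"no {value}"
    return facts

conjecture = {"flowers": "roses", "trees": "orange trees"}
facts = presupposed_facts(BINARY_CONTEXT, conjecture)
print(facts)  # {'flowers': 'poppies', 'trees': 'apple trees'}

# Minor premise: "there were no orange trees" matches the facts model, so a
# reasoner who represents alternates concludes affirmatively about the flowers.
print("therefore, there were", facts["flowers"])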


Subject(s)
Cognition/physiology; Comprehension/physiology; Language; Adolescent; Female; Humans; Male; Young Adult
11.
Cogn Sci ; 2018 Jul 02.
Article in English | MEDLINE | ID: mdl-29968343

ABSTRACT

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.
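
The theory's central components, conjunctions of default possibilities, intuitive versus deliberative models, and defeasible updating, can be sketched briefly. The code below is an illustrative approximation, not the published implementation; the representation of possibilities and the function names are assumptions.

# Illustrative approximation (not the published program) of intuitive vs
# deliberative models of "If A then C", treated as a conjunction of default
# possibilities that new information can override.

def intuitive_models(antecedent, consequent):
    # System 1 represents the salient possibility explicitly and leaves the rest
    # implicit (shown here as an ellipsis).
    return [{antecedent, consequent}, "..."]

def deliberative_models(antecedent, consequent):
    # System 2 fleshes out the conjunction of default possibilities.
    return [{antecedent, consequent},
            {"not-" + antecedent, consequent},
            {"not-" + antecedent, "not-" + consequent}]

def update(models, fact):
    # Defeasibility: a new fact eliminates any explicit possibility inconsistent with it.
    contrary = fact[4:] if fact.startswith("not-") else "not-" + fact
    return [m for m in models if isinstance(m, set) and contrary not in m]

print(intuitive_models("rain", "flood"))
print(update(deliberative_models("rain", "flood"), "not-flood"))  # only the not-rain, not-flood possibility survives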

12.
Cognition ; 178: 82-91, 2018 09.
Article in English | MEDLINE | ID: mdl-29842988

ABSTRACT

Five experiments identify an asymmetric moral hindsight effect for judgments about whether a morally good action should have been taken, e.g., Ann should run into traffic to save Jill who fell before an oncoming truck. Judgments are increased when the outcome is good (Jill sustained minor bruises), as Experiment 1 shows; but they are not decreased when the outcome is bad (Jill sustained life-threatening injuries), as Experiment 2 shows. The hindsight effect is modified by imagined alternatives to the outcome: judgments are amplified by a counterfactual that if the good action had not been taken, the outcome would have been worse, and diminished by a semi-factual that if the good action had not been taken, the outcome would have been the same. Hindsight modification occurs when the alternative is presented with the outcome, and also when participants have already committed to a judgment based on the outcome, as Experiments 3A and 3B show. The hindsight effect occurs not only for judgments in life-and-death situations but also in other domains such as sports, as Experiment 4 shows. The results are consistent with a causal-inference explanation of moral judgment and go against an aversive-emotion one.


Subject(s)
Emotions; Imagination; Judgment; Morals; Adolescent; Adult; Aged; Decision Making; Female; Humans; Male; Middle Aged; Young Adult
13.
Q J Exp Psychol (Hove) ; 71(3): 779-789, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28059634

ABSTRACT

Two experiments examine whether people reason differently about intentional and accidental violations in the moral domains of harm and purity, by examining moral responsibility and wrongness judgments for violations that affect others or the self. The first experiment shows that intentional violations are judged to be worse than accidental ones, regardless of whether they are harm or purity violations (for example, Sam poisons his colleague versus Sam eats his dog), when participants judge how morally responsible Sam was for what he did, or how morally wrong what he did was. The second experiment shows that violations of others are judged to be worse than violations of the self, regardless of whether they are harm or purity violations, when their content and context are matched: for example, on a tropical holiday Sam orders poisonous starfruit for dinner for his friend, or for himself, versus Sam orders dog meat for dinner for his friend, or for himself. Moral reasoning is influenced by whether the violation was intentional or accidental, and by whether its target was the self or another person, rather than by the moral domain, such as harm or purity.


Subject(s)
Accidents/psychology; Intention; Judgment/physiology; Morals; Social Behavior; Adolescent; Adult; Analysis of Variance; Female; Humans; Male; Young Adult
14.
J Autism Dev Disord ; 47(6): 1806-1817, 2017 06.
Article in English | MEDLINE | ID: mdl-28342167

ABSTRACT

We examine false belief and counterfactual reasoning in children with autism with a new change-of-intentions task. Children listened to stories, for example, Anne is picking up toys and John hears her say she wants to find her ball. John goes away and the reason for Anne's action changes: Anne's mother tells her to tidy her bedroom. We asked, 'What will John believe is the reason that Anne is picking up toys?', which requires a false-belief inference, and 'If Anne's mother hadn't asked Anne to tidy her room, what would have been the reason she was picking up toys?', which requires a counterfactual inference. We tested children aged 6, 8 and 10 years. Children with autism made fewer correct inferences than typically developing children at 8 years, but by 10 years there was no difference. Children with autism made fewer correct false-belief than counterfactual inferences, just like typically developing children.


Subject(s)
Autistic Disorder/psychology; Culture; Intention; Acoustic Stimulation/methods; Autistic Disorder/diagnosis; Child; Child Development/physiology; Female; Humans; Male; Thinking/physiology
15.
Annu Rev Psychol ; 67: 135-57, 2016.
Article in English | MEDLINE | ID: mdl-26393873

ABSTRACT

People spontaneously create counterfactual alternatives to reality when they think "if only" or "what if" and imagine how the past could have been different. The mind computes counterfactuals for many reasons. Counterfactuals explain the past and prepare for the future, they implicate various relations including causal ones, and they affect intentions and decisions. They modulate emotions such as regret and relief, and they support moral judgments such as blame. The loss of the ability to imagine alternatives as a result of injuries to the prefrontal cortex is devastating. The basic cognitive processes that compute counterfactuals mutate aspects of the mental representation of reality to create an imagined alternative, and they compare alternative representations. The ability to create counterfactuals develops throughout childhood and contributes to reasoning about other people's beliefs, including their false beliefs. Knowledge affects the plausibility of a counterfactual through the semantic and pragmatic modulation of the mental representation of alternative possibilities.


Subject(s)
Decision Making/physiology; Emotions/physiology; Imagination/physiology; Judgment/physiology; Thinking/physiology; Humans; Intention; Morals
16.
J Exp Psychol Learn Mem Cogn ; 41(1): 55-76, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25019601

ABSTRACT

Four experiments tested the idea that people distinguish between biconditional, conditional, and enabling intention conditionals by thinking about counterexamples. The experiments examined intention conditionals that contain different types of reasons for actions, such as beliefs, goals, obligations, and social norms, based on a corpus of 48 intention conditionals established through an extensive materials test (n = 136). Experiment 1 (n = 19) showed that retrieved alternative reasons suppress the affirmation of the consequent and denial of the antecedent inferences from conditional intentions, whereas retrieved disabling reasons suppress the modus ponens and modus tollens inferences from enabling intentions. Experiment 2 (n = 61) showed that the suppression effects also occur for explicitly provided alternatives and disablers, for a large corpus of 80 intention conditionals. Experiment 3 (n = 60) showed that the suppression effects also occur for unfamiliar content, for which participants cannot rely on prior knowledge or beliefs about probabilities. Experiment 4 (n = 26) showed that participants retrieve alternatives and disablers readily for intentions just as they do for causal conditionals. The implications of the results for alternative accounts based on possibilities and probabilities are discussed.


Subject(s)
Intention; Motion Perception; Social Perception; Thinking; Adult; Female; Humans; Knowledge; Male; Probability; Psychological Tests; Theory of Mind; Young Adult
17.
Cogn Psychol ; 67(3): 98-129, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23968595

ABSTRACT

A new theory explains how people make hypothetical inferences from a premise consistent with several alternatives to a conclusion consistent with several alternatives. The key proposal is that people rely on a heuristic that identifies compatible possibilities. It is tested in 7 experiments that examine inferences between conditionals and disjunctions. Participants accepted inferences between conditionals and inclusive disjunctions when a compatible possibility was immediately available, in their binary judgments that a conclusion followed or not (Experiment 1a) and ternary judgments that included it was not possible to know (Experiment 1b). The compatibility effect was amplified when compatible possibilities were more readily available, e.g., for 'A only if B' conditionals (Experiment 2). It was eliminated when compatible possibilities were not available, e.g., for 'if and only if A B' bi-conditionals and exclusive disjunctions (Experiment 3). The compatibility heuristic occurs even for inferences based on implicit negation e.g., 'A or B, therefore if C D' (Experiment 4), and between universals 'All A's are B's' and disjunctions (Experiment 5a) and universals and conditionals (Experiment 5b). The implications of the results for alternative theories of the cognitive processes underlying hypothetical deductions are discussed.
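
The proposed heuristic can be pictured as a search for a possibility compatible with both premise and conclusion. The sketch below is an illustrative formalisation, not the authors' model; the initially available possibilities listed for each assertion form are simplified assumptions.

# Illustrative sketch (not the authors' model) of the compatibility heuristic:
# accept an inference if some immediately available possibility of the premise
# is compatible with one of the conclusion.

FORMS = {
    "if A then B":        [{"A", "B"}],                       # initially salient possibility
    "A only if B":        [{"A", "B"}, {"not-A", "not-B"}],
    "A or B (inclusive)": [{"A"}, {"B"}, {"A", "B"}],
    "A or else B":        [{"A", "not-B"}, {"not-A", "B"}],   # exclusive disjunction
}

def compatible(p, q):
    # Two possibilities are compatible if neither contains the negation of a
    # literal in the other.
    return not any(("not-" + lit in q) or (lit.startswith("not-") and lit[4:] in q)
                   for lit in p)

def heuristic_accepts(premise, conclusion):
    return any(compatible(p, q) for p in FORMS[premise] for q in FORMS[conclusion])

print(heuristic_accepts("if A then B", "A or B (inclusive)"))  # accepted (cf. Experiments 1a, 1b)
print(heuristic_accepts("A or else B", "if A then B"))         # not accepted (cf. Experiment 3)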


Subject(s)
Judgment; Concept Formation; Humans; Probability; Problem Solving; Thinking
18.
Acta Psychol (Amst) ; 141(1): 54-66, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22858874

ABSTRACT

Causal counterfactuals, e.g., 'if the ignition key had been turned then the car would have started', and causal conditionals, e.g., 'if the ignition key was turned then the car started', are understood by thinking about multiple possibilities of different sorts, as shown in six experiments using converging evidence from three different types of measures. Experiments 1a and 1b showed that conditionals that comprise enabling causes, e.g., 'if the ignition key was turned then the car started', primed people to read more quickly conjunctions referring to the possibility of the enabler occurring without the outcome, e.g., 'the ignition key was turned and the car did not start'. Experiments 2a and 2b showed that people paraphrased causal conditionals by using causal or temporal connectives (because, when), whereas they paraphrased causal counterfactuals by using subjunctive constructions (had…would have). Experiments 3a and 3b showed that people made different inferences from counterfactuals presented with enabling conditions than from those presented with none. The implications of the results for alternative theories of conditionals are discussed.


Subject(s)
Cognition; Comprehension; Problem Solving; Adolescent; Adult; Aged; Female; Humans; Male; Middle Aged; Models, Psychological; Reading
19.
Exp Psychol ; 59(4): 227-35, 2012.
Article in English | MEDLINE | ID: mdl-22580411

ABSTRACT

We examine how people understand and reason from counterfactual threats, for example, "if you had hit your sister, I would have grounded you" and counterfactual promises, for example, "if you had tidied your room, I would have given you ice-cream." The first experiment shows that people consider counterfactual threats, but not counterfactual promises, to have the illocutionary force of an inducement. They also make the immediate inference that the action mentioned in the "if" part of the counterfactual threat and promise did not occur. The second experiment shows that people make more negative inferences (modus tollens and denial of the antecedent) than affirmative inferences (modus ponens and affirmation of the consequent) from counterfactual threats and promises, unlike indicative threats and promises. We discuss the implications of the results for theories of the mental representations and cognitive processes that underlie conditional inducements.


Subject(s)
Cognition; Motivation; Problem Solving; Adolescent; Adult; Aged; Female; Humans; Male; Middle Aged
20.
Mem Cognit ; 40(5): 769-78, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22396128

ABSTRACT

In two experiments, we established a new phenomenon in reasoning from disjunctions of the grammatical form either A or else B, where A and B are clauses. When individuals have to assess whether pairs of assertions can be true at the same time, they tend to focus on the truth of each clause of an exclusive disjunction (and ignore the concurrent falsity of the other clause). Hence, they succumb to illusions of consistency and of inconsistency with pairs consisting of a disjunction and a conjunction (Experiment 1), and with simpler problems consisting of pairs of disjunctions, such as 'Either there is a pie or else there is a cake' and 'Either there isn't a pie or else there is a cake' (Experiment 2), which appear to be consistent with one another but in fact are not. These results corroborate the theory that reasoning depends on envisaging models of possibilities.
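
The illusion can be reproduced by contrasting a full evaluation of the two assertions with a check over mental models that represent only the clause that is true in each possibility. Everything below is an illustrative reconstruction, not the authors' program.

# Illustrative reconstruction (not the authors' program) of the illusion of
# consistency for: 'Either there is a pie or else there is a cake' and
# 'Either there isn't a pie or else there is a cake'.
from itertools import product

def exclusive_or(p, q):
    return p != q

def really_consistent():
    # Full evaluation over the four possible situations (pie?, cake?):
    # both assertions must hold in at least one of them.
    return any(exclusive_or(pie, cake) and exclusive_or(not pie, cake)
               for pie, cake in product([True, False], repeat=2))

def models_seem_consistent():
    # Mental models of each exclusive disjunction represent only the clause that
    # is true in each possibility, neglecting the concurrent falsity of the other
    # clause, so the two assertions appear to share the possibility of a cake.
    first = [frozenset({"pie"}), frozenset({"cake"})]      # either pie or else cake
    second = [frozenset({"no pie"}), frozenset({"cake"})]  # either no pie or else cake
    return any(p == q for p in first for q in second)

print(really_consistent())        # False: no situation satisfies both assertions
print(models_seem_consistent())   # True: the predicted illusion of consistency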


Subject(s)
Association Learning; Decision Making; Illusions; Logic; Problem Solving; Semantics; Humans; Probability Learning