Results 1 - 17 of 17
1.
Proc Natl Acad Sci U S A ; 120(40): e2310488120, 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37748054

ABSTRACT

Cognitive scientists treat verification as a computation in which descriptions that match the relevant situation are true, and otherwise false. The claim is controversial: the logician Gödel and the physicist Penrose have argued that human verifications are not computable. In contrast, the theory of mental models treats verification as computable, but the two truth values of standard logics, true and false, as insufficient. Three online experiments (n = 208) examined participants' verifications of disjunctive assertions about the location of an individual or a journey, such as: 'You arrived at Exeter or Perth'. The results showed that their verifications depended on observation of a match with one of the locations, but also on the status of other locations (Experiment 1). Likewise, when participants reached one destination and the alternative was impossible, their use of the truth value 'could be true and could be false' increased (Experiment 2). And, when they reached one destination and the only alternative was possible, they used the truth value 'true and it couldn't have been false', whereas when the alternative was impossible, they used the truth value 'true but it could have been false' (Experiment 3). These truth values, and those for falsity, embody counterfactuals. We implemented a computer program that constructs models of disjunctions, represents possible destinations, and verifies the disjunctions using the truth values in our experiments. Whether an awareness of a verification's outcome is computable remains an open question.
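The extended truth values lend themselves to a small computational sketch. The following is our own minimal illustration, not the authors' program: the function name, the destination 'Oban', and the encoding of 'possible destinations' as a set are all our assumptions.

```python
# A minimal sketch of verifying "You arrived at A or B" against an
# observed destination, using truth values richer than plain true/false.
# (Hypothetical interface; not the authors' implementation.)

def verify(disjuncts, arrived, possible):
    """disjuncts: locations named in the assertion;
    arrived: the destination actually reached;
    possible: every destination that was reachable (including 'arrived')."""
    matched = arrived in disjuncts
    # Could the assertion have come out the other way, given the alternatives?
    other_true = any(p in disjuncts for p in possible if p != arrived)
    other_false = any(p not in disjuncts for p in possible if p != arrived)
    if matched:
        return ("true but could have been false" if other_false
                else "true and couldn't have been false")
    return ("false but could have been true" if other_true
            else "false and couldn't have been true")

# Arrived at Exeter; the unmentioned Oban was also possible:
print(verify({"Exeter", "Perth"}, "Exeter", {"Exeter", "Oban"}))
```

On this sketch, the same observed match yields different truth values depending on whether any possible alternative destination falls outside the disjunction.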


Subject(s)
Physicians; Humans; Software
2.
Acta Psychol (Amst) ; 224: 103506, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35101737

ABSTRACT

Poetry evokes emotions. It does so, according to the theory we present, from three sorts of simulation. They each can prompt emotions, which are communications both within the brain and among people. First, models of a poem's semantic contents can evoke emotions as do models that occur in depictions of all kinds, from novels to perceptions. Second, mimetic simulations of prosodic cues, such as meter, rhythm, and rhyme, yield particular emotional states. Third, people's simulations of themselves enable them to know that they are engaged with a poem, and an aesthetic emotion can occur as a result. The three simulations predict certain sorts of emotion, e.g., prosodic cues can evoke basic emotions of happiness, sadness, anger, and anxiety. Empirical evidence corroborates the theory, which we relate to other accounts of poetic emotions.


Subject(s)
Emotions; Happiness; Anger; Anxiety; Humans; Semantics
3.
Cogn Sci ; 2018 Jul 02.
Article in English | MEDLINE | ID: mdl-29968343

ABSTRACT

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.

4.
Psychol Bull ; 144(8): 779-796, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29781626

ABSTRACT

How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, the choice is based on dual processes: individuals' intuitions, which depend on mental models of the hypothesis, yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, to their counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selections of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of the 228 experiments than the one remaining theory, which is based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation.
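Shannon's measure makes the redundancy claim concrete. The sketch below is our own illustration, not the paper's analysis code: the 16-pattern maximum (every subset of the four selection-task cards) and the example data are assumptions.

```python
# A hedged illustration of redundancy under Shannon's measure:
# redundancy = 1 - H/H_max, so dependent, clustered selection patterns
# score higher than independent, spread-out ones.
from collections import Counter
from math import log2

def redundancy(patterns):
    counts = Counter(patterns)
    n = len(patterns)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    h_max = log2(16)  # 16 possible subsets of the four Wason cards (assumed)
    return 1 - h / h_max

# Participants cluster on a few patterns (made-up data for illustration):
dependent = ["p,q"] * 10 + ["p"] * 8 + ["p,not-q"] * 2
print(round(redundancy(dependent), 2))
```

A sample in which everyone made the same selection would have entropy zero and hence redundancy 1; independent selections spread over many patterns drive redundancy toward 0.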


Subject(s)
Choice Behavior/physiology; Intuition/physiology; Problem Solving/physiology; Humans; Meta-Analysis as Topic; Models, Psychological; Research Design; Task Performance and Analysis
5.
Cogn Sci ; 41 Suppl 5: 1003-1030, 2017 May.
Article in English | MEDLINE | ID: mdl-28370159

ABSTRACT

The theory of mental models postulates that meaning and knowledge can modulate the interpretation of conditionals. The theory's computer implementation implied that certain conditionals should be true or false without the need for evidence. Three experiments corroborated this prediction. In Experiment 1, nearly 500 participants evaluated 24 conditionals as true or false, and they justified their judgments by completing sentences of the form, It is impossible that A and ___ appropriately. In Experiment 2, participants evaluated 16 conditionals and provided their own justifications, which tended to be explanations rather than logical justifications. In Experiment 3, the participants also evaluated as possible or impossible each of the four cases in the partitions of 16 conditionals: A and C, A and not-C, not-A and C, not-A and not-C. These evaluations corroborated the model theory. We consider the implications of these results for theories of reasoning based on logic, probabilistic logic, and suppositions.


Subject(s)
Judgment/physiology; Logic; Problem Solving/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Models, Psychological; Psychological Theory; Young Adult
6.
Cogn Sci ; 39(6): 1216-58, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25363706

ABSTRACT

We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
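The predicted violations of the joint probability distribution follow directly from primitive averaging. This is our own assumption-laden sketch of that single operation, not the authors' implementation of systems 1 and 2:

```python
# Why averaging the components' probabilities violates the probability
# calculus: the estimate for a conjunction can exceed the probability of
# one of its conjuncts, whereas the calculus requires P(A & B) <= min(P(A), P(B)).

def average_estimate(p_a, p_b):
    return (p_a + p_b) / 2  # system 1's primitive average (our simplification)

p_a, p_b = 0.9, 0.2
p_and = average_estimate(p_a, p_b)
print(p_and > min(p_a, p_b))  # the averaged conjunction overshoots its conjunct
```

The same averaging move undershoots for inclusive disjunctions, since the calculus requires P(A or B) >= max(P(A), P(B)).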


Subject(s)
Judgment/physiology; Models, Theoretical; Probability; Uncertainty; Bayes Theorem; Humans
7.
Front Hum Neurosci ; 8: 849, 2014.
Article in English | MEDLINE | ID: mdl-25389398

ABSTRACT

This paper outlines the model-based theory of causal reasoning. It postulates that the core meanings of causal assertions are deterministic and refer to temporally-ordered sets of possibilities: A causes B to occur means that given A, B occurs, whereas A enables B to occur means that given A, it is possible for B to occur. The paper shows how mental models represent such assertions, and how these models underlie deductive, inductive, and abductive reasoning yielding explanations. It reviews evidence both to corroborate the theory and to account for phenomena sometimes taken to be incompatible with it. Finally, it reviews neuroscience evidence indicating that mental models for causal inference are implemented within lateral prefrontal cortex.

8.
Proc Natl Acad Sci U S A ; 110(42): 16766-71, 2013 Oct 15.
Article in English | MEDLINE | ID: mdl-24082090

ABSTRACT

We present a theory, and its computer implementation, of how mental simulations underlie the abductions of informal algorithms and deductions from these algorithms. Three experiments tested the theory's predictions, using an environment of a single railway track and a siding. This environment is akin to a universal Turing machine, but it is simple enough for nonprogrammers to use. Participants solved problems that required use of the siding to rearrange the order of cars in a train (experiment 1). Participants abduced and described in their own words algorithms that solved such problems for trains of any length, and, as the use of simulation predicts, they favored "while-loops" over "for-loops" in their descriptions (experiment 2). Given descriptions of loops of procedures, participants deduced the consequences for given trains of six cars, doing so without access to the railway environment (experiment 3). As the theory predicts, difficulty in rearranging trains depends on the numbers of moves and cars to be moved, whereas in formulating an algorithm and deducing its consequences, it depends on the Kolmogorov complexity of the algorithm. Overall, the results corroborated the use of a kinematic mental model in creating and testing informal algorithms and showed that individuals differ reliably in the ability to carry out these tasks.
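Because the siding behaves as a stack, a short while-loop suffices for the kind of informal algorithm participants abduced. The sketch below is hypothetical: the move names and the reversal task are our assumptions, not the experiments' exact interface.

```python
# A sketch of the railway environment: the siding acts as a stack, so a
# while-loop reverses the order of a train's cars - the style of informal
# algorithm the participants favored. (Our encoding, not the paper's.)

def reverse_train(train):
    siding, out = [], []
    while train:           # push every car from the track onto the siding...
        siding.append(train.pop(0))
    while siding:          # ...then pop them back out in reverse order
        out.append(siding.pop())
    return out

print(reverse_train(["A", "B", "C", "D", "E", "F"]))
```

The number of moves grows with the number of cars, which is consistent with the finding that rearrangement difficulty depends on moves and cars, while describing the loop itself stays short.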


Subject(s)
Algorithms; Models, Neurological; Problem Solving/physiology; Adolescent; Adult; Biomechanical Phenomena; Female; Humans; Male
9.
Trends Cogn Sci ; 17(3): 128-33, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23428936

ABSTRACT

Boolean relations, such as and, or, and not, are a fundamental way to create new concepts out of old. Classic psychological studies showed that such concepts differed in how difficult they were to learn, but did not explain the source of these differences. Recent theories have reinvigorated the field with explanations ranging from the complexity of minimal descriptions of a concept to the relative invariance of its different instances. We review these theories and argue that the simplest explanation - the number of mental models required to represent a concept - provides a powerful account. However, no existing theory explains the process in full, such as how individuals spontaneously describe concepts.
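The model-counting explanation can be sketched under a simplifying assumption of ours: that a concept's mental models correspond to the possibilities in which it holds, so concepts allowing fewer possibilities should be easier to learn.

```python
# A rough sketch (our simplification, not a full mental-model account):
# count the possibilities a Boolean concept allows over its features.
from itertools import product

def possibilities(concept, n_features=2):
    return [v for v in product([True, False], repeat=n_features)
            if concept(*v)]

conjunction = lambda a, b: a and b      # "a and b"
disjunction = lambda a, b: a or b       # "a or b" (inclusive)
print(len(possibilities(conjunction)), len(possibilities(disjunction)))
```

On this count, a conjunction needs one model and an inclusive disjunction three, matching the classic finding that conjunctive concepts are easier to learn.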


Subject(s)
Concept Formation/physiology; Models, Psychological; Humans
10.
Mem Cognit ; 40(2): 266-79, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22002598

ABSTRACT

This article reports investigations of inferences that depend both on connectives between clauses, such as or else, and on relations between entities, such as in the same place as. Participants made more valid inferences from biconditionals--for instance, Ann is taller than Beth if and only if Beth is taller than Cath--than from exclusive disjunctions (Exp. 1). They made more valid transitive inferences from a biconditional when a categorical premise affirmed rather than denied one of its clauses, but they made more valid transitive inferences from an exclusive disjunction when a categorical premise denied rather than affirmed one of its clauses (Exp. 2). From exclusive disjunctions, such as either Ann is not in the same place as Beth or else Beth is not in the same place as Cath, individuals tended to infer that all three individuals could be in different places, whereas in fact this was impossible (Exps. 3a and 3b). The theory of mental models predicts all of these results.
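The illusion in Experiments 3a and 3b can be checked by brute force. The sketch below is ours, not the paper's materials: it enumerates assignments of three people to three places and tests the exclusive disjunction of the two negated relations.

```python
# A brute-force check of the illusory inference: under "either Ann is not
# in the same place as Beth or else Beth is not in the same place as Cath"
# (exclusive), can all three be in different places? (Our encoding.)
from itertools import product

def consistent(ann, beth, cath):
    d1 = ann != beth      # Ann is not in the same place as Beth
    d2 = beth != cath     # Beth is not in the same place as Cath
    return d1 != d2       # exclusive "or else": exactly one disjunct holds

all_different_possible = any(
    consistent(a, b, c) and len({a, b, c}) == 3
    for a, b, c in product(range(3), repeat=3)
)
print(all_different_possible)
```

If all three are in different places, both disjuncts are true at once, which the exclusive disjunction forbids; hence the inference participants tended to draw is in fact impossible.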


Subject(s)
Judgment/physiology; Logic; Problem Solving/physiology; Adult; Humans; Judgment/classification; Psychological Theory; Young Adult
11.
Q J Exp Psychol (Hove) ; 64(11): 2276-88, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21819280

ABSTRACT

How do reasoners deal with inconsistencies? James (1907) believed that the rational solution is to revise your beliefs and to do so in a minimal way. We propose an alternative: You explain the origins of an inconsistency, which has the side effect of a revision to your beliefs. This hypothesis predicts that individuals should spontaneously create explanations of inconsistencies rather than refute one of the assertions and that they should rate explanations as more probable than refutations. A pilot study showed that participants spontaneously explain inconsistencies when they are asked what follows from inconsistent premises. In three subsequent experiments, participants were asked to compare explanations of inconsistencies against minimal refutations of the inconsistent premises. In Experiment 1, participants chose which conclusion was most probable; in Experiment 2 they rank ordered the conclusions based on their probability; and in Experiment 3 they estimated the mean probability of the conclusions' occurrence. In all three studies, participants rated explanations as more probable than refutations. The results imply that individuals create explanations to resolve an inconsistency and that these explanations lead to changes in belief. Changes in belief are therefore of secondary importance to the primary goal of explanation.


Subject(s)
Concept Formation; Culture; Problem Solving/physiology; Decision Making; Humans; Pilot Projects; Probability; Thinking; Uncertainty
12.
Proc Natl Acad Sci U S A ; 107(43): 18243-50, 2010 Oct 26.
Article in English | MEDLINE | ID: mdl-20956326

ABSTRACT

To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point--a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences.


Subject(s)
Cognition; Models, Psychological; Brain/physiology; Cognition/physiology; Humans; Logic; Magnetic Resonance Imaging; Memory; Models, Neurological
13.
Q J Exp Psychol (Hove) ; 63(9): 1716-39, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20204920

ABSTRACT

The theory of mental models postulates that conditionals of the sort, if A then C, have a "core" meaning referring to three possibilities: A and C, not-A and C, and not-A and not-C. The meaning of a conditional's clauses and general knowledge can modulate this meaning, blocking certain possibilities or adding relations between the clauses. Four experiments investigated such interpretations in factual and deontic domains. In Experiment 1, the participants constructed instances of what was possible and what was impossible according to various conditionals. The results corroborated the general predictions of the model theory and also the occurrence of modulation. The resulting interpretations governed the conclusions that participants accepted in Experiment 2, which also yielded the predicted effects of a time limit on responding. In Experiment 3, the participants drew the predicted conclusions for themselves. In Experiment 4, modulation led to predicted temporal relations between A and C. We relate these results to current theories of conditionals.
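The "core" meaning is easy to make explicit: the possibilities for "if A then C" are every case except A and not-C. This enumeration is our own illustration of that claim:

```python
# The three possibilities of the "core" conditional "if A then C":
# everything except the case A and not-C. (Our sketch of the claim above.)
from itertools import product

core = [(a, c) for a, c in product([True, False], repeat=2)
        if not (a and not c)]
print(core)
```

Modulation, on this account, removes possibilities from this set or adds relations between the clauses; the experiments probe which possibilities survive a given content.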


Subject(s)
Comprehension/physiology; Models, Psychological; Problem Solving/physiology; Psychological Theory; Adolescent; Female; Humans; Logic; Male; Predictive Value of Tests; Statistics, Nonparametric; Young Adult
14.
Q J Exp Psychol (Hove) ; 63(3): 499-515, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19591080

ABSTRACT

This paper summarizes the theory of simple cumulative risks - for example, the risk of food poisoning from the consumption of a series of portions of tainted food. Problems concerning such risks are extraordinarily difficult for naïve individuals, and the paper explains the reasons for this difficulty. It describes how naïve individuals usually attempt to estimate cumulative risks, and it outlines a computer program that models these methods. This account predicts that estimates can be improved if problems of cumulative risk are framed so that individuals can focus on the appropriate subset of cases. The paper reports two experiments that corroborated this prediction. They also showed that whether problems are stated in terms of frequencies (80 out of 100 people got food poisoning) or in terms of percentages (80% of people got food poisoning) did not reliably affect accuracy.
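The normative answer that naïve estimators struggle to reach is a one-line formula. This worked sketch uses the standard probability calculus for independent portions; it is our illustration, not the program described in the paper:

```python
# Cumulative risk over a series of independent exposures: the chance of
# at least one case of food poisoning over n portions, each with risk p,
# is 1 - (1 - p)**n. (Standard calculus, assuming independence.)

def cumulative_risk(p, n):
    return 1 - (1 - p) ** n

# e.g. a 10% risk per portion, eaten three times:
print(round(cumulative_risk(0.10, 3), 3))
```

Framing the problem around the complementary subset - the cases in which no portion causes poisoning - is what makes the (1 - p)**n term easy to focus on.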


Subject(s)
Comprehension; Judgment; Risk Assessment; Risk; Disease Outbreaks/statistics & numerical data; Humans; Models, Statistical; Risk Assessment/statistics & numerical data; Statistics, Nonparametric; Students; Universities
15.
Brain Res ; 1243: 86-103, 2008 Dec 03.
Article in English | MEDLINE | ID: mdl-18760263

ABSTRACT

In an effort to clarify how deductive reasoning is accomplished, an fMRI study was performed to observe the neural substrates of logical reasoning and mathematical calculation. Participants viewed a problem statement and three premises, and then either a conclusion or a mathematical formula. They had to indicate whether the conclusion followed from the premises, or to solve the mathematical formula. Language areas of the brain (Broca's and Wernicke's area) responded as the premises and the conclusion were read, but solution of the problems was then carried out by non-language areas. Regions in right prefrontal cortex and inferior parietal lobe were more active for reasoning than for calculation, whereas regions in left prefrontal cortex and superior parietal lobe were more active for calculation than for reasoning. In reasoning, only those problems calling for a search for counterexamples to conclusions recruited right frontal pole. These results have important implications for understanding how higher cognition, including deduction, is implemented in the brain. Different sorts of thinking recruit separate neural substrates, and logical reasoning goes beyond linguistic regions of the brain.


Subject(s)
Cerebral Cortex/anatomy & histology; Cerebral Cortex/physiology; Cognition/physiology; Mathematics; Thinking/physiology; Brain Mapping; Dominance, Cerebral/physiology; Frontal Lobe/anatomy & histology; Frontal Lobe/physiology; Functional Laterality/physiology; Humans; Magnetic Resonance Imaging; Mental Processes/physiology; Nerve Net/anatomy & histology; Nerve Net/physiology; Neuropsychological Tests; Parietal Lobe/anatomy & histology; Parietal Lobe/physiology; Prefrontal Cortex/anatomy & histology; Prefrontal Cortex/physiology; Speech Perception/physiology; Temporal Lobe/anatomy & histology; Temporal Lobe/physiology
16.
Span J Psychol ; 5(2): 125-40, 2002 Nov.
Article in English | MEDLINE | ID: mdl-12428479

ABSTRACT

We report research investigating the role of mental models in deduction. The first study deals with conjunctive inferences (from one conjunction and two conditional premises) and disjunctive inferences (from one disjunction and the same two conditionals). The second study examines reasoning from multiple conditionals such as: If e then b; If a then b; If b then c; What follows between a and c? The third study addresses reasoning from different sorts of conditional assertions, including conditionals based on if then, only if, and unless. The paper also presents research on figural effects in syllogistic reasoning, on the effects of structure and believability in reasoning from double conditionals, and on reasoning from factual, counterfactual, and semifactual conditionals. The findings of these studies support the model theory, pose some difficulties for rule theories, and show the influence on reasoning of the linguistic structure and the semantic content of problems.


Subject(s)
Cognition; Logic; Problem Solving; Humans; Mental Processes; Models, Psychological; Psychological Theory
17.
Trends Cogn Sci ; 5(10): 434-442, 2001 Oct 01.
Article in English | MEDLINE | ID: mdl-11707382

ABSTRACT

According to the mental-model theory of deductive reasoning, reasoners use the meanings of assertions together with general knowledge to construct mental models of the possibilities compatible with the premises. Each model represents what is true in a possibility. A conclusion is held to be valid if it holds in all the models of the premises. Recent evidence described here shows that the fewer models an inference calls for, the easier the inference is. Errors arise because reasoners fail to consider all possible models, and because models do not normally represent what is false, even though reasoners can construct counterexamples to refute invalid conclusions.
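The validity criterion - a conclusion is valid if it holds in every possibility consistent with the premises - can be stated in a few lines. This is our own schematic illustration, not a mental-model simulator; in particular it enumerates full truth-table possibilities, whereas mental models normally leave out what is false.

```python
# Validity as truth in all models of the premises; a possibility in which
# the conclusion fails is a counterexample. (Our schematic sketch.)
from itertools import product

def models(premises, variables=2):
    return [v for v in product([True, False], repeat=variables)
            if all(p(*v) for p in premises)]

# Premises: "a or b" and "not a"; conclusion: "b".
premises = [lambda a, b: a or b, lambda a, b: not a]
conclusion = lambda a, b: b
counterexamples = [m for m in models(premises) if not conclusion(*m)]
print(counterexamples == [])  # valid: no counterexample exists
```

An inference requiring one model, as here, is predicted to be easier than one whose premises leave several models to hold in mind.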
