1.
Front Artif Intell ; 6: 1144569, 2023.
Article in English | MEDLINE | ID: mdl-38259824

ABSTRACT

Formal deductive logic, used to express and reason over declarative, axiomatizable content, captures, we now know, essentially all of what is known in mathematics and physics, and captures as well the details of the proofs by which such knowledge has been secured. This is certainly impressive, but deductive logic alone cannot enable rational adjudication of arguments that are at variance (however much additional information is added). After affirming a fundamental directive, according to which argumentation should be the basis for human-centric AI, we introduce and employ both a deductive and, crucially, an inductive cognitive calculus. The former, DCEC, is deductive and is used with our automated deductive reasoner ShadowProver; the latter, IDCEC, is inductive, is used with the automated inductive reasoner ShadowAdjudicator, and is based on human-used concepts of likelihood (and, in some dialects of IDCEC, probability). ShadowAdjudicator centers on the concept of competing, nuanced arguments adjudicated non-monotonically through time. We make this clearer and more concrete by way of three case studies in which our two automated reasoners are employed. Case Study 1 involves the famous Monty Hall Problem. Case Study 2 makes vivid the efficacy of our calculi and automated reasoners in simulations involving a cognitive robot (PERI.2). In Case Study 3, the simulation employs the cognitive architecture ARCADIA, which is designed to computationally model human-level cognition in ways that take perception and attention seriously. We also discuss a type of argument rarely analyzed in logic-based AI: arguments intended to persuade by leveraging human deficiencies. We end by sharing thoughts about the future of research and associated engineering of the type we have displayed.
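The payoff asymmetry that Case Study 1 formalizes can be checked directly by simulation. The Python sketch below is only a Monte Carlo illustration of the classical puzzle's probabilities, not the authors' DCEC/IDCEC adjudication of it; the function and variable names are ours.

import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Return the empirical win rate for the stay/switch strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that hides a goat and was not picked.
        opened = random.choice(
            [d for d in range(3) if d != pick and d != car])
        if switch:
            # Move to the one remaining closed door.
            pick = [d for d in range(3) if d not in (pick, opened)][0]
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # converges to ~0.333
print(f"switch: {play(switch=True):.3f}")    # converges to ~0.667

Staying wins only when the initial pick was the car (probability 1/3); switching wins in the other 2/3 of trials, which the simulation confirms.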

2.
Cogn Sci ; 44(12): e12898, 2020 12.
Article in English | MEDLINE | ID: mdl-33222259

ABSTRACT

Khemlani et al. (2018) mischaracterize logic in the course of seeking to show that mental model theory (MMT) can accommodate a form of inference (label it I) they find in a high percentage of their subjects. We reveal their mischaracterization and, in so doing, lay out a landscape for future modeling by cognitive scientists who may wonder whether human reasoning is consistent with, or perhaps even capturable by, reasoning in a logic or family thereof. Along the way, we note that the properties touted by Khemlani et al. as innovative aspects of MMT-based modeling (e.g., nonmonotonicity) have for decades been acknowledged and rigorously specified in logic by families of (implemented) logics. Khemlani et al. (2018) further declare that I is "invalid in any modal logic." We demonstrate this to be false by introducing (in Appendix A) a new propositional modal logic, within a family of such logics, in which I is provably valid, and by implementing this logic. A second appendix, B, partially answers the two-part question: what is a formal logic, and what is it for one to capture empirical phenomena?


Subject(s)
Logic; Models, Psychological; Humans; Problem Solving
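To make "provably valid in a propositional modal logic" concrete, here is a minimal, generic Kripke-model evaluator in Python. It illustrates only standard possible-worlds semantics; the specific logic of the paper's Appendix A, and the inference I itself, are defined there and are not reproduced here.

# Accessibility relation R: which worlds each world "sees".
R = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": set()}
# Valuation V: which atomic propositions are true at each world.
V = {"w0": {"p"}, "w1": {"p", "q"}, "w2": {"q"}}

def holds(w, f):
    """Evaluate formula f (nested tuples) at world w."""
    op = f[0]
    if op == "atom":
        return f[1] in V[w]
    if op == "not":
        return not holds(w, f[1])
    if op == "and":
        return holds(w, f[1]) and holds(w, f[2])
    if op == "box":   # necessarily: true in every accessible world
        return all(holds(v, f[1]) for v in R[w])
    if op == "dia":   # possibly: true in some accessible world
        return any(holds(v, f[1]) for v in R[w])
    raise ValueError(f"unknown operator: {op}")

# At w2 no world is accessible, so Box q holds vacuously but Dia q fails.
print(holds("w2", ("box", ("atom", "q"))))   # True
print(holds("w2", ("dia", ("atom", "q"))))   # False

Validity of a formula in such a logic means truth at every world of every model; different constraints on R yield the different members of a family of modal logics.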
3.
Top Cogn Sci ; 11(4): 914-917, 2019 10.
Article in English | MEDLINE | ID: mdl-31587501

ABSTRACT

Núñez et al.'s (2019) negative assessment of the field of cognitive science derives from evaluation criteria that fail to reflect the true nature of the field. In reality, the field is thriving on both the research and educational fronts, and it shows great promise for the future.


Subject(s)
Cognitive Science
5.
Front Hum Neurosci ; 8: 867, 2014.
Article in English | MEDLINE | ID: mdl-25414655

ABSTRACT

People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and the familiar problems in terms of which it may be understood. However, a notable difference between analogy-making and explanation-generation is that the former uses a single source situation to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to bringing our understanding of analogical reasoning to bear on explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.
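The proof the abstract cites as its second extreme is short enough to state. This is the standard textbook argument, given here in LaTeX only as an instance of an explanation that establishes a previously unknown fact; it is not output of the model described above.

\begin{proof}
Suppose toward a contradiction that $\sqrt{2} = p/q$ with integers
$p, q$ in lowest terms. Squaring gives $p^2 = 2q^2$, so $p^2$ is even,
hence $p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$,
so $q$ is also even. This contradicts $p/q$ being in lowest terms, so
$\sqrt{2}$ is irrational.
\end{proof}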
