1.
Front Robot AI; 11: 1440631, 2024.
Article in English | MEDLINE | ID: mdl-39206060

ABSTRACT

This paper presents an interdisciplinary framework, Machine Psychology, which integrates principles from operant learning psychology with a particular Artificial Intelligence model, the Non-Axiomatic Reasoning System (NARS), to advance Artificial General Intelligence (AGI) research. Central to this framework is the assumption that adaptation is fundamental to both biological and artificial intelligence, and can be understood using operant conditioning principles. The study evaluates this approach through three operant learning tasks using OpenNARS for Applications (ONA): simple discrimination, changing contingencies, and conditional discrimination tasks. In the simple discrimination task, NARS demonstrated rapid learning, achieving 100% correct responses during training and testing phases. The changing contingencies task illustrated NARS's adaptability, as it successfully adjusted its behavior when task conditions were reversed. In the conditional discrimination task, NARS managed complex learning scenarios, achieving high accuracy by forming and utilizing complex hypotheses based on conditional cues. These results validate the use of operant conditioning as a framework for developing adaptive AGI systems. NARS's ability to function under conditions of insufficient knowledge and resources, combined with its sensorimotor reasoning capabilities, positions it as a robust model for AGI. The Machine Psychology framework, by implementing aspects of natural intelligence such as continuous learning and goal-driven behavior, provides a scalable and flexible approach for real-world applications. Future research should explore enhanced NARS systems and apply this framework to more diverse and complex tasks to further advance the development of human-level AI.
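
The simple-discrimination and changing-contingencies (reversal) procedures the abstract describes are standard in behavior analysis. As a minimal illustrative sketch only, and not the paper's NARS/ONA implementation, the contingency structure of these two tasks can be simulated with a tabular reinforcement learner; the cue and response names here are hypothetical:

```python
import random
from collections import defaultdict

# Minimal tabular learner illustrating the simple-discrimination and
# changing-contingencies (reversal) procedures described in the abstract.
# Illustrative sketch only; NOT the paper's NARS/ONA implementation.

CUES = ["light_A", "light_B"]            # hypothetical discriminative stimuli
RESPONSES = ["press_left", "press_right"]  # hypothetical operant responses

def run_phase(q, contingency, trials=200, epsilon=0.1, alpha=0.2):
    """Run one block of operant trials against a cue -> correct-response map."""
    correct = 0
    for _ in range(trials):
        cue = random.choice(CUES)
        # Epsilon-greedy response selection over learned response strengths.
        if random.random() < epsilon:
            resp = random.choice(RESPONSES)
        else:
            resp = max(RESPONSES, key=lambda r: q[(cue, r)])
        reward = 1.0 if resp == contingency[cue] else 0.0
        correct += reward
        # Reinforce (or weaken) the cue-response association.
        q[(cue, resp)] += alpha * (reward - q[(cue, resp)])
    return correct / trials

q = defaultdict(float)
acquisition = {"light_A": "press_left", "light_B": "press_right"}
reversal = {"light_A": "press_right", "light_B": "press_left"}

print("acquisition accuracy:", run_phase(q, acquisition))
print("post-reversal accuracy:", run_phase(q, reversal))
```

Under the reversal, the learner's accuracy first drops and then recovers as the old associations are overwritten, which is the adaptive behavior the abstract attributes to NARS in the changing-contingencies task.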

2.
Front Comput Neurosci; 18: 1367712, 2024.
Article in English | MEDLINE | ID: mdl-38984056

ABSTRACT

The Causal Cognitive Architecture is a brain-inspired cognitive architecture developed from the hypothesis that the navigation circuits in the ancestors of mammals duplicated to eventually form the neocortex. Thus, millions of neocortical minicolumns are functionally modeled in the architecture as millions of "navigation maps." An investigation of a cognitive architecture based on these navigation maps has previously shown that modest changes in the architecture allow the ready emergence of human cognitive abilities such as grounded, full causal decision-making, full analogical reasoning, and near-full compositional language abilities. In this study, additional biologically plausible modest changes to the architecture are considered and shown to produce the emergence of super-human planning abilities. The architecture should be considered a viable alternative pathway toward the development of more advanced artificial intelligence, as well as a source of insight into the emergence of natural human intelligence.
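
The paper's navigation-map mechanism is not reproduced here; purely as a hypothetical sketch of the general idea of storing structured feature grids and recalling the best associative match, one might model a map store as follows (the class, grid shape, and matching rule are all assumptions for illustration, not the Causal Cognitive Architecture's actual design):

```python
import numpy as np

# Hypothetical sketch of a "navigation map" store: each map is a small
# grid of feature vectors, and recall is by best associative match.
# Illustration of the general idea only, not the paper's mechanism.

class NavigationMapStore:
    def __init__(self, grid_shape=(6, 6), feature_dim=8):
        self.grid_shape = grid_shape
        self.feature_dim = feature_dim
        self.maps = []  # list of (label, grid) pairs

    def add_map(self, label, grid):
        assert grid.shape == (*self.grid_shape, self.feature_dim)
        self.maps.append((label, grid))

    def best_match(self, scene):
        """Return the label of the stored map closest to the input scene."""
        scores = [(label, -np.linalg.norm(grid - scene))
                  for label, grid in self.maps]
        return max(scores, key=lambda s: s[1])[0]

store = NavigationMapStore()
rng = np.random.default_rng(0)
kitchen = rng.normal(size=(6, 6, 8))
garden = rng.normal(size=(6, 6, 8))
store.add_map("kitchen", kitchen)
store.add_map("garden", garden)
# A noisy view of the kitchen should still retrieve the kitchen map.
print(store.best_match(kitchen + 0.1 * rng.normal(size=(6, 6, 8))))
```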

3.
Entropy (Basel); 25(10), 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37895550

ABSTRACT

Recent advancements in artificial intelligence (AI) technology have raised concerns about the adequacy of ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing the security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an "entropy lens" to root the study in information theory and to enhance transparency and trust in "black box" AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human-machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework's ability to measure trust in the design and management of AI systems.
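
The "entropy lens" invokes Shannon entropy from information theory. As a hedged illustration of the underlying quantity (the paper's own trust metrics are not reproduced here), one can compute the entropy of a model's output distribution as a simple proxy for its uncertainty, consistent with the abstract's claim that higher entropy tends to erode trust:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident classifier output (low entropy) vs. an uncertain one
# (high entropy). Reading higher output entropy as lower warranted
# trust is an illustrative interpretation, not the paper's exact metric.
confident = [0.95, 0.03, 0.02]
uncertain = [0.40, 0.35, 0.25]
print(f"confident: {shannon_entropy(confident):.3f} bits")  # ~0.335
print(f"uncertain: {shannon_entropy(uncertain):.3f} bits")  # ~1.559
```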

4.
Front Radiol; 3: 1224682, 2023.
Article in English | MEDLINE | ID: mdl-38464946

ABSTRACT

At the dawn of Artificial General Intelligence (AGI), the emergence of large language models such as ChatGPT shows promise in revolutionizing healthcare by improving patient care, expanding medical access, and optimizing clinical processes. However, their integration into healthcare systems requires careful consideration of potential risks, such as inaccurate medical advice, patient privacy violations, the creation of falsified documents or images, overreliance on AGI in medical education, and the perpetuation of biases. It is crucial to implement proper oversight and regulation to address these risks, ensuring the safe and effective incorporation of AGI technologies into healthcare systems. By acknowledging and mitigating these challenges, AGI can be harnessed to enhance patient care, medical knowledge, and healthcare processes, ultimately benefiting society as a whole.
