Results 1 - 20 of 321
1.
Top Cogn Sci ; 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38852167

ABSTRACT

Teams are a fundamental aspect of life, from sports to business, to defense, to science, and to education. While the cognitive sciences tend to focus on information processing within individuals, others have argued that teams are also capable of demonstrating cognitive capacities similar to those of individuals, such as skill acquisition and forgetting (cf. Cooke, Gorman, Myers, & Duran, 2013; Fiore et al., 2010). As artificially intelligent and autonomous systems improve in their ability to learn, reason, interact, and coordinate with human teammates, and as teams are increasingly observed to express cognitive capacities typically seen in individuals, a cognitive science of teams is emerging. Consequently, new questions are being asked about teams regarding teamness, trust, the introduction of autonomous systems into teams and their effects, and how best to measure team behavior and phenomena. In this topic, four facets of human-autonomy team cognition are introduced, with leaders in the field providing in-depth articles associated with one or more of the facets: (1) defining teams; (2) how trust is established, maintained, and repaired when broken; (3) autonomous systems operating as teammates; and (4) metrics for evaluating team cognition across communication, coordination, and performance.

2.
Neuropsychologia ; 200: 108903, 2024 07 29.
Article in English | MEDLINE | ID: mdl-38750788

ABSTRACT

Cognitive neuroscience has considerable untapped potential to translate our understanding of brain function into applications that maintain, restore, or enhance human cognition. Complex, real-world phenomena encountered in daily life, in professional contexts, and in the arts can also be a rich source of information for better understanding cognition, which in turn can lead to advances in knowledge and health outcomes. Interdisciplinary work is needed for these bi-directional benefits to be realized. Our cognitive neuroscience team has been collaborating on several interdisciplinary projects: hardware and software development for brain stimulation, measuring human operator state in safety-critical robotics environments, and exploring emotional regulation in actors who perform traumatic narratives. Our approach is to study research questions of mutual interest in the context of domain-specific applications, using (and sometimes improving) the experimental tools and techniques of cognitive neuroscience. These interdisciplinary efforts are described here as case studies to illustrate the non-trivial challenges that arise when working across traditional disciplinary boundaries. We reflect on how obstacles to interdisciplinary work can be overcome, with the goals of enriching our understanding of human cognition and amplifying the positive effects cognitive neuroscientists have on society and innovation.


Subject(s)
Cognitive Neuroscience , Humans , Interdisciplinary Research , Brain/physiology , Cognition/physiology , Neurosciences
3.
J Med Philos ; 49(4): 354-366, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38815253

ABSTRACT

The moment when a person's actual relationships fall short of their desired relationships is commonly identified as the etiological moment of chronic loneliness, which can lead to physical and psychological effects such as depression, worse recovery from illness, and increased mortality. But this etiology fails to explain the nature and severe impact of loneliness. Here, we use philosophical analysis and neuroscience to show that human beings develop and maintain their world-picture (their sense of what is true, important, and good) through joint attention and action, motivated by friendship in the Aristotelian sense of "other selves" who share a sense of the true and the good, and who desire the good for each other as much as for themselves. The true etiological event of loneliness is the moment one's world-picture becomes unshared. The pathogenesis is the resulting decay of one's world-picture, with brain and behavior changes following as sequelae.


Subject(s)
Loneliness , Humans , Loneliness/psychology , Philosophy, Medical , Brain , Interpersonal Relations , Neurosciences , Depression
4.
J Exp Child Psychol ; 244: 105954, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38718680

ABSTRACT

A solid understanding of fractions is the cornerstone for acquiring proficiency with rational numbers and paves the way for learning advanced mathematical concepts such as algebra. Fraction difficulties limit not only students' educational and vocational opportunities but also their ability to solve everyday problems. Students who exit sixth grade with an inadequate understanding of fractions may experience far-reaching repercussions that lead to lifelong avoidance of mathematics. This article presents the results of a randomized controlled trial focusing on the first two cohorts of a larger efficacy investigation aimed at building fraction sense in students with mathematics difficulties. Teachers implemented an evidence-informed fraction sense intervention (FSI) within their sixth-grade intervention classrooms. The lessons draw on research in cognitive science as well as in mathematics education. Classrooms were randomly assigned to condition, and multilevel modeling revealed a significant effect of the intervention on posttest fraction scores after controlling for pretest fraction scores, working memory, vocabulary, proportional reasoning, and classroom attentive behavior. Students in the FSI group outperformed their counterparts in the control group, with noteworthy effect sizes on most fraction measures. Challenges associated with carrying out school-based intervention research are also addressed.
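To make the analytic design concrete, here is a minimal, hypothetical sketch of a classroom-randomized analysis of this kind: a multilevel (mixed-effects) model with a random intercept for classroom, estimating an intervention effect on posttest scores while controlling for pretest and a student-level covariate. The simulated data, variable names, and effect sizes are illustrative assumptions, not the study's dataset or code.

```python
# Multilevel model for a classroom-randomized trial (illustrative sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(20):                          # 20 classrooms
    condition = c % 2                        # 0 = control, 1 = FSI intervention
    class_effect = rng.normal(0, 2)          # classroom-level random intercept
    for _ in range(15):                      # 15 students per classroom
        pretest = rng.normal(50, 10)
        working_memory = rng.normal(0, 1)
        posttest = (10 + 0.6 * pretest + 5 * condition
                    + 2 * working_memory + class_effect + rng.normal(0, 5))
        rows.append(dict(classroom=c, condition=condition, pretest=pretest,
                         working_memory=working_memory, posttest=posttest))
df = pd.DataFrame(rows)

# Students are nested within classrooms, so classroom is the grouping factor.
model = smf.mixedlm("posttest ~ condition + pretest + working_memory",
                    data=df, groups=df["classroom"])
print(model.fit().summary())
```

In this sketch, the fixed-effect coefficient on `condition` plays the role of the intervention effect; a standardized effect size would be derived from it and the pooled outcome variability.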


Subject(s)
Mathematics , Schools , Humans , Male , Female , Child , Mathematics/education , Students/psychology , Problem Solving , Dyscalculia/psychology
5.
Sci Prog ; 107(2): 368504241245812, 2024.
Article in English | MEDLINE | ID: mdl-38614459

ABSTRACT

In our 2023 paper, entitled "Modeling interactions between the embodied and the narrative self: Dynamics of the self-pattern within LIDA," Kugele, Newen, Franklin, and I propose a functional description and implementation of a central element of Gallagher and Newen's pattern theory of self, which identifies an agent's self with a dynamic pattern of so-called cognitive aspects that govern the agent's thought and behavior (Gallagher, 2013; Newen, 2018; Gallagher & Daly, 2018). The pattern theory explicitly rejects the traditional conceptualization of the self as a unitary entity with certain properties that resides within agents; the idea of a pattern of aspects is central to its ability to account for the dynamic, yet relatively stable, development of most natural agents' selves. Implementing the pattern theory within the Learning Intelligent Distribution Agent (LIDA) architecture revealed that, in order for a cognitive architecture to account for both the dynamic and the stable nature of an agent's self-pattern, aspects of that pattern had to be realized by dispositions of the agent to think or act in certain ways. In this commentary, I argue that this fundamental role of dispositions extends to cognitive processes in general and that cognitive systems should be understood in terms of the dynamical interactions of dispositions over time. To facilitate such an understanding, dispositions will have to be identified with topologies of cognitive (sub)systems. I provide an example of such a topology by reference to informational topologies in neuronal systems.


Subject(s)
Cognition
7.
Med Teach ; : 1-2, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38460499

ABSTRACT

There is increasing pressure to accelerate health professions education programs, and educators face the challenge of ensuring that students can effectively transfer their learning into clinical practice. In this personal view, we discuss how insights from cognitive science can inform the redesign of current curricula and highlight the challenge of implementing these new approaches to instructional design and assessment. We also recommend that educators disseminate the important lessons learned from their endeavors.

8.
Cogn Sci ; 48(3): e13430, 2024 03.
Article in English | MEDLINE | ID: mdl-38500317

ABSTRACT

This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information-based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid-20th century, offered a visionary framework for integrating computational science, the social sciences, and the humanities, a framework that was nonetheless not fully realized at the time. The subsequent development of sociolinguistics and linguistic anthropology, especially since the 1970s, provided critical perspectives and empirical methods that both challenged and enriched this framework. This letter proposes that two pivotal concepts derived from this development, metapragmatic function and indexicality, offer a fruitful theoretical perspective for integrating the semantic and textual dimensions of communication with its pragmatic and contextual dimensions, an amalgamation that contemporary LLMs have yet to fully achieve. The author believes that contemporary cognitive science is at a crucial crossroads, where fostering interdisciplinary dialogue among computational linguistics, sociolinguistics and linguistic anthropology, and cognitive and social psychology is particularly imperative. Such collaboration is vital to bridge the computational, cognitive, and sociocultural aspects of human communication and human-AI interaction, especially in the era of large language and multimodal models and human-centric Artificial Intelligence (AI).


Subject(s)
Artificial Intelligence , Language , Humans , Linguistics , Communication , Semantics
9.
Diagnosis (Berl) ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38386866

ABSTRACT

Algorithms are a ubiquitous part of modern life. Yet although algorithms have been a component of medicine since the earliest efforts to deploy computers in the field, clinicians' uptake of decision support and of algorithms designed to counter cognitive biases has been limited. This resistance extends beyond algorithmic clinical decision support to the use of evidence and stochastic reasoning, and to the implications of the forcing function of the electronic medical record. Physicians' resistance to algorithmic support in clinical decision making stands in stark contrast to their general acceptance of algorithmic support in other aspects of life.

10.
JMIR Public Health Surveill ; 10: e47979, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38315620

ABSTRACT

BACKGROUND: Despite COVID-19 vaccine mandates, many people chose to forgo vaccination, raising questions about the psychology underlying how judgment affects these choices. Research shows that reward and aversion judgments are important for vaccination choice; however, no studies have integrated such cognitive science with machine learning to predict COVID-19 vaccine uptake.

OBJECTIVE: This study aims to determine the predictive power of a small but interpretable set of judgment variables using 3 machine learning algorithms to predict COVID-19 vaccine uptake and to interpret which profile of judgment variables was important for prediction.

METHODS: We surveyed 3476 adults across the United States in December 2021. Participants answered demographic, COVID-19 vaccine uptake (ie, whether participants were fully vaccinated), and COVID-19 precaution questions. Participants also completed a picture-rating task using images from the International Affective Picture System. Images were rated on a Likert-type scale to calibrate the degree of liking and disliking. Ratings were computationally modeled using relative preference theory to produce a set of graphs for each participant (minimum R2>0.8). In total, 15 judgment features were extracted from these graphs, 2 of which are analogous to risk and loss aversion from behavioral economics. These judgment variables, along with demographics, were compared between those who were fully vaccinated and those who were not. Three machine learning approaches (random forest, balanced random forest [BRF], and logistic regression) were used to test how well judgment, demographic, and COVID-19 precaution variables predicted vaccine uptake. Mediation and moderation analyses were conducted to assess the statistical mechanisms underlying successful prediction.

RESULTS: Age, income, marital status, employment status, ethnicity, educational level, and sex differed by vaccine uptake (Wilcoxon rank sum and chi-square P<.001). Most judgment variables also differed by vaccine uptake (Wilcoxon rank sum P<.05). A similar area under the receiver operating characteristic curve (AUROC) was achieved by the 3 machine learning frameworks, although random forest and logistic regression produced specificities between 30% and 38% (vs 74.2% for BRF), indicating lower performance in predicting unvaccinated participants. BRF achieved high precision (87.8%) and AUROC (79%) with moderate to high accuracy (70.8%) and balanced recall (69.6%) and specificity (74.2%). It should be noted that, for BRF, the negative predictive value was <50% despite good specificity. For BRF and random forest, 63% to 75% of the feature importance came from the 15 judgment variables. Furthermore, age, income, and educational level mediated relationships between judgment variables and vaccine uptake.

CONCLUSIONS: The findings demonstrate the underlying importance of judgment variables for vaccine choice and uptake, suggesting that vaccine education and messaging might target varying judgment profiles to improve uptake. These methods could also be used to aid vaccine rollouts and health care preparedness by providing location-specific details (eg, identifying areas that may experience low vaccination and high hospitalization).
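The modeling setup described in the METHODS and RESULTS can be illustrated with a brief, hypothetical sketch: training a balanced random forest on an imbalanced binary outcome and reporting AUROC, specificity, and feature importances. It assumes the scikit-learn and imbalanced-learn packages, and the synthetic features merely stand in for the study's judgment and demographic variables; this is not the authors' pipeline.

```python
# Balanced random forest on imbalanced classes (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix
from imblearn.ensemble import BalancedRandomForestClassifier

# Imbalanced synthetic data standing in for judgment + demographic features.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           weights=[0.25, 0.75], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# The balanced random forest undersamples the majority class in each bootstrap,
# which helps recover specificity on the minority (here, "unvaccinated") class.
clf = BalancedRandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
preds = clf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
print("AUROC:", round(roc_auc_score(y_test, probs), 3))
print("Specificity:", round(tn / (tn + fp), 3))

# Feature importances show how much predictive signal each feature contributes.
print("Top features by importance:", np.argsort(clf.feature_importances_)[::-1][:5])
```

A plain random forest or logistic regression could be swapped in for comparison, which is essentially the contrast the abstract reports.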


Subject(s)
COVID-19 Vaccines , COVID-19 , Adult , Humans , Judgment , Cross-Sectional Studies , COVID-19/epidemiology , COVID-19/prevention & control , Vaccination , Cognitive Science , Ethnicity
11.
Forensic Sci Rev ; 36(1): 41-54, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38297426

ABSTRACT

Advocates and researchers have made many recommendations for forensic science improvement in the United States. These proposals are often motivated by wrongful convictions related to false or misleading forensic evidence. In many cases, the connection between the proposals and the actual experience of wrongful convictions has not been well defined. Further, recommendations may not have been realizable given the structure of the criminal justice system in the United States and the practical realities of forensic science laboratories. Finally, limited attempts have been made to assess recommendations over time to determine the progress of forensic science improvement and elucidate continuing gaps. Reports from the Department of Justice, the National Academy of Sciences, and the President's Council of Advisors on Science and Technology are assessed to determine the extent to which their recommendations have been implemented, whether the recommendations align with the actual experience of wrongful convictions, and how the American forensic science community has implemented forensic science improvement. The most successful proposals reflect a broad movement toward quality assurance, improved standards, and organizational improvement in the forensic sciences. Less successful proposals are associated with calls for large federal investments, difficulties in community-wide implementation, or uncertain linkage to foundations in science and practice. Significant progress has been made in the standardization of reporting and testimony, assessment of the foundational reliability of the disciplines, and DNA mixture interpretation. Significant gaps remain to improve medicolegal death investigation, governance, and the implementation of standards. Improved allocation and use of resources will be required to meet continuing challenges in capacity building, training, and proficiency testing, although past experience indicates that both federal and non-federal funding will be required to address these issues. Continued improvement is needed to address the issues associated with wrongful convictions, although forensic science leaders have demonstrated the ability to prioritize improvement initiatives.


Subject(s)
Forensic Sciences , Law Enforcement , Humans , United States , Reproducibility of Results , Forensic Sciences/education , DNA , Uncertainty
12.
Trends Cogn Sci ; 28(1): 56-71, 2024 01.
Article in English | MEDLINE | ID: mdl-37798182

ABSTRACT

Research on human navigation by psychologists and neuroscientists has drawn mainly on a limited range of environments and on participants living in Western countries. By contrast, numerous anthropological accounts illustrate the diverse ways in which cultures adapt to their surrounding environments in order to navigate. Here, we provide an overview of these studies and relate them to cognitive science research. The cues used in traditional navigation are far more diverse and multimodal than those in laboratory navigation experiments. Traditional navigation typically involves an integrated system of methods that draws on a detailed understanding of environmental cues and specific tools, and it forms part of a broader cultural system. We highlight recent methodological developments for measuring navigation skill and modelling behaviour that will aid future research into how culture and environment shape human navigation.


Subject(s)
Cues , Tundra , Humans , Oceans and Seas
13.
Philos Trans R Soc Lond B Biol Sci ; 379(1895): 20220410, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38104599

ABSTRACT

In the last few years, a remarkable convergence of interests and results has emerged between scholars interested in the arts and aesthetics from a variety of perspectives and cognitive scientists studying the mind and brain within the predictive processing (PP) framework. This convergence has so far proven fruitful for both sides: while PP is increasingly adopted as a framework for understanding aesthetic phenomena, the arts and aesthetics, examined under the lens of PP, are starting to be seen as important windows into our mental functioning. The result is a vast and fast-growing research programme that promises to deliver important insights into our aesthetic encounters as well as a wide range of psychological phenomena of general interest. Here, we present this developing research programme, describing its grounds and highlighting its prospects. We start by clarifying how the study of the arts and aesthetics encounters the PP picture of mental functioning (§1). We then go on to outline the prospects of this encounter for the fields involved: philosophy and history of art (§2), psychology of aesthetics and neuroaesthetics (§3) and psychology and neuroscience more generally (§4). The upshot is an ambitious but well-defined framework within which aesthetics and cognitive science can partner up to illuminate crucial aspects of the human mind. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.


Subject(s)
Brain , Neurosciences , Humans , Esthetics , Philosophy , Cognitive Science
15.
Trends Neurosci Educ ; 33: 100209, 2023 12.
Article in English | MEDLINE | ID: mdl-38049287

ABSTRACT

PURPOSE: Cognitive science is essential to designing, implementing, and evaluating instruction that enhances student learning. However, the principles of cognitive science may not receive sufficient attention in education, as some educators hold learning beliefs that can be considered cognitive myths.

PROCEDURES: This review article analyzes examples of five learning myths (learning styles, pure discovery learning, digital natives, extrinsic motivation, multitasking) and five research-based learning strategies (dual coding, direct instruction, summarization, retrieval practice, spacing). It details the research evidence for each, explaining both the misconceptions about learning and the effective but underutilized or misunderstood strategies shown to benefit student learning.

CONCLUSION: Educational practices related to learning myths are widespread, with potentially detrimental effects on student learning. We recommend that colleges of education be restructured to place greater emphasis on cognitive science in educator preparation programs and to better promote research-based instructional strategies that meet students' learning needs.


Subject(s)
Teacher Training , Humans , Learning , Students , Curriculum , Cognitive Science
16.
Healthc Inform Res ; 29(4): 367-376, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37964458

ABSTRACT

OBJECTIVES: Mobile health applications that are designed without considering usability criteria can impose cognitive overload, leading users to reject them. To avoid this problem, the user interface of a mobile health application should be evaluated for the cognitive load it imposes; such an evaluation can guide improvements to the interface and help prevent overloading the user.

METHODS: In this study, we evaluated a mobile personal health records application using the cognitive task analysis method, specifically the goals, operators, methods, and selection rules (GOMS) approach, together with the related updated GOMS model and gesture-level model techniques. The GOMS method allowed us to determine the steps of each task and categorize them as physical or cognitive. We then estimated the completion times of these tasks using the updated GOMS model and the gesture-level model.

RESULTS: The 10 identified tasks were decomposed into 398 steps consisting of mental and physical operators. The time to complete all the tasks was 5.70 minutes according to the updated GOMS model and 5.45 minutes according to the gesture-level model. Mental operators accounted for 73% of the total task completion time under the updated GOMS model and 76% under the gesture-level model. The inter-rater reliability analysis yielded an average of 0.80, indicating good reliability for the evaluation method.

CONCLUSIONS: The majority of the task execution time was taken up by mental operators, suggesting that the cognitive load on users is high. To improve the application, the number of mental operators should be reduced.
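As a rough illustration of the estimation logic described above, the following sketch decomposes a task into mental and physical operators, sums assumed per-operator times, and reports the mental share of the total. The operator set and timings are generic keystroke-level-model-style placeholders, not the updated GOMS or gesture-level parameters used in the study.

```python
# GOMS/KLM-style completion-time estimate (toy example with assumed timings).
from dataclasses import dataclass

OPERATOR_TIMES = {   # assumed per-operator times in seconds (illustrative only)
    "M": 1.35,       # mental preparation
    "T": 0.40,       # tap / touch gesture
    "S": 0.70,       # swipe gesture
    "H": 0.40,       # homing (reposition hand or finger)
}
PHYSICAL = {"T", "S", "H"}

@dataclass
class Task:
    name: str
    steps: list  # sequence of operator codes, e.g. ["M", "T", "M", "S"]

    def completion_time(self) -> float:
        return sum(OPERATOR_TIMES[op] for op in self.steps)

    def mental_share(self) -> float:
        mental = sum(OPERATOR_TIMES[op] for op in self.steps if op not in PHYSICAL)
        return mental / self.completion_time()

# Hypothetical task: open the app's medication list and check one entry.
task = Task("view medication entry", ["M", "T", "M", "S", "M", "T"])
print(f"{task.name}: {task.completion_time():.2f} s, "
      f"mental operators = {task.mental_share():.0%} of total time")
```

Summing such estimates over all tasks, and comparing the mental versus physical shares, is the kind of calculation behind the 5.70-minute and 73% figures reported above.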

17.
PNAS Nexus ; 2(11): pgad337, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37954157

ABSTRACT

Human vision, thought, and planning involve parsing and representing objects and scenes using structured representations based on part-whole hierarchies. Computer vision and machine learning researchers have recently sought to emulate this capability using neural networks, but a generative model formulation has been lacking. Generative models that leverage compositionality, recursion, and part-whole hierarchies are thought to underlie human concept learning and the ability to construct and represent flexible mental concepts. We introduce Recursive Neural Programs (RNPs), a neural generative model that addresses the part-whole hierarchy learning problem by modeling images as hierarchical trees of probabilistic sensory-motor programs. These programs recursively reuse learned sensory-motor primitives to model an image within different spatial reference frames, enabling hierarchical composition of objects from parts and implementing a grammar for images. We show that RNPs can learn part-whole hierarchies for a variety of image datasets, allowing rich compositionality and intuitive parts-based explanations of objects. Our model also suggests a cognitive framework for understanding how human brains can potentially learn and represent concepts in terms of recursively defined primitives and their relations with each other.
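As a purely structural illustration of part-whole hierarchies with nested spatial reference frames, the sketch below represents an object as a tree whose nodes carry affine transforms relative to their parents and composes those transforms to place each part globally. It is a toy analogy under assumed names and values, not an implementation of Recursive Neural Programs.

```python
# Part-whole tree with nested 2D reference frames (structural toy example).
import numpy as np

class PartNode:
    def __init__(self, name, transform, children=None):
        self.name = name
        self.transform = transform          # 3x3 homogeneous 2D affine transform
        self.children = children or []

    def world_positions(self, parent_tf=np.eye(3)):
        """Place each node in the global frame by composing transforms down the tree."""
        tf = parent_tf @ self.transform
        positions = {self.name: tf[:2, 2]}
        for child in self.children:
            positions.update(child.world_positions(tf))
        return positions

def translate(dx, dy):
    tf = np.eye(3)
    tf[:2, 2] = [dx, dy]
    return tf

# A "face" whose parts are defined in the face's own frame; moving the whole
# moves every part with it, which is the essence of a part-whole hierarchy.
face = PartNode("face", translate(5.0, 5.0), [
    PartNode("left_eye", translate(-1.0, 1.0)),
    PartNode("right_eye", translate(1.0, 1.0)),
    PartNode("mouth", translate(0.0, -1.0)),
])
print(face.world_positions())
```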

18.
Trends Cogn Sci ; 27(12): 1165-1179, 2023 12.
Article in English | MEDLINE | ID: mdl-37805385

ABSTRACT

Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition such as theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from three methodologies (behavioral, computational, and neural) that converges to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.


Subject(s)
Social Interaction , Social Perception , Humans , Brain , Cognition
19.
Front Psychol ; 14: 1205891, 2023.
Article in English | MEDLINE | ID: mdl-37809306

ABSTRACT

Fictionality and fictional experiences are ubiquitous in people's everyday lives in the form of movies, novels, video games, pretense and role playing, and digital technology use. Despite this ubiquity, the field of cognitive science has traditionally been dominated by a focus on the real world. Given the limited understanding from previous research of how fictional information is processed and how reality is cognitively distinguished from fiction, we argue for a comprehensive and systematic account that embeds related phenomena, such as narrative comprehension and imagination, into general theories of cognition. This matters because incorporating the cognitive processing of fictional events into memory theory reshapes the conceptual map of human memory. In this paper, we highlight future challenges for the cognitive study of fictionality at the conceptual, neurological, and computational levels. Taking on these challenges requires an interdisciplinary approach spanning fields such as developmental psychology, philosophy, and the study of narrative comprehension. Our aim is to build on such interdisciplinarity and to draw conclusions about how new theoretical frameworks of fiction cognition can aid understanding of human behavior across many aspects of people's daily lives, media consumption habits, and digital encounters. Our account also has the potential to inform technological innovations related to training intelligent digital systems to distinguish fact from fiction in source material.

20.
Entropy (Basel) ; 25(9), 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37761546

ABSTRACT

Resilience is a basic trait of cognitive systems and is fundamentally connected to their autopoietic organization. It plays a vital role in maintaining the identity of cognitive systems in the face of external threats and perturbations. However, when examining resilience in the context of autopoiesis, an overlooked issue arises: the autopoietic theory formulated by Maturana and Varela (1980) renders traditional Shannon information obsolete, implying that information should not, in a general sense, be ascribed a role in cognitive systems. This paper examines the current situation and suggests a possible way forward by exploring an affordance-based view of information, derived from radical cognitive science, which is exempt from Maturana and Varela's critique. Specifically, it argues that the impact of social influence on affordance use is crucial when considering how resilience can manifest in informational relations pertaining to the human cognitive ecology.
