The application of eXplainable artificial intelligence in studying cognition: A scoping review.
Mahmood, Shakran; Teo, Colin; Sim, Jeremy; Zhang, Wei; Muyun, Jiang; Bhuvana, R; Teo, Kejia; Yeo, Tseng Tsai; Lu, Jia; Gulyas, Balazs; Guan, Cuntai.
Affiliation
  • Mahmood S; Lee Kong Chian School of Medicine Nanyang Technological University Singapore Singapore.
  • Teo C; Lee Kong Chian School of Medicine Nanyang Technological University Singapore Singapore.
  • Sim J; Centre for Neuroimaging Research Nanyang Technological University Singapore Singapore.
  • Zhang W; Division of Neurosurgery, Department of Surgery National University Hospital Singapore Singapore.
  • Muyun J; Centre for Neuroimaging Research Nanyang Technological University Singapore Singapore.
  • Bhuvana R; Lee Kong Chian School of Medicine Nanyang Technological University Singapore Singapore.
  • Teo K; Centre for Neuroimaging Research Nanyang Technological University Singapore Singapore.
  • Yeo TT; Centre for Neuroimaging Research Nanyang Technological University Singapore Singapore.
  • Lu J; School of Computer Science and Engineering Nanyang Technological University Singapore Singapore.
  • Gulyas B; Lee Kong Chian School of Medicine Nanyang Technological University Singapore Singapore.
  • Guan C; Centre for Neuroimaging Research Nanyang Technological University Singapore Singapore.
Ibrain; 10(3): 245-265, 2024.
Article in English | MEDLINE | ID: mdl-39346792
ABSTRACT
The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze the XAI methods used to study the mechanisms and features of cognitive function and dysfunction. In this study, the collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute (JBI) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The results showed that the majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods employed were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in causality and oversimplification, particularly emphasizing the need for reproducibility.
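
Note: to make the taxonomy used in the abstract concrete, the following is a minimal illustrative sketch, not drawn from any of the reviewed studies. It contrasts an intrinsically interpretable model (coefficients read directly, global scope) with an attribution-based post hoc method (permutation importance applied to a black-box model). The synthetic features are hypothetical stand-ins for cognitive or neuroimaging variables.

    # Illustrative sketch: intrinsic vs. attribution-based post hoc XAI (Python, scikit-learn)
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for cognitive/neuroimaging features (hypothetical data).
    X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)

    # Intrinsic XAI: a linear model whose coefficients are directly interpretable (global explanation).
    intrinsic = LogisticRegression(max_iter=1000).fit(X, y)
    print("Intrinsic (coefficients):", np.round(intrinsic.coef_[0], 3))

    # Attribution-based post hoc XAI: permutation importance estimated for a black-box model.
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    attrib = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    print("Post hoc (permutation importance):", np.round(attrib.importances_mean, 3))

Both outputs rank features by their contribution to the prediction; the difference the review highlights is whether the explanation is built into the model (intrinsic) or computed afterwards on an otherwise opaque model (post hoc).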
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Journal: Ibrain | Year of publication: 2024 | Document type: Article | Country of publication: United States