The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.
Goel, Kanika; Sindhgatta, Renuka; Kalra, Sumit; Goel, Rohan; Mutreja, Preeti.
  • Goel K; School of Information Systems, Queensland University of Technology, Australia. Electronic address: k.goel@qut.edu.au.
  • Sindhgatta R; IBM Research AI, Bangalore, India. Electronic address: renuka.sr@ibm.com.
  • Kalra S; Department of Computer Science, Indian Institute of Technology, Jodhpur, India. Electronic address: sumitk@iitj.ac.in.
  • Goel R; COVID-19 Centre at Guru Teg Bahadur (GTB) Hospital, Delhi, India. Electronic address: rohanpgoel@gmail.com.
  • Mutreja P; All India Institute of Medical Sciences (AIIMS), Jodhpur, India. Electronic address: dr.preeti.mutreja@gmail.com.
Comput Biol Med; 146: 105587, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1821197
ABSTRACT
Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have designed deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computed tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning behind a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust in complex healthcare decision tasks is not well understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 from CT images and examines the influence of explanation quality on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare the machine learning explanations with this ground truth. Our evaluation shows that the explanations produced by different algorithms were often correct when compared to the radiologist-annotated ground truth (high precision), but a substantial number of annotated regions were missed (markedly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust in the diagnosis of COVID-19. Our findings show that while clinicians' trust in the automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms whose explanations are not close to human judgement. Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system.
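The abstract reports precision and recall of machine-generated explanations against radiologist annotations. The sketch below illustrates one way such a pixel-level comparison could be computed; the abstract does not specify the paper's exact procedure, so the function name, the heatmap threshold, and the synthetic arrays are illustrative assumptions rather than the authors' method.

import numpy as np

def explanation_precision_recall(heatmap, ground_truth_mask, threshold=0.5):
    """Compare an explanation heatmap with a radiologist-annotated binary mask.

    heatmap: 2D float array in [0, 1] (e.g., a normalized saliency map).
    ground_truth_mask: 2D binary array marking radiologist-annotated regions.
    threshold: assumed cut-off used to binarize the heatmap.
    """
    predicted = heatmap >= threshold                 # pixels the explanation highlights
    actual = ground_truth_mask.astype(bool)          # pixels the radiologist annotated

    tp = np.logical_and(predicted, actual).sum()     # highlighted and annotated
    fp = np.logical_and(predicted, ~actual).sum()    # highlighted but not annotated
    fn = np.logical_and(~predicted, actual).sum()    # annotated but missed

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy usage with synthetic data (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    heatmap = rng.random((224, 224))
    mask = (rng.random((224, 224)) > 0.9).astype(np.uint8)
    p, r = explanation_precision_recall(heatmap, mask)
    print(f"precision={p:.3f} recall={r:.3f}")

Under this kind of comparison, high precision with low recall corresponds to the abstract's finding: the highlighted regions tend to fall inside the annotated areas, but much of what the radiologist marked is never highlighted.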
Full text: Available | Collection: International databases | Database: MEDLINE | Main subject: COVID-19 | Type of study: Experimental Studies | Limits: Humans | Language: English | Journal: Comput Biol Med | Year: 2022 | Document Type: Article
