Evaluating Explainable Artificial Intelligence for X-ray Image Analysis
Applied Sciences; 12(9):4459, 2022.
Article in English | ProQuest Central | ID: covidwho-1837579
ABSTRACT
The lack of justification of the results obtained by artificial intelligence (AI) algorithms has limited their usage in the medical context. To increase the explainability of existing AI methods, explainable artificial intelligence (XAI) has been proposed. We performed a systematic literature review, based on the guidelines proposed by Kitchenham and Charters, of studies that applied XAI methods to X-ray-image-related tasks. We identified 141 studies relevant to the objective of this research from five different databases. For each of these studies, we assessed the quality and then analyzed them according to a specific set of research questions. We determined two primary uses for X-ray images: the detection of bone diseases and the detection of lung diseases. We found that most of the AI methods used were based on convolutional neural networks (CNNs). We identified the different techniques used to increase the explainability of the models and grouped them by the kind of explainability they provide. We found that most of the articles did not evaluate the quality of the explanations obtained, which undermines confidence in those explanations. Finally, we identified the current challenges and future directions of this subject and provide guidelines for practitioners and researchers to address the limitations and weaknesses that we detected.
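To make the kind of technique surveyed in the review concrete: one widely used post-hoc explainability method for CNN-based X-ray classifiers is Grad-CAM, which produces a heatmap over the image showing which regions most influenced a prediction. The sketch below is purely illustrative and is not taken from the reviewed studies; the pretrained ResNet-18, the choice of layer4 as the hooked layer, and the dummy input standing in for a chest X-ray are all assumptions.

```python
# Minimal Grad-CAM sketch for a CNN classifier (illustrative only;
# ResNet-18 and the hooked layer are assumptions, not a reviewed method).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the last convolutional block.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradients flowing back into those feature maps.
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return a normalized heatmap of regions that drove the prediction."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Global-average-pool the gradients to get per-channel weights.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    # Upsample to the input resolution so the map can overlay the image.
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

# Dummy 224x224 RGB tensor standing in for a preprocessed X-ray image.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

A practitioner would typically overlay `heatmap` on the original radiograph; as the abstract notes, however, such visual explanations are rarely evaluated for quality in the reviewed literature.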
Collection: Databases of international organizations | Database: ProQuest Central | Type of study: Experimental Studies | Language: English | Journal: Applied Sciences | Year: 2022 | Document Type: Article
