COVID-Transformer: Interpretable COVID-19 Detection Using Vision Transformer for Healthcare.
Shome, Debaditya; Kar, T; Mohanty, Sachi Nandan; Tiwari, Prayag; Muhammad, Khan; AlTameem, Abdullah; Zhang, Yazhou; Saudagar, Abdul Khader Jilani.
  • Shome D; School of Electronics Engineering, KIIT Deemed to be University, Odisha 751024, India.
  • Kar T; School of Electronics Engineering, KIIT Deemed to be University, Odisha 751024, India.
  • Mohanty SN; Department of Computer Science & Engineering, Vardhaman College of Engineering (Autonomous), Hyderabad 501218, India.
  • Tiwari P; Department of Computer Science, Aalto University, 02150 Espoo, Finland.
  • Muhammad K; Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Korea.
  • AlTameem A; Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia.
  • Zhang Y; Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou 450001, China.
  • Saudagar AKJ; Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia.
Int J Environ Res Public Health; 18(21), 2021 Oct 21.
Article in English | MEDLINE | ID: covidwho-1480751
ABSTRACT
During the recent pandemic, accurate and rapid testing of patients was critical to diagnosing COVID-19 and controlling its spread. Because of the sudden surge in cases, most countries faced test shortages and low testing rates. Chest X-rays have been shown in the literature to be a potential means of screening COVID-19 patients, but manually reviewing X-rays is time-consuming and error-prone. Given these limitations and recent advances in data science, we propose a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray images. Because large datasets are scarce, we aggregated three open-source chest X-ray datasets into a 30,000-image collection, which to our knowledge is the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC of 99% in the binary classification task, and distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC of 98% in the multi-class classification task. As baselines, we fine-tuned several widely used models from the literature on our dataset, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121; our transformer model outperformed all of them on every metric. In addition, a Grad-CAM-based visualization makes the approach interpretable by radiologists and can be used to monitor disease progression in the affected lungs, assisting healthcare providers.
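To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of fine-tuning an ImageNet-pretrained Vision Transformer for three-class chest X-ray classification (COVID-19 / normal / pneumonia) in PyTorch with the timm library. The model variant (vit_base_patch16_224), the ImageFolder data layout, and all hyperparameters are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch: fine-tune a pretrained Vision Transformer for 3-class
# chest X-ray classification (covid / normal / pneumonia).
# NOTE: model choice, data layout, and hyperparameters are assumptions,
# not the authors' released code.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import timm

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed ImageFolder layout: data/train/{covid,normal,pneumonia}/*.png
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

# ViT backbone pretrained on ImageNet; classification head replaced for 3 classes.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3)
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=1e-4)

model.train()
for epoch in range(10):
    running_loss = 0.0
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_ds):.4f}")
```

Evaluation against convolutional baselines (EfficientNetB0, ResNet50, etc.) follows the same loop with the backbone swapped, and a Grad-CAM-style attention visualization can be layered on top of the trained model for interpretability.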
Full text: Available
Collection: International databases
Database: MEDLINE
Main subject: Deep Learning / COVID-19
Type of study: Diagnostic study / Experimental Studies / Observational study / Prognostic study
Limits: Humans
Language: English
Year: 2021
Document Type: Article
Article ID: Ijerph182111086