Visual Transformer and Deep CNN Prediction of High-risk COVID-19 Infected Patients using Fusion of CT Images and Clinical Data
Hamid Abbasi; Sara Saberi Moghadam Tehrani; Maral Zarvani; Pariya Amiri; Reza Azmi; Zahra Ghods; Narges Nourozi; Masoomeh Raoufi; Seyed Amir Ahmad Safavi-Naini; Amirali Soheili; Sara Abolghasemi; Mohammad Javad Gharib.
Affiliation
  • Hamid Abbasi; University of Auckland
  • Sara Saberi Moghadam Tehrani; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Maral Zarvani; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Pariya Amiri; Pooyandegan Rah Saadat Company, Tehran, Iran.
  • Reza Azmi; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Zahra Ghods; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Narges Nourozi; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Masoomeh Raoufi; Department of Radiology, School of Medicine, Imam Hossein Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Seyed Amir Ahmad Safavi-Naini; Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Amirali Soheili; Medical Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Sara Abolghasemi; Infectious Diseases and Tropical Medicine Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Mohammad Javad Gharib; Auckland City Hospital, Auckland, 1010, New Zealand.
Preprint in English | medRxiv | ID: ppmedrxiv-22278084
ABSTRACT
Despite globally declining hospitalization rates and the much lower risk of Covid-19 mortality, accurate diagnosis of the infection stage and prediction of outcomes remain of clinical interest. Current advanced technology can help automate this process and identify those at higher risk of developing severe illness. Deep-learning schemes, in particular Visual Transformers and Convolutional Neural Networks (CNNs), have been shown to be powerful tools for predicting clinical outcomes when fed with either CT scan images or patients' clinical data. This paper demonstrates how a novel 3D data fusion approach that concatenates CT scan images with patients' clinical data can markedly improve the performance of Visual Transformer and CNN models in predicting Covid-19 infection outcomes. We present comprehensive research on the efficiency of Video Swin Transformers and several CNN models fed with fusion datasets or CT scans only, versus a set of conventional classifiers fed with patients' clinical data only. A relatively large clinical dataset from 380 Covid-19-diagnosed patients was used to train and test the models. Results show that 3D Video Swin Transformers fed with fusion datasets of 64 sectional CT scans plus 67 (or 30 selected) clinical labels outperformed all other approaches for predicting outcomes in Covid-19-infected patients (TPR=0.95, FPR=0.40, F0.5 score=0.82, AUC=0.77, Kappa=0.6). The results indicate that the severity of outcome can be predicted from patients' CT images and clinical data collected at the time of hospital admission.
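The abstract does not specify exactly how the clinical labels are combined with the 64-section CT volumes, so the following is only a minimal, hypothetical PyTorch sketch of one plausible fusion scheme: pooled volumetric features concatenated with the clinical-label vector before a classification head, with a small 3D CNN standing in for the Video Swin Transformer backbone. All module names, layer sizes, and the fusion point (FusionClassifier, num_clinical, etc.) are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Hypothetical CT-volume + clinical-data fusion classifier (illustrative only)."""
    def __init__(self, num_clinical=67, num_classes=2):
        super().__init__()
        # Placeholder volumetric encoder for a 1 x 64 x H x W stack of CT sections;
        # the study itself reports a 3D Video Swin Transformer backbone in this role.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # Classification head fed with image features concatenated with the
        # clinical-label vector (67, or 30 selected, clinical features per the abstract).
        self.head = nn.Sequential(
            nn.Linear(32 + num_clinical, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, ct_volume, clinical):
        feats = self.encoder(ct_volume)              # (batch, 32)
        fused = torch.cat([feats, clinical], dim=1)  # (batch, 32 + num_clinical)
        return self.head(fused)

# Example: 2 patients, 64 CT sections of 128 x 128 pixels, 67 clinical labels each.
model = FusionClassifier()
logits = model(torch.randn(2, 1, 64, 128, 128), torch.randn(2, 67))
print(logits.shape)  # torch.Size([2, 2]) -> per-class scores for outcome prediction

This late-fusion layout keeps the clinical vector out of the image backbone; an alternative reading of the abstract's "concatenating CT scan images with patients' clinical data" would tile the clinical values into extra input channels or slices before the 3D backbone.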
License
CC BY-NC-ND
Full text: Available Collection: Preprints Database: medRxiv Type of study: Prognostic study Language: English Year: 2022 Document type: Preprint