Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform.
Li, Yanhan; Zhao, Hongyun; Gan, Tian; Liu, Yang; Zou, Lian; Xu, Ting; Chen, Xuan; Fan, Cien; Wu, Meng.
  • Li Y; Electronic Information School, Wuhan University, Wuhan, China.
  • Zhao H; Department of Gastroenterology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China.
  • Gan T; Chongqing Key Laboratory of Ultrasound Molecular Imaging, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China.
  • Liu Y; Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China.
  • Zou L; School of Economics and Management, Wuhan University, Wuhan, China.
  • Xu T; Electronic Information School, Wuhan University, Wuhan, China.
  • Chen X; Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China.
  • Fan C; Beijing Genomics Institute (BGI) Research, Shenzhen, China.
  • Wu M; Electronic Information School, Wuhan University, Wuhan, China.
Front Public Health; 10: 886958, 2022.
Article in English | MEDLINE | ID: covidwho-1963620
ABSTRACT
Automated severity assessment of coronavirus disease 2019 (COVID-19) patients can help rationally allocate medical resources and improve patients' survival rates. Existing methods conduct severity assessment mainly on a single modality and a single view, which excludes potentially informative interactions between views and modalities. To tackle this problem, we propose a multi-view multi-modal deep learning model that automatically assesses the severity of COVID-19 patients. The proposed model receives multi-view ultrasound images and biomedical indices of patients and generates comprehensive features for the assessment task. We further propose a reciprocal attention module to capture the underlying interactions between multi-view ultrasound data, and a biomedical transform module to integrate biomedical data with ultrasound data into multi-modal features. Trained and tested on compound datasets, the proposed model achieves 92.75% accuracy and 80.95% recall, the best performance among the compared state-of-the-art methods. Further ablation experiments and discussions consistently demonstrate the feasibility and advancement of the proposed model.
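The abstract describes the architecture only at a high level. The sketch below illustrates one plausible wiring of a reciprocal attention module (two views cross-attending to each other) and a biomedical transform module (projecting tabular indices into the image feature space); all module names, dimensions, and the fusion strategy are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): reciprocal attention between two
# ultrasound views plus a "biomedical transform" that projects clinical indices
# into the image feature space. Shapes and fusion strategy are assumptions.
import torch
import torch.nn as nn


class ReciprocalAttention(nn.Module):
    """Cross-attention applied in both directions between two view features."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # View A queries view B and vice versa, so each view is enriched
        # with the interactive information carried by the other view.
        a_enriched, _ = self.attn_ab(feat_a, feat_b, feat_b)
        b_enriched, _ = self.attn_ba(feat_b, feat_a, feat_a)
        return feat_a + a_enriched, feat_b + b_enriched


class BiomedicalTransform(nn.Module):
    """Maps tabular biomedical indices into the shared feature dimension."""

    def __init__(self, num_indices: int, dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_indices, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, indices: torch.Tensor) -> torch.Tensor:
        return self.proj(indices)


# Toy usage: two views of 49 patch tokens each, 12 biomedical indices.
views_a = torch.randn(8, 49, 256)
views_b = torch.randn(8, 49, 256)
bio = torch.randn(8, 12)

ra = ReciprocalAttention(dim=256)
bt = BiomedicalTransform(num_indices=12, dim=256)

a, b = ra(views_a, views_b)
# Pool each view, concatenate with transformed biomedical features, classify.
fused = torch.cat([a.mean(dim=1), b.mean(dim=1), bt(bio)], dim=-1)  # (8, 768)
logits = nn.Linear(768, 4)(fused)  # 4 severity grades (assumed number)
```

The residual connections around each cross-attention output are a common design choice that preserves each view's own features while adding the interactive signal; the paper may fuse views and modalities differently.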
Full text: Available Collection: International databases Database: MEDLINE Main subject: COVID-19 Type of study: Prognostic study Limits: Humans Language: English Journal: Front Public Health Year: 2022 Document Type: Article DOI: 10.3389/fpubh.2022.886958