Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform.
Front Public Health; 10: 886958, 2022. Article in English | MEDLINE | ID: covidwho-1963620
ABSTRACT
Automated severity assessment of coronavirus disease 2019 (COVID-19) patients can help allocate medical resources rationally and improve patients' survival rates. Existing methods perform severity assessment mainly on a single modality and a single view, which tends to exclude potentially useful interactive information. To tackle this problem, we propose a multi-view, multi-modal deep learning model that automatically assesses the severity of COVID-19 patients. The proposed model receives multi-view ultrasound images together with patients' biomedical indices and generates comprehensive features for the assessment task. We also propose a reciprocal attention module to capture the underlying interactions between multi-view ultrasound data, and a biomedical transform module that integrates biomedical data with ultrasound data to produce multi-modal features. Trained and tested on compound datasets, the model achieves 92.75% accuracy and 80.95% recall, the best performance among the state-of-the-art methods compared. Further ablation experiments and discussion consistently indicate the feasibility and advantages of the proposed model.
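The abstract does not include code, but the two proposed modules can be illustrated schematically. The sketch below is a hypothetical NumPy illustration, not the authors' implementation: `reciprocal_attention` shows the general idea of two ultrasound views attending to each other through a shared affinity matrix, and `fuse_with_biomedical` shows one plausible way to project biomedical indices into the ultrasound feature space before fusion. All function names, shapes, and the projection choice are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reciprocal_attention(view_a, view_b):
    """Each view attends to the other: features of one ultrasound view
    are enriched by features of the other, weighted by their affinity.
    (Schematic stand-in for the paper's reciprocal attention module.)"""
    affinity = view_a @ view_b.T                            # (Na, Nb) cross-view affinity
    a_enriched = view_a + softmax(affinity, -1) @ view_b    # view A attends to view B
    b_enriched = view_b + softmax(affinity.T, -1) @ view_a  # view B attends to view A
    return a_enriched, b_enriched

def fuse_with_biomedical(us_features, bio_indices, w_bio):
    """Project biomedical indices into the ultrasound feature space and
    concatenate with pooled ultrasound features to form a multi-modal
    vector. (Schematic stand-in for the biomedical transform module.)"""
    bio_proj = np.tanh(bio_indices @ w_bio)                 # (d,) projected indices
    return np.concatenate([us_features.mean(axis=0), bio_proj])

# toy example: two views with 4 and 5 feature tokens of dimension 8,
# plus 3 biomedical indices projected into the same 8-dim space
rng = np.random.default_rng(0)
va, vb = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
ea, eb = reciprocal_attention(va, vb)
fused = fuse_with_biomedical(np.vstack([ea, eb]),
                             rng.normal(size=3),
                             rng.normal(size=(3, 8)))
print(ea.shape, eb.shape, fused.shape)  # (4, 8) (5, 8) (16,)
```

The resulting `fused` vector would then feed a small classifier head for the severity label; in the actual model each view would first pass through a convolutional backbone rather than being raw feature matrices.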
Full text: Available
Collection: International databases
Database: MEDLINE
Main subject: COVID-19
Type of study: Prognostic study
Limits: Humans
Language: English
Journal: Front Public Health
Year: 2022
Document Type: Article
DOI: fpubh.2022.886958