Cohesive Multi-Modality Feature Learning and Fusion for COVID-19 Patient Severity Prediction.
Zhou, Jinzhao; Zhang, Xingming; Zhu, Ziwei; Lan, Xiangyuan; Fu, Lunkai; Wang, Haoxiang; Wen, Hanchun.
  • Zhou J; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
  • Zhang X; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
  • Zhu Z; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
  • Lan X; Department of Computer Science, Hong Kong Baptist University, Hong Kong.
  • Fu L; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
  • Wang H; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China.
  • Wen H; Department of Critical Care Medicine, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China.
IEEE Trans Circuits Syst Video Technol ; 32(5): 2535-2549, 2022 May.
Article in English | MEDLINE | ID: covidwho-1831867
ABSTRACT
The outbreak of coronavirus disease (COVID-19) in 2020 has been a nightmare for citizens, hospitals, healthcare practitioners, and the economy. The overwhelming number of confirmed and suspected cases posed an unprecedented challenge to hospitals' management capacity and medical resource distribution. To reduce the possibility of cross-infection and to attend to each patient according to severity level, expert diagnosis and sophisticated medical examinations are often required but hard to obtain during a pandemic. To facilitate the assessment of a patient's severity, this paper proposes a multi-modality feature learning and fusion model for end-to-end COVID-19 patient severity prediction using blood-test-supported electronic medical records (EMR) and chest computed tomography (CT) scan images. To evaluate a patient's severity from the co-occurrence of salient clinical features, the High-order Factorization Network (HoFN) is proposed to learn the impact of a set of clinical features without tedious feature engineering. In parallel, an attention-based deep convolutional neural network (CNN) with pre-trained parameters is used to process the lung CT images. Finally, to achieve cohesion of the cross-modality representation, we design a loss function that shifts the deep features of both modalities into a shared feature space, which improves the model's performance and robustness when one modality is absent. Experimental results demonstrate that the proposed multi-modality feature learning and fusion model achieves high performance in a realistic clinical scenario.
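The sketch below illustrates the general idea of fusing an EMR branch and a CT branch with an auxiliary alignment term, as described in the abstract. It is a minimal PyTorch assumption, not the authors' implementation: the module names (EMRBranch, CTBranch, SeverityFusionModel), the simple MLP/CNN stand-ins for HoFN and the attention-based CNN, the cohesion_loss form, and the 0.1 weighting are all illustrative choices.

```python
# Hypothetical sketch of two-branch severity prediction with a cohesion loss.
# All names and architectural details here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EMRBranch(nn.Module):
    """Stand-in for the HoFN branch: maps blood-test features to an embedding."""
    def __init__(self, num_features: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return self.net(x)


class CTBranch(nn.Module):
    """Stand-in for the attention-based CNN branch: maps a CT slice to an embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):
        return self.backbone(x)


def cohesion_loss(emr_feat, ct_feat):
    """Pull the two modality embeddings toward a shared feature space."""
    return F.mse_loss(F.normalize(emr_feat, dim=1), F.normalize(ct_feat, dim=1))


class SeverityFusionModel(nn.Module):
    def __init__(self, num_emr_features: int, num_classes: int = 2, embed_dim: int = 128):
        super().__init__()
        self.emr = EMRBranch(num_emr_features, embed_dim)
        self.ct = CTBranch(embed_dim)
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, emr_x, ct_x):
        e, c = self.emr(emr_x), self.ct(ct_x)
        logits = self.classifier(torch.cat([e, c], dim=1))
        return logits, e, c


# Example training step: severity cross-entropy plus the alignment term.
model = SeverityFusionModel(num_emr_features=20)
emr_batch = torch.randn(4, 20)          # blood-test features
ct_batch = torch.randn(4, 1, 64, 64)    # single-channel CT slices
labels = torch.randint(0, 2, (4,))      # severity labels

logits, e, c = model(emr_batch, ct_batch)
loss = F.cross_entropy(logits, labels) + 0.1 * cohesion_loss(e, c)
loss.backward()
```

Because the alignment term drives both embeddings toward the same region of feature space, a classifier trained on the fused representation degrades more gracefully if one modality is missing at inference time, which is the robustness property the abstract highlights.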
Full text: Available Collection: International databases Database: MEDLINE Type of study: Experimental Studies / Prognostic study / Randomized controlled trials Language: English Journal: IEEE Trans Circuits Syst Video Technol Year: 2022 Document Type: Article
