Multi-task contrastive learning for automatic CT and X-ray diagnosis of COVID-19.
Li, Jinpeng; Zhao, Gangming; Tao, Yaling; Zhai, Penghua; Chen, Hao; He, Huiguang; Cai, Ting.
  • Li J; HwaMei Hospital, University of Chinese Academy of Sciences, 41 Northwest Street, Haishu District, Ningbo, 315010, China.
  • Zhao G; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China.
  • Tao Y; University of Chinese Academy of Sciences, Beijing, China.
  • Zhai P; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China.
  • Chen H; The University of Hong Kong, Hong Kong.
  • He H; HwaMei Hospital, University of Chinese Academy of Sciences, 41 Northwest Street, Haishu District, Ningbo, 315010, China.
  • Cai T; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China.
Pattern Recognit ; 114: 107848, 2021 Jun.
Article in English | MEDLINE | ID: covidwho-1057194
ABSTRACT
Computed tomography (CT) and X-ray are effective modalities for diagnosing COVID-19. Although several studies have demonstrated the potential of deep learning for automatic COVID-19 diagnosis from CT and X-ray images, generalization to unseen samples needs to be improved. To tackle this problem, we present the contrastive multi-task convolutional neural network (CMT-CNN), which is composed of two tasks. The main task is to diagnose COVID-19, distinguishing it from other pneumonia and normal controls. The auxiliary task encourages local aggregation through a contrastive loss: first, each image is transformed by a series of augmentations (Poisson noise, rotation, etc.); then, the model is optimized to embed representations of the same image close together and representations of different images far apart in a latent space. In this way, CMT-CNN makes transformation-invariant predictions while the spread-out properties of the data are preserved. We demonstrate that this apparently simple auxiliary task provides powerful supervision that enhances generalization. We conduct experiments on a CT dataset (4,758 samples) and an X-ray dataset (5,821 samples) assembled from open datasets and data collected at our hospital. Experimental results demonstrate that contrastive learning (as a plug-in module) brings solid accuracy improvements for deep learning models on both CT (5.49%-6.45%) and X-ray (0.96%-2.42%) without requiring additional annotations. Our code is accessible online.
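The auxiliary objective described above can be illustrated with a minimal NumPy sketch of an NT-Xent-style contrastive loss over two augmented views of each image, combined with the main-task classification loss. The function names, the temperature value, and the multi-task weighting are illustrative assumptions for exposition, not the authors' released code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of embeddings (illustrative sketch).

    z1[i] and z2[i] are embeddings of two augmentations of the same image
    (the positive pair); every other embedding in the batch is a negative.
    Positives are pulled together, negatives pushed apart.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                   # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit-normalize
    sim = (z @ z.T) / temperature                          # cosine similarities
    np.fill_diagonal(sim, -np.inf)                         # exclude self-pairs
    # Row i's positive is the other view of the same image: i+N or i-N.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # Numerically stable log-sum-exp over each row's candidates.
    m = sim.max(axis=1, keepdims=True)
    logsumexp = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

def multi_task_loss(main_task_loss, z1, z2, weight=0.1):
    """Total objective: main classification loss plus the weighted
    auxiliary contrastive term (weight is a hypothetical hyperparameter)."""
    return main_task_loss + weight * nt_xent_loss(z1, z2)
```

Since the auxiliary term only needs augmented views of already-available images, it can be attached to an existing classifier as a plug-in loss, consistent with the abstract's claim that no additional annotations are required.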
Full text: Available Collection: International databases Database: MEDLINE Type of study: Prognostic study Language: English Journal: Pattern Recognit Year: 2021 Document Type: Article Affiliation country: J.patcog.2021.107848
