1.
J Formos Med Assoc; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38702216

ABSTRACT

The purpose of this study is to establish a deep learning automatic assistance diagnosis system for benign and malignant classification of mediastinal lesions in endobronchial ultrasound (EBUS) images. EBUS images take the form of video and contain multiple imaging modes; different modes and different frames reflect different characteristics of the lesions. Compared with previous studies, the proposed model can efficiently extract and integrate the spatiotemporal relationships between different modes and does not require manual selection of representative frames. In recent years, the Vision Transformer has received much attention in the field of computer vision, and when combined with convolutional neural networks, hybrid transformers can also perform well on small datasets. This study designed a novel deep learning architecture based on a hybrid transformer, called TransEBUS. By adding learnable parameters in the temporal dimension, TransEBUS is able to extract spatiotemporal features from a limited amount of data. In addition, we designed a two-stream module to integrate information from three different imaging modes of EBUS. Furthermore, we applied contrastive learning when training TransEBUS, enabling it to learn discriminative representations of benign and malignant mediastinal lesions. The results show that TransEBUS achieved a diagnostic accuracy of 82% and an area under the curve of 0.8812 on the test dataset, outperforming other methods. The results also show that several models improve their performance when the two-stream module is incorporated. Our proposed system has shown its potential to help physicians distinguish benign from malignant mediastinal lesions, thereby ensuring the accuracy of EBUS examination.
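
Below is a minimal PyTorch sketch of the two mechanisms highlighted in this abstract: a learnable parameter added along the temporal (frame) dimension, and a two-stream fusion of features from different EBUS imaging modes. The module names, feature dimensions, number of streams, and the concatenate-then-project fusion are illustrative assumptions, not the authors' TransEBUS implementation.

# Illustrative sketch only; not the published TransEBUS architecture.
import torch
import torch.nn as nn


class TemporalEmbedding(nn.Module):
    """Adds a learnable parameter along the temporal (frame) dimension."""

    def __init__(self, num_frames, dim):
        super().__init__()
        # One learnable vector per frame position, broadcast over the batch.
        self.pos = nn.Parameter(torch.zeros(1, num_frames, dim))

    def forward(self, x):
        # x: (batch, num_frames, dim) sequence of per-frame features
        return x + self.pos


class TwoStreamFusion(nn.Module):
    """Fuses per-frame features from separate imaging-mode streams."""

    def __init__(self, dim, num_streams=2):
        super().__init__()
        self.proj = nn.Linear(num_streams * dim, dim)

    def forward(self, streams):
        # Each stream: (batch, num_frames, dim); concatenate, then project back.
        return self.proj(torch.cat(streams, dim=-1))


if __name__ == "__main__":
    batch, frames, dim = 2, 16, 256
    grayscale = torch.randn(batch, frames, dim)   # e.g. B-mode features
    doppler = torch.randn(batch, frames, dim)     # e.g. a second imaging mode
    fused = TwoStreamFusion(dim)([grayscale, doppler])
    fused = TemporalEmbedding(frames, dim)(fused)
    print(fused.shape)  # torch.Size([2, 16, 256])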

2.
Heliyon; 9(5): e16060, 2023 May.
Article in English | MEDLINE | ID: mdl-37215788

ABSTRACT

This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism (PE) lesion areas in computed tomography pulmonary angiogram (CTPA) images. In previous studies, PE CTPA image segmentation methods were trained entirely by supervised learning; when CTPA images come from a different hospital, such supervised models need to be retrained and the new images need to be relabeled. Therefore, this study proposed a semi-supervised learning method that makes the model applicable to different datasets through the addition of a small number of unlabeled images. By training the model with both labeled and unlabeled images, the accuracy on unlabeled images was improved and the labeling cost was reduced. Our proposed semi-supervised segmentation model consists of a segmentation network and a discriminator network. We feed feature information generated by the encoder of the segmentation network to the discriminator so that it can learn the similarity between the predicted label and the ground-truth label. A modified HRNet-based architecture was used as the segmentation network; it maintains higher-resolution feature maps during the convolutional operations, which improves the prediction of small PE lesion areas. We used a labeled open-source dataset and an unlabeled National Cheng Kung University Hospital (NCKUH) dataset (IRB number: B-ER-108-380) to train the semi-supervised learning model, and the resulting mean intersection over union (mIoU), Dice score, and sensitivity reached 0.3510, 0.4854, and 0.4253, respectively, on the NCKUH dataset. We then fine-tuned and tested the model with a small number of unlabeled PE CTPA images from a China Medical University Hospital (CMUH) dataset (IRB number: CMUH110-REC3-173). Comparing the results of our semi-supervised model with those of the supervised model, the mIoU, Dice score, and sensitivity improved from 0.2344, 0.3325, and 0.3151 to 0.3721, 0.5113, and 0.4967, respectively. In conclusion, our semi-supervised model can improve accuracy on other datasets and reduce the labor cost of labeling, using only a small number of unlabeled images for fine-tuning.
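
Below is a minimal PyTorch sketch of the adversarial setup described in this abstract: a segmentation network whose encoder features are concatenated with a segmentation mask and passed to a discriminator, so the discriminator judges masks in the context of image features. The simple convolutional encoder/decoder and all layer sizes are illustrative assumptions; the paper's segmentation network is HRNet-based and its discriminator design may differ.

# Illustrative sketch only; not the published feature-enhanced model.
import torch
import torch.nn as nn


class SegNet(nn.Module):
    """Toy segmentation network returning a mask and its encoder features."""

    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(feat_ch, 1, 1)  # per-pixel PE probability

    def forward(self, x):
        feats = self.encoder(x)
        return torch.sigmoid(self.decoder(feats)), feats


class Discriminator(nn.Module):
    """Scores whether a mask looks predicted or real, conditioned on the
    segmentation encoder's feature map (the 'feature-enhanced' idea)."""

    def __init__(self, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, stride=2, padding=1),
        )

    def forward(self, mask, feats):
        # Concatenate mask and encoder features along the channel dimension.
        return self.net(torch.cat([mask, feats], dim=1))


if __name__ == "__main__":
    ctpa = torch.randn(2, 1, 128, 128)   # e.g. unlabeled CTPA slices
    seg, disc = SegNet(), Discriminator()
    pred_mask, feats = seg(ctpa)
    realism = disc(pred_mask, feats)     # adversarial signal for the unlabeled branch
    print(pred_mask.shape, realism.shape)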
