1.
BMC Bioinformatics ; 24(1): 285, 2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37464322

ABSTRACT

Deep learning-based medical image segmentation has made great progress over the past decade. Many novel transformer-based segmentation networks have been proposed to address the difficulty that convolutional neural networks (CNNs) have in modelling long-range dependencies and global context. However, these methods usually replace CNN-based blocks with transformer-based structures, which weakens local feature extraction and requires large amounts of training data. Moreover, they pay little attention to edge information, which is essential in medical image segmentation. To address these problems, we propose a new network structure, P-TransUNet, which combines an efficient P-Transformer with a fusion module: the former extracts distance-related long-range dependencies, the latter local information, and the two are fused into combined features. In addition, we introduce an edge loss during training to focus the network's attention on the edge of the lesion area and improve segmentation performance. Extensive experiments on four medical image segmentation tasks demonstrate the effectiveness of P-TransUNet and show that it outperforms other state-of-the-art methods.
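The abstract does not specify the exact edge-loss formulation, so the following is only a minimal pure-Python sketch of one common edge-aware approach: build a 4-neighbour edge map of the ground-truth mask and up-weight those pixels in a binary cross-entropy loss. The function names `edge_map` and `edge_weighted_bce` and the `edge_weight` parameter are illustrative assumptions, not the paper's API.

```python
import math

def edge_map(mask):
    """Return 1 where a pixel's label differs from any 4-neighbour (lesion edge)."""
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] != mask[i][j]:
                    edges[i][j] = 1
                    break
    return edges

def edge_weighted_bce(pred, target, edge_weight=2.0):
    """Binary cross-entropy in which edge pixels receive a larger weight."""
    edges = edge_map(target)
    total, norm = 0.0, 0.0
    for i in range(len(target)):
        for j in range(len(target[0])):
            w = 1.0 + (edge_weight - 1.0) * edges[i][j]
            p = min(max(pred[i][j], 1e-7), 1 - 1e-7)  # clamp for numerical safety
            total += -w * (target[i][j] * math.log(p)
                           + (1 - target[i][j]) * math.log(1 - p))
            norm += w
    return total / norm
```

In practice such a loss would be implemented with tensor operations and added to the main segmentation loss; the sketch only shows how edge pixels can be emphasized.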


Subject(s)
Electric Power Supplies , Neural Networks, Computer , Image Processing, Computer-Assisted
2.
BMC Bioinformatics ; 24(1): 85, 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36882688

ABSTRACT

Although various methods based on convolutional neural networks have improved biomedical image segmentation to meet the precision requirements of medical imaging tasks, deep learning-based medical image segmentation still faces two problems: (1) the discriminative features of lesion regions, which vary in size and shape, are difficult to extract during encoding; (2) spatial and semantic information about lesion regions is difficult to fuse effectively during decoding because of redundant information and the semantic gap. In this paper, we apply attention-based Transformers in both the encoder and decoder stages, using multi-head self-attention to improve feature discrimination at the level of spatial detail and semantic location. The resulting architecture, EG-TransUNet, comprises three transformer-improved modules: a progressive enhancement module, channel spatial attention, and semantic guidance attention. EG-TransUNet captures object variability and improves results on different biomedical datasets, outperforming other methods on two popular colonoscopy datasets (Kvasir-SEG and CVC-ClinicDB) with mDice scores of 93.44% and 95.26%, respectively. Extensive experiments and visualization results demonstrate that our method advances performance on five medical segmentation datasets with better generalization ability.
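The mDice scores reported above are means of the Dice similarity coefficient over a dataset's images. As a point of reference, a minimal sketch of the per-image Dice coefficient on flattened binary masks (the smoothing term `eps` is a common convention, not taken from the paper) might look like:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for flattened binary masks.

    Averaging this value over all images in a test set gives mDice.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)
```

For example, a prediction covering two pixels where the ground truth covers one overlapping pixel yields Dice = 2·1 / (2 + 1) = 2/3.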


Subject(s)
Electric Power Supplies , Neural Networks, Computer , Semantics
3.
Z Gastroenterol ; 60(12): 1770-1778, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35697062

ABSTRACT

BACKGROUND AND STUDY AIM: Chronic atrophic gastritis plays an important role in the development of gastric cancer. Deep learning is gradually being introduced into medicine, and how best to apply a convolutional neural network (CNN) to the diagnosis of chronic atrophic gastritis remains a research hotspot. This study aimed to improve CNN performance in diagnosing chronic atrophic gastritis by constructing and evaluating a network structure based on the characteristics of gastroscopic images. METHODS: Three endoscopists reviewed endoscopic images of the gastric antrum from the Gastroscopy Image Database of Zhongnan Hospital and labelled eligible images according to pathological results. Two recently proposed modules were introduced to construct the Multi-scale with Attention net (MWA-net), designed for highly similar medical images. After training on the training sets, the diagnostic ability of MWA-net was evaluated against other deep learning models and endoscopists with varying degrees of expertise. RESULTS: In total, 5,159 gastric antrum images from 2,240 patients were used to train and test MWA-net. Compared with the direct application of well-known networks, MWA-net achieved the best performance (accuracy 92.13%), an increase of 1.80% over ResNet. The suspicious lesions indicated by the network were consistent with the experts' conclusions. The network's sensitivity and specificity for gastric atrophy diagnosis were 90.19% and 94.51%, respectively, both higher than those of the experts. CONCLUSIONS: The proposed MWA-net can identify highly similar images of chronic atrophic gastritis and performs better than other well-known networks. This work can further reduce gastroscopists' workload, simplify the diagnostic process, and provide medical assistance to more residents.
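The sensitivity and specificity reported in RESULTS are standard binary-classification metrics computed from the confusion counts of the diagnostic labels. A minimal sketch of that computation (the function name and list-based interface are illustrative, not from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    y_true / y_pred are parallel lists of binary labels,
    1 = atrophy present, 0 = atrophy absent.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity measures how many true atrophy cases the network catches; specificity measures how many healthy cases it correctly clears, which is why the two are reported together.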


Subject(s)
Deep Learning , Humans , Workload