Results 1 - 3 of 3
1.
Heliyon ; 9(12): e22663, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38076196

ABSTRACT

Accurate segmentation of skin lesions is challenging because the task is strongly influenced by factors such as lesion location, shape, and scale. In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in automated medical image segmentation. However, existing CNNs often fail to highlight relevant features and preserve local features, which limits their application in clinical decision-making. This paper proposes a CNN with an added attention mechanism (EA-Net) for more accurate medical image segmentation. EA-Net is built on the U-Net framework. Specifically, we add a pixel-level attention module (PA) to the encoder to preserve local image features during downsampling, making the feature maps passed to the decoder more relevant to the ground truth. We also add a spatial multi-scale attention module (SA) after the decoder to increase the spatial weight of feature maps that are more relevant to the ground truth, thereby reducing the gap between the output and the ground truth. We conducted extensive segmentation experiments on skin lesion images from the ISIC 2017 and ISIC 2018 datasets. Compared to U-Net, the proposed EA-Net improves the average Dice score by 1.94% and 5.38% on ISIC 2017 and ISIC 2018, respectively; the IoU increases by 2.69% and 8.31%, and the ASSD decreases by 0.3783 and 0.5432 pixels, indicating superior segmentation performance. EA-Net achieves better results when the skin lesion image has indistinct boundaries and the segmentation region contains interfering factors, demonstrating that adding an attention mechanism to the encoder and applying a comprehensive attention scheme improve neural-network performance in skin lesion image segmentation.
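
The abstract names two attention blocks (PA in the encoder, SA after the decoder) but not their internals. The PyTorch sketch below shows one plausible reading, assuming PA is a 1x1-convolution pixel-wise gate and SA is a multi-scale spatial-attention head; the module designs, shapes, and dilation rates are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the two attention blocks described in the abstract.
    # Internals are assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class PixelAttention(nn.Module):
        """PA: re-weights each pixel of an encoder feature map before downsampling."""
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.gate(x)  # per-pixel gating preserves local detail

    class SpatialMultiScaleAttention(nn.Module):
        """SA: fuses spatial attention maps computed at several dilation rates."""
        def __init__(self, channels, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv2d(channels, 1, kernel_size=3, padding=d, dilation=d)
                 for d in dilations]
            )
            self.fuse = nn.Conv2d(len(dilations), 1, kernel_size=1)

        def forward(self, x):
            att = torch.cat([b(x) for b in self.branches], dim=1)
            att = torch.sigmoid(self.fuse(att))  # single spatial weight map
            return x * att

    # Usage sketch: wrap each encoder stage output with PA before pooling,
    # and apply SA to the decoder output before the final segmentation head.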

2.
Heliyon ; 9(11): e22536, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38034799

ABSTRACT

Background: Statistics show that each year more than 100,000 patients die from brain tumors. Because medical lesions have diverse morphology, hazy boundaries, and unbalanced categories, segmentation of brain tumors poses significant challenges. Purpose: In this paper, we present EAV-UNet, a network designed to accurately detect lesion regions by optimizing feature extraction, using automatic segmentation to locate anomalous regions, and strengthening the network structure. We focus on segmenting lesion regions, especially in cases where tumor margins are hazy. Methods: The VGG-19 network is incorporated into the encoding stage of the U-Net, replacing the original U-Net encoder and yielding a deeper network. A CBAM (Convolutional Block Attention Module) is integrated into the decoder to augment and strengthen feature information. Additionally, an edge detection module is added to the encoder to extract edge information from the image, which is passed to the decoder to aid in reconstructing the original image. Results: A thorough analysis of the experimental data shows major improvements in all evaluation metrics with the proposed EAV-UNet. In particular, for low-contrast images with blurry lesion edges, EAV-UNet consistently produces predictions that remain very close to the ground truth. On Dataset 1, the method reduced the Hausdorff distance to 1.82, achieved an F1 score of 96.1%, and attained a precision of 93.2%. On Dataset 2, it obtained an F1 score of 76.8%, a precision of 85.3%, and a Hausdorff distance reduced to 1.31. On Dataset 3, it achieved a Hausdorff distance reduced to 2.30, an F1 score of 86.9%, and a precision of 95.3%. Conclusions: We conducted extensive segmentation experiments on several brain tumor datasets. We refined the network architecture by employing smaller convolutional kernels, and we integrated attention modules and an edge enhancement module to reinforce edge information and boost attention scores, further improving segmentation accuracy.
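
For reference, CBAM conventionally combines channel attention followed by spatial attention. The sketch below shows a standard CBAM block of the kind the abstract places in the decoder, plus a fixed Sobel filter standing in for the edge-detection branch; both are assumptions about the internals rather than the authors' exact layers.

    # Illustrative CBAM block and a Sobel-based edge branch (assumed internals).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CBAM(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels),
            )
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            b, c, _, _ = x.shape
            # Channel attention: shared MLP over avg- and max-pooled descriptors.
            avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
            mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
            x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
            # Spatial attention: conv over channel-wise mean and max maps.
            s = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True)[0]], dim=1)
            return x * torch.sigmoid(self.spatial(s))

    def sobel_edges(img):
        """Hypothetical edge branch: Sobel gradient magnitude of a 1-channel image."""
        kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
        ky = kx.transpose(2, 3)
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)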

3.
Front Neurosci ; 16: 920981, 2022.
Article in English | MEDLINE | ID: mdl-36117623

ABSTRACT

Existing brain imaging modality transfer techniques translate data from one modality (domain) to another. In clinical diagnosis, multiple modalities can be acquired within the same scanning field, so it is more beneficial to synthesize missing modality data by exploiting the diverse characteristics of multimodal data. We therefore introduce a self-supervised learning cycle-consistent generative adversarial network (BSL-GAN) for brain imaging modality transfer. The framework uses multi-branch inputs, which enables it to learn the diverse characteristics of multimodal data. In addition, supervision signals are mined from large-scale unlabeled data by constructing auxiliary tasks; training the network on this constructed supervision not only ensures similarity between the input and output modality images, but also learns representations that are valuable for downstream tasks.
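
The cycle-consistency idea referenced by the abstract can be summarized as an L1 round-trip loss between two modality generators. The sketch below is a minimal illustration only: G_ab, G_ba, the toy 1x1-conv "generators", and the loss weight lam are placeholders, and the multi-branch inputs and self-supervised auxiliary tasks specific to BSL-GAN are not reproduced.

    # Minimal cycle-consistency objective for a cycle-consistent GAN (illustrative).
    import torch
    import torch.nn as nn

    def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
        """L1 reconstruction after full A->B->A and B->A->B round trips."""
        l1 = nn.L1Loss()
        rec_a = G_ba(G_ab(real_a))  # translate modality A to B, then back to A
        rec_b = G_ab(G_ba(real_b))  # translate modality B to A, then back to B
        return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

    # Call pattern with toy 1x1-conv generators (not real translation networks):
    if __name__ == "__main__":
        G_ab = nn.Conv2d(1, 1, kernel_size=1)
        G_ba = nn.Conv2d(1, 1, kernel_size=1)
        a = torch.randn(2, 1, 64, 64)
        b = torch.randn(2, 1, 64, 64)
        print(cycle_consistency_loss(G_ab, G_ba, a, b).item())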
