1.
Heliyon ; 9(11): e22536, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38034799

ABSTRACT

Background: Each year, more than 100,000 patients die from brain tumors. Because lesions in medical images vary widely in morphology, have hazy boundaries, and often belong to imbalanced categories, brain tumor segmentation remains challenging. Purpose: We present EAV-UNet, a network designed to accurately segment lesion regions by optimizing feature extraction, strengthening the architecture, and automatically detecting anomalous regions. We focus in particular on lesions whose tumor margins are blurred. Methods: The VGG-19 backbone replaces the U-Net encoder, yielding a deeper network, and a CBAM (Convolutional Block Attention Module, which applies channel and spatial attention) is integrated into the decoder to enrich feature information. In addition, an edge detection module is added to the encoder to extract edge information from the image, which is passed to the decoder to aid reconstruction of the original image. Results: A thorough analysis of the experimental data shows that the proposed EAV-UNet improves all evaluation metrics; for low-contrast images with blurred lesion edges, its predictions remain very close to the original images. On Dataset 1 it achieved a precision of 93.2%, an F1 score of 96.1%, and reduced the Hausdorff distance to 1.82. On Dataset 2 it achieved a precision of 85.3%, an F1 score of 76.8%, and a Hausdorff distance of 1.31. On Dataset 3 it achieved a precision of 95.3%, an F1 score of 86.9%, and a Hausdorff distance of 2.30. Conclusions: We conducted extensive segmentation experiments on multiple brain tumor datasets. Our strategy refines the network architecture with smaller convolutional kernels and integrates attention modules and an edge enhancement module to reinforce edge information and boost attention scores, further improving segmentation accuracy.
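To make the decoder enhancement concrete, the following is a minimal sketch of a CBAM-style block (channel attention followed by spatial attention) of the kind the abstract describes. The module names, reduction ratio, and kernel size are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # One shared MLP re-weights channels from pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Aggregate spatial information with average and max pooling.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool along the channel axis and learn a 2-D attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to a decoder feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

In a U-Net-style decoder such a block would typically be applied to each upsampled feature map before the next convolution stage; where exactly EAV-UNet places it is not specified here.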

2.
Front Neurosci ; 16: 1058487, 2022.
Article in English | MEDLINE | ID: mdl-36452330

ABSTRACT

Brain imaging technology has recently attracted growing attention in medicine, and MRI in particular plays a vital role in the clinical diagnosis and lesion analysis of brain diseases. Different MR sequences provide complementary information and help doctors make accurate diagnoses, but acquiring them is costly. Moreover, many image-to-image synthesis methods in the medical field rely on supervised learning and therefore require labeled datasets, which are often difficult to obtain. We therefore propose an unsupervised generative adversarial network with adaptive normalization (AN-GAN) for synthesizing T2-weighted MR images from rapidly scanned diffusion-weighted imaging (DWI) MR images. In contrast to existing methods, deep semantic information is extracted from the high-frequency content of the original sequence images and added to the feature maps of the deconvolution layers as a modality mask vector. This image fusion operation yields better feature maps and guides GAN training. Furthermore, to prevent common normalization layers from washing out semantic information, we introduce AN, a conditional normalization layer that modulates the activations using the fused feature map. Experimental results show that our synthesized T2 images have better perceptual quality and finer detail than those of state-of-the-art methods.
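The idea of a conditional normalization layer that modulates activations with a fused feature map can be sketched as follows. This is only an assumption-laden illustration in the spirit of the AN layer described above: the choice of instance normalization, the hidden width, and the per-pixel scale/bias parameterization are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveNorm(nn.Module):
    def __init__(self, channels: int, fused_channels: int, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization of the incoming activations.
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # Predict a per-pixel scale (gamma) and bias (beta) from the fused feature map.
        self.shared = nn.Sequential(
            nn.Conv2d(fused_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, x, fused):
        # Resize the fused feature map to the activation resolution, then modulate.
        fused = F.interpolate(fused, size=x.shape[2:], mode="nearest")
        h = self.shared(fused)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        return self.norm(x) * (1 + gamma) + beta

Modulating after normalization is what lets the semantic content of the fused map survive the normalization step, which is the motivation the abstract gives for replacing common normalization layers.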

3.
Front Neurosci ; 16: 920981, 2022.
Article in English | MEDLINE | ID: mdl-36117623

ABSTRACT

Current brain imaging modality transfer techniques translate data from one modality in one domain to another. In clinical diagnosis, however, multiple modalities are typically acquired over the same scanning field, and it is more beneficial to synthesize a missing modality by exploiting the diverse characteristics of the available multimodal data. We therefore introduce a self-supervised, cycle-consistent generative adversarial network (BSL-GAN) for brain imaging modality transfer. The framework takes multi-branch input, which enables it to learn the diverse characteristics of multimodal data. In addition, supervision signals are mined from large-scale unlabeled data through auxiliary tasks, and the network is trained on this constructed supervision, which not only enforces similarity between the input and output modality images but also learns representations that are valuable for downstream tasks.
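The cycle-consistency constraint that ties input and output modalities together in such a framework can be written as a simple round-trip reconstruction loss. The sketch below is generic CycleGAN-style code, not BSL-GAN's exact objective; the generator names G_ab/G_ba and the L1 weight are illustrative assumptions.

import torch
import torch.nn as nn

def cycle_consistency_loss(G_ab: nn.Module, G_ba: nn.Module,
                           real_a: torch.Tensor, real_b: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    """L1 reconstruction error after full A->B->A and B->A->B round trips."""
    l1 = nn.L1Loss()
    fake_b = G_ab(real_a)   # translate modality A to modality B
    rec_a = G_ba(fake_b)    # translate back to modality A
    fake_a = G_ba(real_b)   # translate modality B to modality A
    rec_b = G_ab(fake_a)    # translate back to modality B
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))

In practice this term is added to the adversarial losses of the two discriminators; the self-supervised auxiliary tasks described in the abstract would contribute further loss terms on top of it.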
