Results 1-6 of 6
1.
Phys Med Biol; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959911

ABSTRACT

Objective. In recent years, convolutional neural networks, which typically focus on extracting spatial-domain features, have shown limitations in learning global contextual information. The frequency domain, however, can offer a global perspective that spatial-domain methods often struggle to capture. To address this limitation, we propose FreqSNet, which leverages both frequency and spatial features for medical image segmentation. Approach. First, we propose a Frequency-Space Representation Aggregation Block (FSRAB) to replace conventional convolutions. FSRAB contains three frequency-domain branches that capture global frequency information along different axial combinations, while a convolutional branch exchanges information across channels in local spatial features. Second, the Multiplex Expansion Attention Block (MEAB) extracts long-range dependency information using dilated convolutional blocks while suppressing irrelevant information via attention mechanisms. Finally, the introduced Feature Integration Block enhances feature representation by integrating semantic features that fuse spatial and channel positional information. Main results. We validated our method on five public datasets: BUSI, CVC-ClinicDB, CVC-ColonDB, ISIC-2018, and Luna16. On these datasets, our method achieved Intersection over Union (IoU) scores of 75.46%, 87.81%, 79.08%, 84.04%, and 96.99%, and Hausdorff Distance (HD) values of 22.22 mm, 13.20 mm, 13.08 mm, 13.51 mm, and 5.22 mm, respectively. Compared with other state-of-the-art methods, FreqSNet achieves better segmentation results. Significance. Our method effectively combines frequency-domain information with spatial-domain features, enhancing segmentation performance and generalization capability in medical image segmentation tasks.
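To make the frequency-branch idea concrete, below is a minimal PyTorch sketch of a block that pairs a global frequency-domain filter with a local convolutional branch. The class name, layer sizes, and single-branch frequency filtering are illustrative assumptions, not the authors' FSRAB implementation.

import torch
import torch.nn as nn

class FreqSpatialBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Spatial branch: local cross-channel interaction.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Frequency branch: a learnable per-channel complex scaling in the
        # Fourier domain, giving every output pixel a global receptive field.
        self.freq_weight = nn.Parameter(torch.ones(channels, 1, 1, dtype=torch.cfloat))
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x):
        h, w = x.shape[-2:]
        spec = torch.fft.rfft2(x, norm="ortho")        # (B, C, H, W//2+1), complex
        spec = spec * self.freq_weight                 # global frequency filtering
        freq = torch.fft.irfft2(spec, s=(h, w), norm="ortho")
        return self.norm(self.spatial(x) + freq)       # fuse local and global cues

x = torch.randn(2, 16, 64, 64)
print(FreqSpatialBlock(16)(x).shape)  # torch.Size([2, 16, 64, 64])

Because a pointwise product in the Fourier domain corresponds to a circular convolution over the whole image, the frequency branch captures global context at negligible parameter cost.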

2.
IEEE J Biomed Health Inform; 27(9): 4317-4328, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37314916

ABSTRACT

Accurate segmentation of COVID-19 lesions in lung CT images can aid patient screening and diagnosis. However, the blurred appearance and the inconsistent shape and location of lesion areas pose a great challenge to this vision task. To tackle this issue, we propose a multi-scale representation learning network (MRL-Net) that integrates a CNN with a Transformer via two bridge units: Dual Multi-interaction Attention (DMA) and Dual Boundary Attention (DBA). First, to obtain multi-scale local detail and global contextual information, we combine low-level geometric information and high-level semantic features extracted by the CNN and the Transformer, respectively. Second, DMA is proposed to fuse the local detailed features of the CNN and the global contextual information of the Transformer for enhanced feature representation. Finally, DBA makes our network focus on the boundary features of the lesion, further strengthening representation learning. Extensive experimental results show that MRL-Net is superior to current state-of-the-art methods and achieves better COVID-19 image segmentation performance.
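As a rough illustration of bridging CNN and Transformer features, the sketch below gates each branch's feature map with pooled context from the other before merging them. The class name and gating scheme are assumptions; the paper's DMA unit is more elaborate.

import torch
import torch.nn as nn

class DualFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Each branch gates the other: global context reweights local-detail
        # channels, and local detail reweights global-context channels.
        self.gate_cnn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_trans = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_cnn, f_trans):
        a_cnn = f_cnn * self.gate_cnn(self.pool(f_trans))      # global gates local
        a_trans = f_trans * self.gate_trans(self.pool(f_cnn))  # local gates global
        return self.merge(torch.cat([a_cnn, a_trans], dim=1))

f_cnn, f_trans = torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64)
print(DualFusion(32)(f_cnn, f_trans).shape)  # torch.Size([2, 32, 64, 64])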


Subjects
COVID-19, Humans, Electric Power Supplies, Semantics, X-Ray Computed Tomography, Lung, Computer-Assisted Image Processing
3.
Entropy (Basel); 24(12), 2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36554228

ABSTRACT

Single-modality medical images often do not contain enough valid information to meet the requirements of clinical diagnosis, and diagnostic efficiency is limited when multiple images must be inspected at the same time. Image fusion is a technique that combines functional modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) to supply complementary information. Fusing two anatomical images (such as CT-MRI) is likewise often required in place of a single MRI, and the fused images can improve the efficiency and accuracy of clinical diagnosis. To achieve high-quality, high-resolution, and detail-rich fusion without artificial priors, this paper proposes an unsupervised deep-learning image fusion framework named the back-project dense generative adversarial network (BPDGAN). In particular, we construct a novel network based on the back-project dense block (BPDB) and the convolutional block attention module (CBAM). The BPDB effectively mitigates the impact of black backgrounds on image content, while the CBAM improves the performance of BPDGAN on texture and edge information. Qualitative and quantitative experiments demonstrate the superiority of BPDGAN: on the AG, EI, Qabf, and Qcv metrics, it outperforms state-of-the-art comparisons by approximately 19.58%, 14.84%, 10.40%, and 86.78%, respectively.
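CBAM itself is a standard, widely reused module; below is a compact sketch of its two stages (channel attention followed by spatial attention), with illustrative layer sizes rather than the paper's exact configuration.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Shared MLP for channel attention, implemented with 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: "what" to emphasize, from avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: "where" to look, from channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(32)(torch.randn(2, 32, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])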

4.
Comput Biol Med; 149: 106065, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36081225

ABSTRACT

To detect COVID-19 effectively, a multiscale class residual attention (MCRA) network is proposed for chest X-ray (CXR) image classification. First, to overcome the data shortage and improve the robustness of our network, pixel-level mixing of local image regions is introduced to achieve data augmentation and reduce noise. Second, a multi-scale fusion strategy is adopted to extract global contextual information at different scales and enhance semantic representation. Finally, class residual attention is employed to generate spatial attention for each class, which avoids inter-class interference and enhances related features to further improve COVID-19 detection. Experimental results show that our network achieves superior diagnostic performance on the COVIDx dataset, with accuracy, PPV, sensitivity, specificity, and F1-score of 97.71%, 96.76%, 96.56%, 98.96%, and 96.64%, respectively; moreover, heat maps endow our deep model with a degree of interpretability.
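A minimal sketch of pixel-level mixing of local regions between two images is given below, in the style of CutMix; the rectangular patch policy and fixed area fraction are assumptions, not the paper's exact augmentation scheme.

import torch

def mix_local_region(img_a, img_b, frac=0.3):
    """Paste a random rectangular region of img_b into img_a (CHW tensors)."""
    _, h, w = img_a.shape
    rh, rw = int(h * frac), int(w * frac)
    top = torch.randint(0, h - rh + 1, (1,)).item()
    left = torch.randint(0, w - rw + 1, (1,)).item()
    mixed = img_a.clone()
    mixed[:, top:top + rh, left:left + rw] = img_b[:, top:top + rh, left:left + rw]
    # Label weight proportional to the surviving pixel area, as in CutMix.
    lam = 1.0 - (rh * rw) / (h * w)
    return mixed, lam

a, b = torch.rand(1, 224, 224), torch.rand(1, 224, 224)
mixed, lam = mix_local_region(a, b)
print(mixed.shape, lam)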


Subjects
COVID-19, Deep Learning, Attention, COVID-19/diagnostic imaging, COVID-19 Testing, Disease Progression, Humans, X-Rays
5.
Med Phys; 49(12): 7583-7595, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35916116

ABSTRACT

PURPOSE: Coronavirus disease 2019 (COVID-19) threatens global health and has inflicted great losses on the economy and society. Computed tomography (CT) image segmentation can help clinicians quickly identify COVID-19-infected regions, and accurate segmentation of infected areas can help screen confirmed cases. METHODS: We designed a segmentation network for COVID-19-infected regions in CT images. First, multilayered features are extracted by a Res2Net backbone. Next, an edge attention module extracts edge features of the infected regions from the low-level feature f2. Then, a carefully designed attention position module (APM) processes the high-level feature f5 to detect infected regions. Finally, we propose a context exploration module consisting of two parallel explore blocks, which removes some false positives and false negatives to reach more accurate segmentation results. RESULTS: Experimental results show that, on the public COVID-19 dataset, the Dice, sensitivity, specificity, $S_\alpha$, $E_\emptyset^{mean}$, and mean absolute error (MAE) of our method are 0.755, 0.751, 0.959, 0.795, 0.919, and 0.060, respectively. Compared with the latest COVID-19 segmentation model, Inf-Net, the Dice similarity coefficient of our model increased by 7.3% and the sensitivity (Sen) increased by 5.9%, while the MAE dropped by 2.2%. CONCLUSIONS: Our method performs well on COVID-19 CT image segmentation and is portable enough to be incorporated into various popular networks. In short, our method can help screen people infected with COVID-19 effectively and save labor for clinicians and radiologists.
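For reference, the two simplest of the reported metrics can be computed as below; this assumes binary masks given as float tensors in {0, 1} and is not tied to the paper's evaluation code.

import torch

def dice(pred, gt, eps=1e-6):
    # Dice similarity: 2|P ∩ G| / (|P| + |G|), with eps guarding empty masks.
    inter = (pred * gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def mae(pred, gt):
    # Mean absolute error between prediction and ground-truth masks.
    return (pred - gt).abs().mean()

pred = (torch.rand(1, 256, 256) > 0.5).float()
gt = (torch.rand(1, 256, 256) > 0.5).float()
print(dice(pred, gt).item(), mae(pred, gt).item())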


Subjects
COVID-19, Humans, COVID-19/diagnostic imaging, Computer-Assisted Image Processing, Radiologists, X-Ray Computed Tomography
6.
Entropy (Basel); 24(1), 2022 Jan 12.
Article in English | MEDLINE | ID: mdl-35052138

ABSTRACT

To recognize small, blurred, and complex traffic signs in natural scenes, a traffic sign detection method based on RetinaNet-NeXt is proposed. First, to ensure dataset quality, the data were cleaned and enhanced to reduce noise. Second, ResNeXt is employed as a novel backbone network to improve the detection accuracy and effectiveness of RetinaNet. Finally, transfer learning and group normalization are adopted to accelerate network training. Experimental results show that the precision, recall, and mAP of our method, compared with the original RetinaNet, are improved by 9.08%, 9.09%, and 7.32%, respectively. Our method can be effectively applied to traffic sign detection.
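Swapping RetinaNet's backbone for a ResNeXt is straightforward with torchvision; the sketch below shows the idea, assuming torchvision >= 0.13 and a hypothetical 43-class traffic-sign setup, not the paper's exact configuration.

import torch
from torchvision.models import ResNeXt50_32X4D_Weights
from torchvision.models.detection import RetinaNet
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Build a ResNeXt-50 FPN backbone with ImageNet weights (transfer learning).
backbone = resnet_fpn_backbone(
    backbone_name="resnext50_32x4d",
    weights=ResNeXt50_32X4D_Weights.IMAGENET1K_V1,
    trainable_layers=3,
)
model = RetinaNet(backbone, num_classes=43)  # 43 classes is an illustrative choice

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])
print(out[0]["boxes"].shape)  # detected boxes for the dummy image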
