Results 1 - 2 of 2
1.
IEEE Trans Med Imaging; 41(9): 2273-2284, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35324437

ABSTRACT

Capturing long-range dependencies and restoring the spatial information of down-sampled feature maps are central to encoder-decoder networks for medical image segmentation. U-Net-based methods use feature fusion to alleviate these two problems, but the global feature extraction and spatial information recovery abilities of U-Net remain insufficient. In this paper, we propose a Global Feature Reconstruction (GFR) module to efficiently capture global context features and a Local Feature Reconstruction (LFR) module to dynamically up-sample features. For the GFR module, we first extract global features with category representation from the feature map, then use the different-level global features to reconstruct the features at each location. The GFR module establishes a connection between each pair of feature elements across the entire space from a global perspective and transfers semantic information from the deep layers to the shallow layers. For the LFR module, we use low-level feature maps to guide the up-sampling of high-level feature maps; specifically, we reconstruct features from local neighborhoods to transfer spatial information. Based on the encoder-decoder architecture, we propose a Global and Local Feature Reconstruction Network (GLFRNet), in which the GFR modules serve as skip connections and the LFR modules constitute the decoder path. The proposed GLFRNet is applied to four different medical image segmentation tasks and achieves state-of-the-art performance.
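The attention-style pooling idea behind a GFR-like module can be sketched as follows: project the feature map onto category attention maps, pool one global vector per category, then redistribute those global vectors to every spatial location by affinity. This is a minimal NumPy illustration of that general pattern, not the paper's actual GFR design; the projection weights (`W_cls`) are random placeholders where a trained network would use learned 1x1 convolutions.

```python
import numpy as np

def global_feature_reconstruction(feat, num_classes, rng):
    """Sketch of category-wise global pooling and redistribution.

    feat: (C, H, W) feature map from one encoder stage.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                       # (C, N)

    # 1) Category attention: project features to K class maps,
    #    softmax over the spatial positions of each map.
    W_cls = rng.standard_normal((num_classes, C)) * 0.01   # placeholder weights
    logits = W_cls @ x                               # (K, N)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)

    # 2) Global features with category representation:
    #    attention-weighted pooling gives one vector per category.
    global_feats = attn @ x.T                        # (K, C)

    # 3) Reconstruction: redistribute the global vectors to each location,
    #    weighted by that location's affinity to each category.
    sim = x.T @ global_feats.T                       # (N, K)
    sim = np.exp(sim - sim.max(axis=1, keepdims=True))
    sim /= sim.sum(axis=1, keepdims=True)
    recon = (sim @ global_feats).T                   # (C, N)
    return recon.reshape(C, H, W)
```

Because every output location is a mixture of the K pooled vectors, each pair of positions is connected through the shared global features, which is one way to realize the "connection between each pair of feature elements" described above.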


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Semantics
2.
Biomed Opt Express; 12(10): 6529-6544, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34745754

ABSTRACT

Accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is crucial for the analysis of many retinal diseases, such as glaucoma screening and diagnosis and atrophy segmentation. Owing to domain shift between datasets acquired with different devices and imaging modes, and to inadequate training on small-sample datasets, existing deep-learning-based OD and OC segmentation networks generalize poorly across fundus image datasets. In this paper, adopting for the first time a mixed training strategy based on different datasets, we propose an encoder-decoder based general OD and OC segmentation network (named GDCSeg-Net) with a newly designed multi-scale weight-shared attention (MSA) module and a densely connected depthwise separable convolution (DSC) module to effectively overcome these two problems. Experimental results show that the proposed GDCSeg-Net is competitive with other state-of-the-art methods on five public fundus image datasets: REFUGE, MESSIDOR, RIM-ONE-R3, Drishti-GS and IDRiD.
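The DSC module builds on depthwise separable convolution, which factors a standard convolution into a per-channel spatial filter (depthwise) followed by a 1x1 channel-mixing step (pointwise). The following is a minimal NumPy sketch of that factorization alone, not of the paper's densely connected variant; stride 1 and 'same' padding are assumed.

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_k):
    """Depthwise then pointwise convolution, stride 1, 'same' padding.

    x:       (C_in, H, W) input feature map.
    depth_k: (C_in, k, k) one spatial filter per input channel.
    point_k: (C_out, C_in) 1x1 channel-mixing weights.
    """
    C_in, H, W = x.shape
    k = depth_k.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))

    # Depthwise: each channel is convolved with its own k x k filter.
    dw = np.zeros_like(x)
    for c in range(C_in):
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * depth_k[c])

    # Pointwise: a 1x1 convolution mixes the channels at every position.
    return np.tensordot(point_k, dw, axes=([1], [0]))  # (C_out, H, W)
```

The factorization uses k²·C_in + C_in·C_out weights instead of the k²·C_in·C_out of a standard convolution, which is one reason such modules suit training on small-sample datasets.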
