Results 1 - 5 of 5
1.
Phys Med Biol ; 68(9), 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37019116

ABSTRACT

Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible and progressive neurodegenerative disease, so early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in the MCI identification task. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. Moreover, many methods focus only on modality-sharable information or modality-specific features and ignore their integration. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract pairs of patches at multiple corresponding positions across the multi-modal neuroimages. In the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that simultaneously preserve modality-sharable and modality-specific representations. In the dependency-aware global representation learning stage, we further capture long-range dependencies among local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy: 0.802, sensitivity: 0.821, specificity: 0.767 in the MCI diagnosis task; accuracy: 0.849, sensitivity: 0.841, specificity: 0.856 in the MCI conversion task) compared with state-of-the-art methods. The proposed classification model shows promising potential to predict MCI conversion and to identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
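
As a rough illustration of the two-stage design above, the PyTorch sketch below assumes MRI/PET patch pairs, reads the sine-cosine fusion as a single learnable angular gate per sub-network (the paper describes three such modules), and stands in a Transformer encoder for the dependency-aware global stage; all of these readings go beyond what the abstract specifies.

```python
import torch
import torch.nn as nn

class SineCosineFusion(nn.Module):
    """Blend two modality features with sin/cos weights (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(dim))  # learnable mixing angle

    def forward(self, f_mri, f_pet):
        return torch.sin(self.theta) * f_mri + torch.cos(self.theta) * f_pet

class DualChannelSubNet(nn.Module):
    """Two modality-specific branches for one patch pair, plus fusion."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(in_ch, dim, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.mri_branch, self.pet_branch = branch(), branch()
        self.fuse = SineCosineFusion(dim)

    def forward(self, patch_mri, patch_pet):
        return self.fuse(self.mri_branch(patch_mri), self.pet_branch(patch_pet))

class MultiLevelFusionNet(nn.Module):
    """Local sub-networks followed by dependency-aware global modeling."""
    def __init__(self, n_patches=27, dim=64, n_classes=2):
        super().__init__()
        self.subnets = nn.ModuleList(DualChannelSubNet(dim=dim)
                                     for _ in range(n_patches))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.global_stage = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, mri_patches, pet_patches):  # lists of (B,1,D,H,W) patches
        local = torch.stack([s(m, p) for s, m, p in
                             zip(self.subnets, mri_patches, pet_patches)], dim=1)
        return self.head(self.global_stage(local).mean(dim=1))
```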


Subjects
Alzheimer Disease , Cognitive Dysfunction , Neurodegenerative Diseases , Humans , Alzheimer Disease/diagnostic imaging , Neuroimaging/methods , Magnetic Resonance Imaging/methods , Cognitive Dysfunction/diagnostic imaging
2.
Comput Methods Programs Biomed ; 230: 107346, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36716637

ABSTRACT

BACKGROUND AND OBJECTIVE: Predicting the malignant potential of breast lesions from breast ultrasound (BUS) images is a crucial component of computer-aided diagnosis systems for breast cancer. However, since breast lesions in BUS images generally have varied shapes, relatively low contrast, and complex textures, accurately identifying their malignant potential remains challenging. METHODS: In this paper, we propose a multi-scale gradational-order fusion framework that takes full advantage of multi-scale representations together with the gradational-order characteristics of BUS images for breast lesion classification. Specifically, we first construct a spatial context aggregation module to generate multi-scale context representations from the original BUS images. Subsequently, the multi-scale representations are efficiently fused in a feature fusion block equipped with special fusion strategies to comprehensively capture the morphological characteristics of breast lesions. To better characterize complex textures and enhance non-linear modeling capability, we further propose an isotropous gradational-order feature module within the feature fusion block to learn and combine multi-order representations. Finally, these multi-scale gradational-order representations are used to predict the malignant potential of breast lesions. RESULTS: The proposed model was evaluated on three open datasets using 5-fold cross-validation. The experimental results (accuracy: 85.32%, sensitivity: 85.24%, specificity: 88.57%, AUC: 90.63% on dataset A; accuracy: 76.48%, sensitivity: 72.45%, specificity: 80.42%, AUC: 78.98% on dataset B) demonstrate that the proposed method achieves promising performance compared with other deep learning-based methods on the BUS classification task. CONCLUSIONS: The proposed method shows promising potential to predict the malignant potential of breast lesions from ultrasound images in an end-to-end manner.
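
A loose sketch of the two modules described above follows; the dilated-convolution reading of the spatial context aggregation module and the element-wise-product reading of the gradational-order features are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialContextAggregation(nn.Module):
    """Multi-scale context via parallel dilated 3x3 convolutions (assumed)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
            for d in dilations)

    def forward(self, x):
        return [torch.relu(p(x)) for p in self.paths]  # one map per scale

class GradationalOrderFusion(nn.Module):
    """Combine first-, second-, and third-order feature interactions."""
    def __init__(self, ch):
        super().__init__()
        self.proj = nn.Conv2d(3 * ch, ch, 1)  # fuse the stacked orders

    def forward(self, f):
        f2 = f * f   # second-order interaction (element-wise)
        f3 = f2 * f  # third-order interaction
        return self.proj(torch.cat([f, f2, f3], dim=1))
```

In this reading, each per-scale map from SpatialContextAggregation would be passed through GradationalOrderFusion before the maps are merged for classification.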


Subjects
Breast Neoplasms , Breast , Female , Humans , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Ultrasonography , Ultrasonography, Mammary , Diagnosis, Computer-Assisted/methods
3.
Comput Methods Programs Biomed ; 215: 106612, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35033757

ABSTRACT

Deep learning methods, especially convolutional neural networks, have advanced breast lesion classification using breast ultrasound (BUS) images. However, constructing a highly accurate classification model remains challenging due to the complex patterns, relatively low contrast, and fuzzy boundaries between lesion regions (i.e., foreground) and the surrounding tissues (i.e., background). Few studies have separated foreground and background to learn domain-specific representations and then fused them to improve model performance. In this paper, we propose a saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using BUS images. Specifically, we first generate saliency maps for the foreground and background via superpixel clustering and multi-scale region grouping. Then, a triple-branch network, comprising two feature extraction branches and a feature aggregation branch, is constructed to learn and fuse discriminative representations under the guidance of the priors provided by the saliency maps. In particular, the two feature extraction branches take the original image and the corresponding saliency map as input to extract foreground- and background-specific representations. A hierarchical feature aggregation branch then receives and fuses the features from different stages of the two extraction branches for lesion classification in a task-oriented manner. The proposed model was evaluated on three datasets using 5-fold cross-validation, and the experimental results demonstrate that it outperforms several state-of-the-art deep learning methods for breast lesion diagnosis using BUS images.
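
The saliency-prior input can be illustrated roughly as below, assuming SLIC superpixels (scikit-image >= 0.19) and a simple intensity-contrast heuristic; the paper's multi-scale region grouping is more elaborate than this, so treat the sketch as an illustration of the input pipeline only.

```python
import numpy as np
from skimage.segmentation import slic

def crude_saliency_map(image: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Score each superpixel by contrast against the global mean intensity.

    Expects a 2-D grayscale array. Darker-than-average regions (typical of
    hypoechoic lesions in BUS images) receive higher foreground saliency;
    the background map is simply the complement.
    """
    labels = slic(image, n_segments=n_segments, compactness=10,
                  channel_axis=None)  # grayscale input, skimage >= 0.19
    saliency = np.zeros_like(image, dtype=float)
    global_mean = image.mean()
    for lab in np.unique(labels):
        mask = labels == lab
        # hypoechoic (dark) superpixels -> high foreground score
        saliency[mask] = max(0.0, global_mean - image[mask].mean())
    s = saliency / (saliency.max() + 1e-8)  # normalize to [0, 1]
    return s                                # background map is 1 - s
```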


Subjects
Neural Networks, Computer , Ultrasonography, Mammary , Cluster Analysis , Female , Humans , Research Design , Ultrasonography
4.
IEEE Trans Med Imaging ; 41(2): 476-490, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34582349

ABSTRACT

Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, the pattern complexity and intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) make lesion segmentation challenging. Although the background contains rich texture information, very few methods have tried to explore and exploit background-salient representations to assist foreground segmentation. Additionally, other characteristics of BUS images, namely 1) low-contrast appearance and blurry boundaries and 2) significant variation in lesion shape and position, further increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. SMU-Net is composed of a main network, an additional middle stream, and an auxiliary network. Specifically, we first generate saliency maps, which incorporate both low-level and high-level image structures, for the foreground and background. These saliency maps are then employed to guide the main network and auxiliary network in learning foreground-salient and background-salient representations, respectively. Furthermore, we devise an additional middle stream consisting of background-assisted fusion, shape-aware, edge-aware, and position-aware units. This stream receives coarse-to-fine representations from the main and auxiliary networks, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale compared with several state-of-the-art deep learning approaches for breast lesion segmentation in ultrasound images.
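
One plausible form of a background-assisted fusion unit from the middle stream is sketched below, assuming the background features act as a sigmoid gate that suppresses background-like responses in the foreground features; the shape-, edge-, and position-aware units are omitted, and the gating mechanism itself is an assumption.

```python
import torch
import torch.nn as nn

class BackgroundAssistedFusion(nn.Module):
    """Fuse foreground/background features via a background-driven gate."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, fg, bg):
        # attenuate foreground features where the background branch fires
        fg_refined = fg * (1.0 - self.gate(bg))
        return torch.relu(self.merge(torch.cat([fg_refined, bg], dim=1)))
```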


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Female , Humans , Image Processing, Computer-Assisted/methods , Ultrasonography , Ultrasonography, Mammary
5.
Med Phys ; 48(8): 4262-4278, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34053092

ABSTRACT

PURPOSE: Breast ultrasound (BUS) image segmentation plays a crucial role in computer-aided diagnosis systems for BUS examination, which help improve the accuracy of breast cancer diagnosis. However, accurate segmentation remains challenging owing to poor image quality and large variations in the sizes, shapes, and locations of breast lesions. In this paper, we propose a new convolutional neural network with coarse-to-fine feature fusion to address these challenges. METHODS: The proposed fusion network consists of an encoder path, a decoder path, and a core fusion stream path (FSP). The encoder path captures context information, and the decoder path is used for localization prediction. The FSP is designed to generate beneficial aggregate feature representations (i.e., various-sized lesion features, aggregated coarse-to-fine information, and high-resolution edge characteristics) from the encoder and decoder paths, which are eventually used for accurate breast lesion segmentation. To better retain boundary information and alleviate the effect of image noise, we input the superpixel image along with the original image to the fusion network. Furthermore, a weighted-balanced loss function was designed to address the problem of lesion regions having different sizes. We then conducted exhaustive experiments on three public BUS datasets to evaluate the proposed network. RESULTS: The proposed method outperformed state-of-the-art (SOTA) segmentation methods on the three public BUS datasets, with average Dice similarity coefficients of 84.71 (±1.07), 83.76 (±0.83), and 86.52 (±1.52), average intersection-over-union values of 76.34 (±1.50), 75.70 (±0.98), and 77.86 (±2.07), average sensitivities of 86.66 (±1.82), 85.21 (±1.98), and 87.21 (±2.51), average specificities of 97.92 (±0.46), 98.57 (±0.19), and 99.42 (±0.21), and average accuracies of 95.89 (±0.57), 97.17 (±0.30), and 98.51 (±0.30). CONCLUSIONS: The proposed fusion network can effectively segment lesions from BUS images, presenting a new feature fusion strategy for this challenging segmentation task while outperforming SOTA methods. The code is publicly available at https://github.com/mniwk/CF2-NET.
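
The weighted-balanced loss can be illustrated with a generic size-weighted binary cross-entropy, as sketched below: smaller lesions get larger per-pixel foreground weights so they are not drowned out by background. This is not the authors' exact formulation, which is available in the linked repository.

```python
import torch
import torch.nn.functional as F

def weighted_balanced_bce(logits: torch.Tensor,
                          target: torch.Tensor) -> torch.Tensor:
    """logits, target: (B, 1, H, W); target is a binary lesion mask."""
    target = target.float()
    # per-image foreground fraction, clamped to avoid division by zero
    fg_frac = target.mean(dim=(1, 2, 3), keepdim=True).clamp(1e-6, 1 - 1e-6)
    # weight foreground inversely to lesion area, background to its complement
    weight = target / fg_frac + (1.0 - target) / (1.0 - fg_frac)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)
```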


Subjects
Breast Neoplasms , Image Processing, Computer-Assisted , Breast Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Female , Humans , Neural Networks, Computer , Ultrasonography, Mammary