Results 1 - 4 of 4
1.
Vis Comput Ind Biomed Art ; 6(1): 13, 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37402101

ABSTRACT

Sputum smear tests are critical for the diagnosis of respiratory diseases. Automatic segmentation of bacteria from sputum smear images is important for improving diagnostic efficiency. However, this remains a challenging task owing to the high interclass similarity among bacterial categories and the low contrast of bacterial edges. To capture more levels of global pattern features, which improve discrimination between bacterial categories, while retaining sufficient local fine-grained features for accurate localization of ambiguous bacteria, we propose a novel dual-branch deformable cross-attention fusion network (DB-DCAFN) for accurate bacterial segmentation. Specifically, we first designed a dual-branch encoder consisting of multiple convolution and transformer blocks in parallel to simultaneously extract multilevel local and global features. We then designed a sparse, deformable cross-attention module to capture the semantic dependencies between local and global features, bridging the semantic gap and fusing the features effectively. Furthermore, we designed a feature assignment fusion module that enhances meaningful features through an adaptive feature-weighting strategy to obtain more accurate segmentation. We conducted extensive experiments to evaluate DB-DCAFN on a clinical dataset comprising three bacterial categories: Acinetobacter baumannii, Klebsiella pneumoniae, and Pseudomonas aeruginosa. The experimental results demonstrate that DB-DCAFN outperforms other state-of-the-art methods and is effective at segmenting bacteria from sputum smear images.
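The cross-attention fusion step described above can be sketched in a few lines. This is a minimal NumPy illustration of the general idea only, not the paper's actual module: the convolutional branch's features act as queries and the transformer branch's features as keys and values, with a residual add as the fusion. The deformable, sparse sampling that gives DB-DCAFN its name is omitted, and all shapes and names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(local_feats, global_feats):
    """Fuse a convolutional (local) stream with a transformer (global)
    stream via scaled dot-product cross-attention.

    local_feats:  (n_local, d)  tokens from the convolution branch
    global_feats: (n_global, d) tokens from the transformer branch
    Returns (n_local, d): local tokens enriched with attended global context.
    """
    d = local_feats.shape[-1]
    # Each local token queries every global token.
    scores = local_feats @ global_feats.T / np.sqrt(d)   # (n_local, n_global)
    weights = softmax(scores, axis=-1)
    attended = weights @ global_feats                    # (n_local, d)
    return local_feats + attended                        # residual fusion
```

A learned projection of queries, keys, and values (and the deformable offset prediction) would sit in front of this in a trainable model; here the raw features are used directly to keep the sketch short.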

2.
Comput Biol Med ; 157: 106788, 2023 05.
Article in English | MEDLINE | ID: mdl-36958233

ABSTRACT

Deep learning methods using multimodal imaging have been proposed for the diagnosis of Alzheimer's disease (AD) and its early stages (SMC, subjective memory complaints), which may help slow the progression of the disease through early intervention. However, current fusion methods for multimodal imaging are generally coarse and may lead to suboptimal results through the use of shared extractors or simple downscaling and stitching. Another issue in diagnosing brain diseases is that they often affect multiple areas of the brain, making it important to consider potential connections throughout the brain. Traditional convolutional neural networks (CNNs) may struggle with this because of their limited local receptive fields. To address this, many researchers have turned to transformer networks, which can provide global information about the brain but are computationally intensive and perform poorly on small datasets. In this work, we propose a novel lightweight network called MENet that adaptively recalibrates the multiscale long-range receptive field to localize discriminative brain regions in a computationally efficient manner. On this basis, the network extracts the intensity and location responses between structural magnetic resonance imaging (sMRI) and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) as an enhancement fusion for AD and SMC diagnosis. Our method is evaluated on the publicly available ADNI datasets and achieves 97.67% accuracy in AD diagnosis and 81.63% accuracy in SMC diagnosis using sMRI and FDG-PET, state-of-the-art (SOTA) performance on both tasks. To the best of our knowledge, this is one of the first deep learning methods for SMC diagnosis with FDG-PET.
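The adaptive recalibration of multiscale receptive fields can be pictured, very loosely, as weighting the outputs of branches with different receptive-field sizes by a data-dependent softmax. The sketch below is a crude stand-in for MENet, under stated assumptions: a real network learns the weighting from trainable parameters, whereas here a simple mean-energy descriptor per branch is assumed, purely for illustration.

```python
import numpy as np

def recalibrate_multiscale(branches):
    """Adaptively weight feature vectors produced by branches with
    different receptive-field scales.

    branches: list of (C,) channel descriptors, one per scale.
    Returns (fused, weights): the softmax-weighted combination and the
    per-branch weights, so the dominant scale can be inspected.
    """
    # Per-branch "energy" as a toy saliency descriptor (an assumption;
    # a trained network would compute this from learned layers).
    energies = np.array([b.mean() for b in branches])
    w = np.exp(energies - energies.max())
    w /= w.sum()
    fused = sum(wi * b for wi, b in zip(w, branches))
    return fused, w
```

Because the weights form a softmax, the branch whose scale responds most strongly to the input dominates the fusion, which is the intuition behind localizing discriminative regions at the right scale.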


Subjects
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Fluorodeoxyglucose F18 , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Positron-Emission Tomography/methods
3.
Front Neurosci ; 16: 831533, 2022.
Article in English | MEDLINE | ID: mdl-35281501

ABSTRACT

18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) reveals altered brain metabolism in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD). Some biomarkers derived from FDG-PET by computer-aided diagnosis (CAD) technologies have been proven to accurately distinguish normal controls (NC), MCI, and AD. However, existing FDG-PET-based research is still insufficient for the identification of early MCI (EMCI) and late MCI (LMCI). Compared with methods based on other modalities, current FDG-PET methods also make inadequate use of inter-region features for the diagnosis of early AD. Moreover, given the variability across individuals, hard samples that closely resemble both classes limit classification performance. To tackle these problems, we propose a novel bilinear pooling and metric learning network (BMNet), which extracts inter-region representation features and distinguishes hard samples by constructing an embedding space. To validate the proposed method, we collected 898 FDG-PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 263 normal control (NC) patients, 290 EMCI patients, 147 LMCI patients, and 198 AD patients. Following common preprocessing steps, 90 features were extracted from each FDG-PET image according to the automated anatomical labeling (AAL) template and then fed into the proposed network. Extensive fivefold cross-validation experiments were performed for multiple two-class classifications. The experiments show that most metrics improve after adding the bilinear pooling module and the metric losses to the baseline model. Specifically, in the classification between EMCI and LMCI, specificity improves by 6.38% after adding the triplet metric loss, and the negative predictive value (NPV) improves by 3.45% after using the bilinear pooling module. In addition, classification accuracy between EMCI and LMCI reaches 79.64% using imbalanced FDG-PET images, a state-of-the-art result for EMCI-versus-LMCI classification based on PET images.
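The two ingredients named above, bilinear pooling over regional features and a triplet-style metric loss, can be sketched as follows. This is an illustrative NumPy version only: the signed-square-root and L2 normalization steps are standard bilinear-pooling conventions assumed here, and the margin value is a placeholder, none of it taken from the paper itself.

```python
import numpy as np

def bilinear_pool(x):
    """Bilinear pooling of regional features.

    x: (n_regions, d) feature matrix, e.g. features for the 90 AAL
    regions. The outer product x^T x captures pairwise inter-region
    interactions; signed sqrt and L2 normalization are the usual
    stabilizing steps.
    Returns a (d*d,) unit-norm descriptor.
    """
    b = (x.T @ x).ravel() / x.shape[0]
    b = np.sign(b) * np.sqrt(np.abs(b))
    return b / (np.linalg.norm(b) + 1e-12)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Metric-learning objective: pull same-class embeddings together
    and push different-class embeddings at least `margin` apart, which
    is how hard, borderline samples get separated in embedding space."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

In training, the loss would be averaged over mined triplets of scan embeddings; here a single triplet shows the mechanics.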

4.
Zhongguo Yi Liao Qi Xie Za Zhi ; 46(1): 5-9, 2022 Jan 30.
Article in Chinese | MEDLINE | ID: mdl-35150099

ABSTRACT

The glasses-free three-dimensional (3D) endoscopic display system provides the surgeon with depth information of the minimally invasive surgery scene obtained from a binocular perspective, which can effectively relieve the surgeon's postural and visual fatigue during long operations and support more accurate handling of surgical instruments, reducing damage to the tissues surrounding the operative area. However, current glasses-free 3D display devices suffer from a narrow optimal viewing zone and are prone to crosstalk, and they perform especially poorly in surgical teaching scenarios. To overcome the limitation of the narrow field of view, we introduce deep learning algorithms to detect and locate multiple faces, fine-tune the endoscope's 3D display grating, rearrange pixels, and shift the optimal viewing zone so that more viewers can obtain the best view. Experimental results show that the face detection accuracy of the method is 97.88% at a processing speed of 135 frames per second, achieving high accuracy while maintaining real-time performance.
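The viewing-zone adjustment described above can be pictured as a two-step mapping: detected face positions select a grating phase offset, and that offset shifts the slanted subpixel-to-view assignment that rearranges pixels on the panel. Everything in this sketch is an illustrative assumption: the function names, the two-view slanted-grating model, and the calibration constant have no basis in the paper itself.

```python
def choose_offset(face_centers_px, screen_center_px, px_per_view_period=100.0):
    """Pick a grating phase offset that recenters the optimal viewing
    zone on the mean horizontal position of the detected faces.

    face_centers_px:    x-coordinates of detected face centers (pixels)
    px_per_view_period: hypothetical calibration constant mapping viewer
                        displacement to one view-period of grating phase
    """
    mean_x = sum(face_centers_px) / len(face_centers_px)
    return (mean_x - screen_center_px) / px_per_view_period

def view_index(px, py, offset, slant=1.0 / 3.0, n_views=2):
    """Slanted-lenticular subpixel-to-view assignment: the grating phase
    at subpixel (px, py), shifted by `offset`, selects which of the
    n_views images this subpixel displays."""
    phase = (px + slant * py + offset) % n_views
    return int(phase)
```

Rendering then fills each subpixel from the left- or right-eye image according to `view_index`; re-running `choose_offset` as faces move is what keeps multiple viewers inside the optimal zone.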


Subjects
Endoscopes , Endoscopy , Humans , Imaging, Three-Dimensional , Minimally Invasive Surgical Procedures , Surgical Instruments