Results 1 - 4 of 4

1.
Biomed Opt Express ; 15(5): 2977-2999, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38855696

ABSTRACT

Accurate segmentation of polyp regions in gastrointestinal endoscopic images is pivotal for diagnosis and treatment. Despite recent advances, challenges persist, such as accurately segmenting small polyps and maintaining accuracy when polyps resemble the surrounding tissue. Recent studies show that the pyramid vision transformer (PVT) is effective at capturing global context, yet it may lack detailed information, whereas U-Net excels at semantic feature extraction. Hence, we propose the bilateral fusion enhanced network (BFE-Net) to address these challenges. Our model integrates U-Net and PVT features via a deep feature enhancement fusion module (FEF) and an attention decoder module (AD). Experimental results demonstrate significant improvements, validating the model's effectiveness across various datasets and modalities and promising advances in gastrointestinal polyp diagnosis and treatment.
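
The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of fusing a CNN (U-Net-style) feature map with a transformer (PVT-style) feature map at one decoder stage; the FeatureFusion class, the spatial attention gate, and the channel sizes are illustrative assumptions, not the authors' FEF/AD modules.

# Minimal sketch (assumption, not the authors' code): fuse a CNN feature map
# and a transformer feature map of the same resolution, then re-weight the
# fused map with a simple spatial attention gate.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse same-resolution U-Net-style and PVT-style feature maps."""
    def __init__(self, cnn_ch, vit_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(cnn_ch + vit_ch, out_ch, kernel_size=1)
        # spatial attention gate producing one weight per pixel
        self.gate = nn.Sequential(
            nn.Conv2d(out_ch, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f_cnn, f_vit):
        # f_cnn, f_vit: (B, C, H, W) tensors at the same spatial resolution
        fused = self.proj(torch.cat([f_cnn, f_vit], dim=1))
        return fused * self.gate(fused)

# toy usage with random tensors standing in for backbone features
f_cnn = torch.randn(2, 64, 88, 88)   # e.g. a U-Net encoder stage
f_vit = torch.randn(2, 128, 88, 88)  # e.g. an upsampled PVT stage
print(FeatureFusion(64, 128, 64)(f_cnn, f_vit).shape)  # torch.Size([2, 64, 88, 88])

In a full network a fusion step like this would typically be applied at several scales before decoding to a segmentation mask.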

2.
J Pers Med ; 13(1)2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36675779

ABSTRACT

BACKGROUND: Accurate gastrointestinal (GI) lesion segmentation is crucial in diagnosing digestive tract diseases. Automatic lesion segmentation in endoscopic images is vital to relieving physicians' workload and improving patients' survival rates. However, pixel-wise annotation is highly labor-intensive, especially in clinical settings, whereas large unlabeled image datasets are often available; at the same time, the strong results of deep learning approaches in many tasks depend heavily on large labeled datasets. Limited labeled data also hinder the generalizability of models trained with fully supervised learning for computer-aided diagnosis (CAD) systems. METHODS: This work proposes a generative adversarial learning-based semi-supervised segmentation framework for GI lesion diagnosis in endoscopic images that tackles the challenge of limited annotations. The proposed approach leverages a limited annotated dataset together with a large unlabeled dataset when training the networks. We extensively tested the proposed method on 4880 endoscopic images. RESULTS: Compared with current related works, the proposed method achieves better results (Dice similarity coefficient = 89.42 ± 3.92, intersection over union = 80.04 ± 5.75, precision = 91.72 ± 4.05, recall = 90.11 ± 5.64, and Hausdorff distance = 23.28 ± 14.36) on challenging multi-site datasets, confirming the effectiveness of the proposed framework. CONCLUSION: We explore a semi-supervised lesion segmentation method that makes full use of multiple unlabeled endoscopic images to improve lesion segmentation accuracy. Experimental results confirm the potential of our method, which outperforms current related works. The proposed CAD system can minimize diagnostic errors.
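
As a rough illustration of how adversarial semi-supervised segmentation of this kind is commonly trained (an assumption about the general recipe, not the paper's implementation), one optimization step might combine a supervised loss on labeled images with an adversarial loss that pushes predictions on unlabeled images toward the distribution of real masks:

# One simplified adversarial semi-supervised training step (assumed recipe,
# not the paper's code). seg_net and disc are assumed torch.nn.Module
# instances: seg_net maps images to single-channel mask logits, disc maps
# masks to real/fake logits.
import torch
import torch.nn.functional as F

def semi_supervised_step(seg_net, disc, opt_seg, opt_disc,
                         x_lab, y_lab, x_unlab, adv_weight=0.01):
    # --- update the segmenter ---
    opt_seg.zero_grad()
    pred_lab = seg_net(x_lab)                                   # (B, 1, H, W) logits
    sup_loss = F.binary_cross_entropy_with_logits(pred_lab, y_lab)
    pred_unlab = torch.sigmoid(seg_net(x_unlab))
    d_out = disc(pred_unlab)
    # adversarial term: encourage predictions on unlabeled images to look "real"
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    (sup_loss + adv_weight * adv_loss).backward()
    opt_seg.step()

    # --- update the discriminator: real masks vs. predicted masks ---
    opt_disc.zero_grad()
    d_real = disc(y_lab)
    d_fake = disc(torch.sigmoid(seg_net(x_lab)).detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    opt_disc.step()
    return sup_loss.item(), adv_loss.item(), d_loss.item()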

3.
J Med Syst ; 46(1): 4, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34807297

ABSTRACT

The classification of esophageal diseases based on gastroscopic images is important in clinical treatment; it also helps provide patients with follow-up treatment plans and prevents lesion deterioration. In recent years, deep learning has achieved many satisfactory results in gastroscopic image classification tasks. However, most approaches need a training set consisting of large numbers of images labeled by experienced experts. To reduce the image annotation burden and improve classification on small labeled gastroscopic image datasets, this study proposes a novel semi-supervised efficient contrastive learning (SSECL) classification method for esophageal disease. First, an efficient contrastive pair generation (ECPG) module is proposed to generate efficient contrastive pairs (ECPs), taking advantage of the high similarity among images of the same lesion. Then, an unsupervised visual feature representation containing the general features of esophageal gastroscopic images is learned by unsupervised efficient contrastive learning (UECL). Finally, the feature representation is transferred to the downstream esophageal disease classification task. Experimental results demonstrate that the classification accuracy of SSECL is 92.57%, which is better than that of other state-of-the-art semi-supervised methods and 2.28% higher than that of a classification method based on transfer learning (TL). Thus, SSECL addresses the challenging problem of improving classification results on small gastroscopic image datasets by fully utilizing unlabeled gastroscopic images and the high-similarity information among images of the same lesion. It also brings new insights into medical image classification tasks.


Subjects
Esophageal Diseases, Supervised Machine Learning, Gastroscopy, Humans
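
The paper's ECPG/UECL modules are not described here in enough detail to reproduce, but contrastive objectives of this kind are commonly InfoNCE-style losses; the sketch below is an assumption of that general idea, treating embeddings of two frames from the same lesion as a positive pair and all other frames in the batch as negatives.

# Minimal InfoNCE-style contrastive loss (assumed general idea, not the
# paper's code): z_a[i] and z_b[i] are embeddings of two gastroscopic frames
# of the same lesion; all other pairings in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine-similarity matrix
    targets = torch.arange(z_a.size(0))           # the diagonal holds the positives
    return F.cross_entropy(logits, targets)

# toy usage: 8 lesions, 128-d embeddings from some encoder
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())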
4.
Biomed Opt Express ; 12(6): 3066-3081, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34221645

ABSTRACT

The accurate diagnosis of various esophageal diseases at different stages is crucial for providing precise therapy planning and improving the 5-year survival rate of esophageal cancer patients. Automatic classification of esophageal diseases in gastroscopic images can assist doctors in improving diagnostic efficiency and accuracy. Existing deep learning-based classification methods can classify only a few categories of esophageal disease at a time. Hence, we propose a novel efficient channel attention deep dense convolutional neural network (ECA-DDCNN), which classifies esophageal gastroscopic images into four main categories: normal esophagus (NE), precancerous esophageal diseases (PEDs), early esophageal cancer (EEC), and advanced esophageal cancer (AEC), covering six common sub-categories of esophageal disease plus the normal esophagus (seven sub-categories in total). In total, 20,965 gastroscopic images were collected from 4,077 patients and used to train and test the proposed method. Extensive experimental results demonstrate convincingly that the proposed ECA-DDCNN outperforms other state-of-the-art methods: its classification accuracy (Acc) is 90.63% and its averaged area under the curve (AUC) is 0.9877. Compared with other state-of-the-art methods, our method performs better in classifying various esophageal diseases and, in particular, achieves higher true positive (TP) rates for diseases with similar mucosal features. In conclusion, the proposed classification method has demonstrated its potential for the diagnosis of a wide variety of esophageal diseases.
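
Efficient channel attention (ECA) is the building block the network name refers to; the sketch below shows a generic ECA block (global average pooling followed by a lightweight 1-D convolution across channels), with the kernel size fixed as an assumption rather than taken from the paper.

# Generic efficient channel attention (ECA) block sketch; kernel_size=3 is an
# assumed default, not a value reported by the paper.
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (B, C, H, W) -> one descriptor per channel, mixed by a 1-D conv
        w = self.pool(x).squeeze(-1).transpose(-1, -2)                  # (B, 1, C)
        w = self.sigmoid(self.conv(w)).transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * w                                                    # re-weight channels

x = torch.randn(2, 64, 56, 56)
print(ECABlock()(x).shape)  # torch.Size([2, 64, 56, 56])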
