Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform ; 28(5): 2879-2890, 2024 May.
Article in English | MEDLINE | ID: mdl-38358859

ABSTRACT

Learning better representations is essential in medical image analysis for computer-aided diagnosis. However, learning discriminative semantic features is a major challenge due to the lack of large-scale, well-annotated datasets. How, then, can we learn a well-structured, categorizable embedding space from limited-scale, unlabeled datasets? In this paper, we propose a novel clustering-guided twin-contrastive learning framework (CTCL) that learns discriminative representations of probe-based confocal laser endomicroscopy (pCLE) images for gastrointestinal (GI) tumor classification. Compared with traditional contrastive learning, in which only two randomly augmented views of the same instance are considered, the proposed CTCL aligns more semantically related and class-consistent samples through clustering, which improves intra-class compactness and inter-class separability to produce more informative representations. Furthermore, based on two inherent properties of CLE (geometric invariance and intrinsic noise), we propose to regard CLE images under arbitrary rotation, and CLE images with different noise realizations, as the same instance, respectively, to increase the variability and diversity of samples. CTCL is optimized in an end-to-end expectation-maximization framework. Comprehensive experimental results demonstrate that CTCL-based visual representations achieve competitive performance on each downstream task, as well as greater robustness and transferability, compared with existing state-of-the-art self-supervised learning (SSL) and supervised methods. Notably, CTCL achieved 75.60%/78.45% and 64.12%/77.37% top-1 accuracy under the linear evaluation protocol and on the few-shot classification downstream task, respectively, outperforming the previous best results by 1.27%/1.63% and 0.5%/3%.
The proposed method holds great potential to assist pathologists in achieving automated, fast, and high-precision diagnosis of GI tumors and in accurately determining different stages of tumor development based on CLE images.
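The clustering-guided positive selection described above can be sketched as an InfoNCE-style loss in which every other sample assigned to the same cluster counts as a positive, rather than only the augmented twin of the same instance. The following is a minimal pure-Python illustration under stated assumptions; the function names, toy embeddings, and temperature value are illustrative, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_contrastive_loss(embeddings, clusters, temperature=0.5):
    """InfoNCE-style loss in which every other sample assigned to the
    same cluster is treated as a positive, not just the augmented twin
    of the same instance (a sketch of cluster-guided contrast)."""
    losses = []
    for i, zi in enumerate(embeddings):
        # Exponentiated, temperature-scaled similarities to all others.
        sims = {j: math.exp(cosine(zi, zj) / temperature)
                for j, zj in enumerate(embeddings) if j != i}
        denom = sum(sims.values())
        # Every same-cluster sample contributes a positive term.
        for j, s in sims.items():
            if clusters[j] == clusters[i]:
                losses.append(-math.log(s / denom))
    return sum(losses) / len(losses)
```

With embeddings that form two tight, well-separated groups, assigning cluster labels consistently with those groups yields a lower loss than a mismatched assignment, which is the behavior the clustering guidance relies on.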


Subject(s)
Image Interpretation, Computer-Assisted , Microscopy, Confocal , Humans , Cluster Analysis , Microscopy, Confocal/methods , Image Interpretation, Computer-Assisted/methods , Gastrointestinal Neoplasms/diagnostic imaging , Gastrointestinal Neoplasms/pathology , Algorithms , Machine Learning
2.
Phys Med Biol ; 68(19)2023 09 22.
Article in English | MEDLINE | ID: mdl-37647912

ABSTRACT

Objective. As an emerging diagnostic technology for gastrointestinal diseases, confocal laser endomicroscopy (CLE) is limited by the physical structure of the fiber bundle, which inevitably produces various forms of noise during imaging. Existing denoising methods based on hand-crafted features deal inefficiently with the realistic noise in CLE images. To address this challenge, we propose context-aware kernel estimation and multi-scale dynamic fusion modules to remove realistic noise, including multiplicative and additive white noise, from CLE images. Approach. First, a realistic noise statistics model with random noise specific to CLE data is constructed and used to train a self-supervised denoising model without any clean images. Second, context-aware kernel estimation, which improves feature representation through learnable weights over similar regions, addresses the non-uniform spatial distribution of noise in CLE images and yields a lightweight denoising model (CLENet). Third, we develop a multi-scale dynamic fusion module that decouples and recalibrates features, providing a precise and contextually enriched feature representation. Finally, we integrate the two modules into a U-shaped backbone to build an efficient denoising network named U-CLENet. Main results. Both proposed models achieve comparable or better performance with low computational complexity on two gastrointestinal-disease CLE image datasets under the same training benchmark. Significance. The proposed approaches improve the visual quality of unclear CLE images across various stages of tumor development, helping to reduce the rate of misdiagnosis in clinical decision-making and to achieve computer-aided diagnosis.
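The noise model above combines a multiplicative component with additive white noise. A minimal sketch of how such synthetic noise could be applied to clean intensities to generate self-supervised training pairs is given below; the function name and sigma values are assumptions for illustration, not the paper's parameters.

```python
import random

def add_cle_noise(clean, mult_sigma=0.1, add_sigma=0.05, seed=0):
    """Apply multiplicative and additive white Gaussian noise, the two
    components of a CLE-style noise statistics model, to a flat list of
    pixel intensities in [0, 1]. Sigma values are illustrative."""
    rng = random.Random(seed)
    noisy = []
    for x in clean:
        m = rng.gauss(1.0, mult_sigma)   # multiplicative component
        a = rng.gauss(0.0, add_sigma)    # additive white noise
        # Clamp back to the valid intensity range.
        noisy.append(min(1.0, max(0.0, x * m + a)))
    return noisy
```

A self-supervised pipeline of this kind trains the network to map `add_cle_noise(clean)` back toward `clean` without ever observing ground-truth clean CLE images at deployment time.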


Subject(s)
Benchmarking , Endoscopy , Diagnosis, Computer-Assisted , Models, Statistical , Lasers
3.
Biomed Opt Express ; 14(3): 1054-1070, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36950231

ABSTRACT

As an emerging early diagnostic technology for gastrointestinal diseases, confocal laser endomicroscopy lacks large-scale, well-annotated data, which makes learning discriminative semantic features a major challenge. How, then, should we learn representations with few or no labels? In this paper, we propose a feature-level MixSiam method, built on the traditional Siamese network, that learns discriminative features of probe-based confocal laser endomicroscopy (pCLE) images for gastrointestinal (GI) tumor classification. The proposed method has two stages: self-supervised learning (SSL) and few-shot learning (FS). First, in the self-supervised learning stage, the novel feature-level feature-mixing approach introduces more task-relevant information via regularization, enabling the traditional Siamese structure to adapt to the large intra-class variance of the pCLE dataset. Then, in the few-shot learning stage, we adopt the pre-trained model obtained through self-supervised learning as the base learner in the few-shot learning pipeline, enabling the feature extractor to learn richer and more transferable visual representations for rapid generalization to other pCLE classification tasks when labeled data are limited. The proposed method is evaluated on two disjoint pCLE gastrointestinal image datasets. Under the linear evaluation protocol, feature-level MixSiam outperforms the baseline by 6% (top-1) and the supervised model by 2% (top-1), demonstrating the effectiveness of the proposed feature-level feature-mixing method. Furthermore, the proposed method outperforms the previous baseline on the few-shot classification task, which can help improve the classification of pCLE images, which lack large-scale annotated data, across different stages of tumor development.
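The abstract does not spell out the exact feature-mixing rule. A common mixup-style formulation, interpolating two feature vectors with a Beta-distributed coefficient, is sketched below as an assumption about what "feature-level mixing" could look like; the function name and the `alpha` parameter are hypothetical.

```python
import random

def mix_features(f1, f2, alpha=1.0, seed=None):
    """Mixup-style feature-level mixing: interpolate two feature
    vectors with a Beta(alpha, alpha)-distributed coefficient.
    Returns the mixed vector and the coefficient used."""
    rng = random.Random(seed)
    lam = rng.betavariate(alpha, alpha)  # lam in [0, 1]
    mixed = [lam * a + (1 - lam) * b for a, b in zip(f1, f2)]
    return mixed, lam
```

In a Siamese pipeline, such mixed features act as a regularizer: the mixed representation carries partial information from both samples, which can widen the effective coverage of a dataset with large intra-class variance.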

4.
J Biomed Opt ; 27(5)2022 05.
Article in English | MEDLINE | ID: mdl-35585672

ABSTRACT

SIGNIFICANCE: Confocal endoscopy images often suffer from distortions, resulting in image-quality degradation and information loss, which increase the difficulty of diagnosis and can even lead to misdiagnosis. It is therefore important to assess image quality and filter out images with low diagnostic value before diagnosis. AIM: We propose a no-reference image quality assessment (IQA) method for confocal endoscopy images based on Weber's law and local descriptors. The proposed method detects the severity of image degradation by capturing the perceptual structure of an image. APPROACH: We created a new dataset of 642 confocal endoscopy images to validate the performance of the proposed method. We then conducted extensive experiments comparing the accuracy and speed of the proposed method with those of other state-of-the-art IQA methods. RESULTS: Experimental results demonstrate that the proposed method achieved a Spearman rank-order correlation coefficient (SROCC) of 0.85 and outperformed the other IQA methods. CONCLUSIONS: Given its high consistency with subjective quality assessment, the proposed method can screen high-quality images in practical applications and contribute to diagnosis.
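The SROCC figure reported above measures rank agreement between predicted and subjective quality scores. A minimal tie-free Spearman implementation in pure Python is sketched here for reference (real toolkits handle tied ranks, which this sketch does not):

```python
def srocc(x, y):
    """Spearman rank-order correlation coefficient for tie-free data:
    the Pearson correlation of the ranks of x and y."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfectly monotone predictions yield an SROCC of 1.0 (or -1.0 when decreasing), so a value of 0.85 indicates strong but imperfect rank consistency with subjective scores.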


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Endoscopy , Image Processing, Computer-Assisted/methods
5.
Med Phys ; 49(7): 4478-4493, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35396712

ABSTRACT

PURPOSE: Gadolinium-based contrast agents (GBCAs) have been successfully applied in magnetic resonance (MR) imaging to facilitate better lesion visualization. However, gadolinium deposition in the human brain has recently raised widespread concerns. In addition, although high-resolution three-dimensional (3D) MR images are preferred by most existing medical image processing algorithms, their long scan duration and high acquisition cost mean that 2D MR images remain far more common clinically. Developing an alternative to GBCA injection for synthesizing 3D contrast-enhanced MR images is therefore an urgent requirement. METHODS: This study proposes a deep learning framework that produces 3D isotropic full-contrast T2-FLAIR images from 2D anisotropic noncontrast T2-FLAIR image stacks. The super-resolution (SR) and contrast-enhanced (CE) synthesis tasks are completed in sequence by an identical generative adversarial network (GAN) using the same techniques. To handle intramodality datasets from different scanners, which have specific combinations of orientations, contrasts, and resolutions, we apply a region-based data augmentation technique on the fly during training to simulate the various imaging protocols used in the clinic. We further improve the network by introducing atrous spatial pyramid pooling, enhanced residual blocks, and deep supervision for better quantitative and qualitative results. RESULTS: The proposed method achieves superior CE-synthesis performance in quantitative metrics and perceptual evaluation. Specifically, the PSNR, structural similarity index, and AUC are 32.25 dB, 0.932, and 0.991 over the whole brain and 24.93 dB, 0.851, and 0.929 in tumor regions. Radiologists' evaluations confirmed that images synthesized by the proposed method support diagnosis with high confidence.
Analysis of generalization ability shows that, benefiting from the proposed data augmentation technique, the network can be applied to "unseen" datasets with only slight drops in quantitative and qualitative results. CONCLUSION: This work demonstrates the clinical potential of synthesizing diagnostic 3D isotropic CE brain MR images from a single 2D anisotropic noncontrast sequence.
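The PSNR values quoted above follow the standard definition over a reference/synthesized image pair. A minimal sketch over flat intensity lists is shown below; a data range of 1.0 is assumed for normalized images, which is an assumption rather than the paper's stated convention.

```python
import math

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test
    image, both given as flat lists of intensities on the same scale."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] scale gives an MSE of 0.01 and hence a PSNR of 20 dB; the reported whole-brain value of 32.25 dB corresponds to a much smaller mean squared error.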


Subject(s)
Deep Learning , Contrast Media , Gadolinium , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods