Results 1 - 3 of 3
1.
IEEE Trans Med Imaging ; PP, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373127

ABSTRACT

Medical image analysis techniques have been widely employed in diagnosing and screening clinical diseases. However, both poor medical image quality and inconsistent illumination styles increase uncertainty in clinical decision-making and can lead to misdiagnosis. Most current image enhancement methods concentrate on improving medical image quality by leveraging high-quality reference images, which are challenging to collect in clinical applications. In this study, we address image quality enhancement in a fully self-supervised learning setting, in which neither high-quality images nor paired images are required. To achieve this goal, we investigate the potential of self-supervised learning combined with domain adaptation to enhance the quality of medical images without the guidance of high-quality medical images, and we design a Domain Adaptation Self-supervised Quality Enhancement framework, called DASQE. More specifically, we establish multiple domains at the patch level through a rule-based quality assessment scheme and style clustering. To enhance image quality while maintaining style consistency, we formulate image quality enhancement as a collaborative self-supervised domain adaptation task that disentangles low-quality factors, medical image content, and illumination style characteristics by exploiting the intrinsic supervision available in low-quality medical images. Finally, we perform extensive experiments on six benchmark medical image datasets, and the results demonstrate that DASQE attains state-of-the-art performance. Furthermore, we explore the impact of the proposed method on various clinical tasks, such as retinal fundus vessel/lesion segmentation, nerve fiber segmentation, polyp segmentation, skin lesion segmentation, and disease classification; the results demonstrate that DASQE is advantageous for diverse downstream image analysis tasks.
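As a rough illustration of the patch-level domain construction described above, the sketch below splits image patches into high- and low-quality pseudo-domains using simple hand-crafted rules. The metrics (local contrast, mean brightness) and thresholds are illustrative assumptions, not the authors' actual quality assessment scheme or style clustering.

import numpy as np

def extract_patches(image, patch_size=64):
    """Tile a grayscale image of shape (H, W) into non-overlapping square patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def split_by_quality(patches, contrast_thr=0.08, brightness_range=(0.15, 0.85)):
    """Assign each patch to a 'high-quality' or 'low-quality' pseudo-domain (illustrative rules)."""
    high, low = [], []
    for p in patches:
        p = p.astype(np.float32) / 255.0
        contrast = p.std()       # simple contrast proxy
        brightness = p.mean()    # over-/under-exposure proxy
        if contrast > contrast_thr and brightness_range[0] < brightness < brightness_range[1]:
            high.append(p)
        else:
            low.append(p)
    return high, low

In a DASQE-like pipeline, the two patch sets would then serve as unpaired domains for the self-supervised disentanglement of content, illumination style, and low-quality factors.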

2.
Comput Biol Med ; 144: 105341, 2022 May.
Article in English | MEDLINE | ID: mdl-35279423

ABSTRACT

Early detection and treatment of diabetic retinopathy (DR) can significantly reduce the risk of vision loss in patients. For automated DR grading, two main challenges arise: (i) how to simultaneously achieve domain adaptation across different domains and (ii) how to build an interpretable multi-instance learning (MIL) model on the target domain in an end-to-end framework. In this paper, we address these issues and propose a unified weakly-supervised domain adaptation framework consisting of three components: domain adaptation, an instance progressive discriminator, and multi-instance learning with attention. The method models the relationship between patches and images in the target domain with a multi-instance learning scheme and an attention mechanism, while incorporating all available information from both the source and target domains in a joint learning strategy. We validate the performance of the proposed framework for DR grading on the Messidor dataset and the large-scale Eyepacs dataset. The framework achieves an average accuracy of 0.949 (95% CI 0.931-0.958)/0.764 (95% CI 0.755-0.772) and an average AUC of 0.958 (95% CI 0.945-0.962)/0.749 (95% CI 0.732-0.761) for binary-class/multi-class classification on Messidor, and an accuracy of 0.887 with a quadratic weighted kappa of 0.860 on Eyepacs, outperforming state-of-the-art approaches. Comprehensive experiments confirm the effectiveness of the approach in terms of both grading performance and interpretability. The source code is available at https://github.com/HouQingshan/WAD-Net.


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Interdisciplinary Placement, Diabetic Retinopathy/diagnostic imaging, Head, Humans, Learning
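The multi-instance learning with attention described in this record can be sketched as an attention-weighted pooling of patch (instance) features into an image-level (bag) prediction, as in the PyTorch snippet below. The feature dimension, attention width, and grading head are illustrative assumptions and do not reproduce the released WAD-Net code.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based pooling of patch features into an image-level prediction."""
    def __init__(self, feat_dim=512, attn_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats):
        # patch_feats: (num_patches, feat_dim) for a single image
        attn = torch.softmax(self.attention(patch_feats), dim=0)   # (num_patches, 1)
        bag_feat = (attn * patch_feats).sum(dim=0)                 # (feat_dim,)
        logits = self.classifier(bag_feat)
        return logits, attn.squeeze(-1)

The returned attention weights indicate which patches drive the image-level grading decision, which is the interpretability aspect emphasized in the abstract.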
3.
Article in English | MEDLINE | ID: mdl-37015399

ABSTRACT

Diabetic retinopathy (DR) is one of the most serious complications of diabetes and a prominent cause of permanent blindness. However, low-quality fundus images increase the uncertainty of clinical diagnosis and significantly degrade grading performance. Enhancing image quality is therefore essential for predicting the grade level in DR diagnosis. In essence, we face three challenges: (i) how to appropriately evaluate the quality of fundus images; (ii) how to effectively enhance low-quality fundus images so that reliable images can be provided to ophthalmologists or automated analysis systems; and (iii) how to jointly train quality assessment and enhancement to improve DR grading performance. Considering the importance of image quality assessment and enhancement for DR grading, we propose a collaborative learning framework that jointly trains subnetworks for image quality assessment, image enhancement, and DR grading in a unified framework. The key contribution lies in modelling the potential correlations among these tasks and jointly training the corresponding subnetworks, which significantly improves both fundus image quality and DR grading performance. The framework is a general learning model that may also be useful for other medical imaging applications with low-quality data. Extensive experiments show that our method outperforms state-of-the-art DR grading methods, achieving 73.6% ACC/71.2% Kappa and 88.5% ACC/86.3% Kappa on the Messidor and EyeQ benchmark datasets, respectively. In addition, our method significantly enhances low-quality fundus images while preserving fundus structure and lesion information. To demonstrate the generality of the framework, we also evaluate the enhancement results on further downstream tasks, such as vessel segmentation.
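A minimal sketch of the joint objective implied by the collaborative framework above is given below: the quality assessment, enhancement, and DR grading subnetworks are optimized together through a weighted sum of task losses. The individual loss terms, the use of a reconstruction target for the enhancement branch, and the weights are assumptions for illustration, not the authors' exact formulation.

import torch.nn.functional as F

def joint_loss(quality_logits, quality_labels,
               enhanced, target,
               grade_logits, grade_labels,
               w_quality=1.0, w_enhance=1.0, w_grade=1.0):
    """Weighted sum of the quality-assessment, enhancement, and grading losses."""
    l_quality = F.cross_entropy(quality_logits, quality_labels)   # image quality assessment
    l_enhance = F.l1_loss(enhanced, target)                       # enhancement fidelity (assumed L1 target)
    l_grade = F.cross_entropy(grade_logits, grade_labels)         # DR grading
    return w_quality * l_quality + w_enhance * l_enhance + w_grade * l_grade

Backpropagating this combined loss through all three subnetworks is what lets the grading signal shape the enhancement, which is the task correlation the framework exploits.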
