1.
Br J Ophthalmol ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839251

ABSTRACT

BACKGROUND/AIMS: The aim of this study was to develop and evaluate digital ray, based on preoperative and postoperative image pairs using style transfer generative adversarial networks (GANs), to enhance cataractous fundus images for improved retinopathy detection. METHODS: For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Then, both the original CycleGAN and a modified CycleGAN (C2ycleGAN) framework were adopted for image generation and quantitatively compared using Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performances. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images. RESULTS: A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C2ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentages of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, the accuracies also increased from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF. CONCLUSION: Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER: This study was registered with ClinicalTrials.gov (NCT05491798).
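The FID metric used above compares real and generated images by fitting Gaussians to their Inception-network feature distributions. As a minimal numpy-only sketch (not the authors' implementation; in practice the means and covariances come from Inception features of the two image sets), the Fréchet distance between two Gaussians can be computed as:

```python
import numpy as np

def _sqrtm_spd(m):
    """Matrix square root of a symmetric positive semi-definite matrix
    via eigendecomposition (clipping tiny negative eigenvalues)."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID-style distance: ||mu1-mu2||^2 + Tr(S1 + S2 - 2 (S1^1/2 S2 S1^1/2)^1/2).
    The symmetrized form avoids taking the sqrt of the non-symmetric S1 @ S2."""
    diff = mu1 - mu2
    s1_half = _sqrtm_spd(sigma1)
    covmean = _sqrtm_spd(s1_half @ sigma2 @ s1_half)
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```

A lower value means the generated-image feature distribution sits closer to the real one, which is how the C2ycleGAN and CycleGAN outputs were ranked.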

2.
Nat Commun ; 15(1): 3650, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38688925

ABSTRACT

Utilization of digital technologies for cataract screening in primary care is a potential solution for addressing the dilemma between the growing aging population and unequally distributed resources. Here, we propose a digital technology-driven hierarchical screening (DH screening) pattern implemented in China to promote the equity and accessibility of healthcare. It consists of home-based mobile artificial intelligence (AI) screening, community-based AI diagnosis, and referral to hospitals. We utilize decision-analytic Markov models to evaluate the cost-effectiveness and cost-utility of different cataract screening strategies (no screening, telescreening, AI screening and DH screening). A simulated cohort of 100,000 individuals from age 50 is built through a total of 30 1-year Markov cycles. The primary outcomes are incremental cost-effectiveness ratio and incremental cost-utility ratio. The results show that DH screening dominates no screening, telescreening and AI screening in urban and rural China. Annual DH screening emerges as the most economically effective strategy with 341 (338 to 344) and 1326 (1312 to 1340) years of blindness avoided compared with telescreening, and 37 (35 to 39) and 140 (131 to 148) years compared with AI screening in urban and rural settings, respectively. The findings remain robust across all sensitivity analyses conducted. Here, we report that DH screening is cost-effective in urban and rural China, and the annual screening proves to be the most cost-effective option, providing an economic rationale for policymakers promoting public eye health in low- and middle-income countries.
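The decision-analytic Markov approach above advances a simulated cohort through yearly cycles under each screening strategy and compares costs against outcomes. A minimal sketch, with a hypothetical 3-state model (healthy, blind, dead) and entirely illustrative transition probabilities and costs that are not taken from the study:

```python
import numpy as np

# Illustrative transition matrices (rows: from-state; cols: healthy, blind, dead).
P_NO_SCREEN = np.array([[0.94, 0.04, 0.02],
                        [0.00, 0.95, 0.05],
                        [0.00, 0.00, 1.00]])
P_SCREEN    = np.array([[0.96, 0.02, 0.02],   # screening catches disease earlier
                        [0.00, 0.95, 0.05],
                        [0.00, 0.00, 1.00]])

def run_cohort(P, n=100_000, cycles=30, blind_cost=500.0, screen_cost=0.0):
    """Advance a closed cohort through 1-year Markov cycles; return
    (total cost, cumulative years lived blind)."""
    state = np.array([float(n), 0.0, 0.0])
    cost = blind_years = 0.0
    for _ in range(cycles):
        cost += state[0] * screen_cost      # annual screening of healthy members
        state = state @ P                   # one 1-year cycle
        cost += state[1] * blind_cost       # cost of a blindness-year
        blind_years += state[1]
    return cost, blind_years

c0, b0 = run_cohort(P_NO_SCREEN)
c1, b1 = run_cohort(P_SCREEN, screen_cost=10.0)
icer = (c1 - c0) / (b0 - b1)  # incremental cost per blindness-year avoided
```

The paper's actual models distinguish more strategies (telescreening, AI screening, DH screening) and settings, but the ICER logic (incremental cost divided by incremental effect) is the same.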


Subject(s)
Cataract , Cost-Benefit Analysis , Mass Screening , Humans , China/epidemiology , Cataract/economics , Cataract/diagnosis , Cataract/epidemiology , Middle Aged , Mass Screening/economics , Mass Screening/methods , Male , Digital Technology/economics , Female , Markov Chains , Aged , Artificial Intelligence , Telemedicine/economics , Telemedicine/methods
3.
Nat Commun ; 14(1): 7126, 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37932255

ABSTRACT

Age is closely related to human health and disease risks. However, chronologically defined age often disagrees with biological age, primarily due to genetic and environmental variables. Identifying effective indicators for biological age in clinical practice and self-monitoring is important but currently lacking. The human lens accumulates age-related changes that are amenable to rapid and objective assessment. Here, using lens photographs of individuals aged 20 to 96 years, we develop LensAge to reflect lens aging via deep learning. LensAge is closely correlated with the chronological age of relatively healthy individuals (R2 > 0.80, mean absolute errors of 4.25 to 4.82 years). Among the general population, we calculate the LensAge index by contrasting LensAge and chronological age to reflect the aging rate relative to peers. The LensAge index effectively reveals the risks of age-related eye and systemic disease occurrence, as well as all-cause mortality. It outperforms chronological age in reflecting age-related disease risks (p < 0.001). More importantly, our models can conveniently work based on smartphone photographs, suggesting suitability for routine self-examination of aging status. Overall, our study demonstrates that the LensAge index may serve as an ideal quantitative indicator for clinically assessing and self-monitoring biological age in humans.
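The two quantities reported above (mean absolute error against chronological age, and a peer-relative index) can be sketched in a few lines. This is an illustrative reconstruction with toy numbers, not the authors' code; here the "expected LensAge for one's peers" is approximated by a linear fit of predicted age on chronological age:

```python
import numpy as np

# Toy data: chronological ages and hypothetical model-predicted "lens ages".
chron = np.array([50.0, 55.0, 60.0, 65.0, 70.0])
lens  = np.array([48.0, 58.0, 59.0, 70.0, 69.0])

# Mean absolute error of the age prediction.
mae = float(np.mean(np.abs(lens - chron)))

# LensAge index: deviation from the LensAge expected at one's chronological
# age (linear fit as a stand-in for the peer baseline); > 0 suggests
# faster-than-peer lens aging.
coef = np.polyfit(chron, lens, 1)
expected = np.polyval(coef, chron)
lensage_index = lens - expected
```

By construction the least-squares residuals average to zero, so the index is centered on the peer baseline; individuals with persistently positive values would be flagged as aging faster than peers.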


Subject(s)
Deep Learning , Lens, Crystalline , Humans , Child, Preschool , Aging/genetics
4.
NPJ Digit Med ; 6(1): 192, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37845275

ABSTRACT

Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Image quality issues are particularly prominent in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity with area under the curve (AUC) values ranging from 0.933 to 0.995. It can also comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% of them had varying degrees of quality defects, and large variations were observed among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, the performance of retinopathy of prematurity (ROP) diagnosis of clinicians was significantly improved. Moreover, the integration of DeepQuality and AI diagnostic models can effectively improve the model performance for detecting ROP. This study may be an important reference for the future development of other image-based intelligent disease screening systems.
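The AUC values reported for DeepQuality's defect detectors have a simple probabilistic reading: the chance that a randomly chosen defective image receives a higher defect score than a randomly chosen clean one. A minimal rank-based sketch (equivalent to the Mann-Whitney U formulation, not the authors' evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score of a random positive > score of a random negative),
    counting ties as half a win. O(n*m); fine for a sketch."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.933-0.995, as reported per defect type, means the classifier almost always ranks a defective image above a clean one for that quality dimension.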

5.
STAR Protoc ; 4(4): 102565, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37733597

ABSTRACT

Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.¹
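The protocol's evaluation step scores a multidimensional classifier per quality dimension. As a minimal sketch of what that bookkeeping looks like (dimension names follow the related DeepQuality work; the label format and helper are illustrative, not from the protocol):

```python
# Each image's labels/predictions: a dict mapping quality dimension -> 0/1.
DIMENSIONS = ("integrity", "illumination", "clarity")

def per_dimension_accuracy(y_true, y_pred):
    """Accuracy of a multidimensional quality classifier, reported
    separately for each quality dimension."""
    acc = {}
    for d in DIMENSIONS:
        correct = sum(t[d] == p[d] for t, p in zip(y_true, y_pred))
        acc[d] = correct / len(y_true)
    return acc
```

Reporting per-dimension rather than overall accuracy matters here because a photograph can be, e.g., well-illuminated but cropped, and acquisition guidance needs to name the specific defect.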


Subject(s)
Biomedical Research , Deep Learning , Artificial Intelligence
7.
Cell Rep Med ; 4(2): 100912, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36669488

ABSTRACT

Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are mainly built using high-quality images preprocessed in the laboratory, which is not representative of real-world settings. This dataset bias proves a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curves (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality analysis on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
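The flow-cytometry analogy above amounts to gating: images are routed by predicted quality before they reach a diagnostic model. A minimal sketch of that sorting step (the record format, threshold, and function are illustrative, not the DeepFundus API):

```python
def sort_images(records, min_quality=0.5):
    """Gate fundus images on a predicted quality score, analogous to
    flow-cytometry gating: only 'usable' images proceed to diagnosis.
    records: iterable of (image_id, quality_score) pairs."""
    usable, rejected = [], []
    for image_id, score in records:
        (usable if score >= min_quality else rejected).append(image_id)
    return usable, rejected
```

In the paper's framing, applying such a gate both during model development (cleaner training data) and at inference time (recapture or reject poor inputs) is what closes the gap between laboratory-quality datasets and real-world images.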


Subject(s)
Artificial Intelligence , Flow Cytometry , ROC Curve , Area Under Curve