Results 1 - 20 of 24
1.
Sci Rep ; 14(1): 10483, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714764

ABSTRACT

Automated machine learning (AutoML) allows for the simplified application of machine learning to real-world problems, by the implicit handling of necessary steps such as data pre-processing, feature engineering, model selection and hyperparameter optimization. This has encouraged its use in medical applications such as imaging. However, the impact of common parameter choices such as the number of trials allowed, and the resolution of the input images, has not been comprehensively explored in existing literature. We therefore benchmark AutoKeras (AK), an open-source AutoML framework, against several bespoke deep learning architectures, on five public medical datasets representing a wide range of imaging modalities. It was found that AK could outperform the bespoke models in general, although at the cost of increased training time. Moreover, our experiments suggest that a large number of trials and higher resolutions may not be necessary for optimal performance to be achieved.
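As a rough illustration of the workflow benchmarked above, the following sketch (assumed, not taken from the paper) shows how AutoKeras can cap the number of search trials, one of the parameter choices the authors examined; the toy data, image resolution and trial budget are placeholders.

```python
import numpy as np
import autokeras as ak

# Hypothetical stand-in for a medical imaging dataset:
# 64x64 grayscale images with binary labels (values are illustrative only).
x_train = np.random.rand(200, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(200,))

# Cap the architecture search at a small number of trials, reflecting the
# paper's suggestion that a large search budget may be unnecessary.
clf = ak.ImageClassifier(max_trials=5, overwrite=True)
clf.fit(x_train, y_train, epochs=10)

# Export the best model found by the search for later evaluation.
model = clf.export_model()
model.summary()
```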


Subject(s)
Machine Learning , Humans , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Deep Learning , Algorithms
2.
Eye Vis (Lond) ; 11(1): 11, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38494521

ABSTRACT

BACKGROUND: To describe the diagnostic performance of a deep learning (DL) algorithm in detecting Fuchs endothelial corneal dystrophy (FECD) based on specular microscopy (SM) and to reliably detect widefield peripheral SM images with an endothelial cell density (ECD) > 1000 cells/mm². METHODS: Five hundred and forty-seven subjects had SM imaging performed for the central corneal endothelium. One hundred and seventy-three images had FECD, while 602 images had other diagnoses. Using fivefold cross-validation on the dataset containing 775 central SM images combined with ECD, coefficient of variation (CV) and hexagonal endothelial cell ratio (HEX), the first DL model was trained to discriminate FECD from other images and was further tested on an external set of 180 images. In eyes with FECD, a separate DL model was trained with 753 central/paracentral SM images to detect SM images with ECD > 1000 cells/mm² and tested on 557 peripheral SM images. Area under the curve (AUC), sensitivity and specificity were evaluated. RESULTS: The first model achieved an AUC of 0.96 with 0.91 sensitivity and 0.91 specificity in detecting FECD from other images. With an external validation set, the model achieved an AUC of 0.77, with a sensitivity of 0.69 and specificity of 0.68 in differentiating FECD from other diagnoses. The second model achieved an AUC of 0.88 with 0.79 sensitivity and 0.78 specificity in detecting peripheral SM images with ECD > 1000 cells/mm². CONCLUSIONS: Our pilot study developed a DL model that could reliably detect FECD from other SM images and identify widefield SM images with ECD > 1000 cells/mm² in eyes with FECD. This could be the foundation for future DL models to track progression of eyes with FECD and identify candidates suitable for therapies such as Descemet stripping only.
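For readers unfamiliar with the evaluation metrics quoted above, here is a minimal sketch of how AUC, sensitivity and specificity can be computed from model probabilities using scikit-learn; the labels, probabilities and 0.5 operating point are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Illustrative stand-ins for ground truth (1 = FECD) and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])

auc = roc_auc_score(y_true, y_prob)

# Dichotomize at an assumed 0.5 operating point to obtain sensitivity/specificity.
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```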

3.
Curr Opin Ophthalmol ; 34(5): 422-430, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37527200

ABSTRACT

PURPOSE OF REVIEW: Despite the growing scope of artificial intelligence (AI) and deep learning (DL) applications in the field of ophthalmology, most have yet to reach clinical adoption. Beyond model performance metrics, there has been an increasing emphasis on the need for explainability of proposed DL models. RECENT FINDINGS: Several explainable AI (XAI) methods have been proposed, and are increasingly applied in ophthalmological DL applications, predominantly in medical imaging analysis tasks. SUMMARY: We provide an overview of the key concepts and categorize examples of commonly employed XAI methods. Specific to ophthalmology, we explore XAI from a clinical perspective, in enhancing end-user trust, assisting clinical management, and uncovering new insights. Finally, we discuss its limitations and future directions for strengthening XAI for application in clinical practice.

4.
iScience ; 26(8): 107350, 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37554447

ABSTRACT

This paper describes the development of a deep learning model for the prediction of hip fractures on pelvic radiographs (X-rays). Developed using over 40,000 pelvic radiographs from a single institution, the model demonstrated high sensitivity and specificity when applied to a test set of emergency department radiographs. This study approximates the real-world application of a deep learning fracture detection model by including radiographs with sub-optimal image quality, other non-hip fractures, and metallic implants, which were excluded from prior published work. The study also explores the effect of ethnicity on model performance, as well as the accuracy of the visualization algorithm for fracture localization.

5.
Front Med (Lausanne) ; 10: 1184892, 2023.
Article in English | MEDLINE | ID: mdl-37425325

ABSTRACT

Introduction: Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown the potential for effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, which may be tackled by generating synthetic images using Generative Adversarial Networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions, and to assess the realness of these images with an objective scale. Methods: To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale respectively. Results and discussion: The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that could fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
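A minimal sketch of the agreement metrics reported above (overall accuracy and Cohen's kappa between ground truth and a grader's real-vs-synthetic calls), computed with scikit-learn; the labels are illustrative stand-ins, not the study's gradings.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Illustrative stand-in labels: 1 = real fundus photo, 0 = GAN-synthesized.
truth = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
graded = np.array([1, 1, 1, 0, 0, 0, 1, 1, 1, 0])  # one grader's calls

print("accuracy:", accuracy_score(truth, graded))
print("Cohen's kappa:", cohen_kappa_score(truth, graded))
```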

6.
NPJ Digit Med ; 6(1): 10, 2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36702878

ABSTRACT

Our study aims to identify children at risk of developing high myopia for timely assessment and intervention, preventing myopia progression and complications in adulthood, through the development of a deep learning system (DLS). Using a school-based cohort in Singapore comprising 998 children (aged 6-12 years), we train and perform primary validation of the DLS using 7456 baseline fundus images of 1878 eyes, with external validation using an independent test dataset of 821 baseline fundus images of 189 eyes together with clinical data (age, gender, race, parental myopia, and baseline spherical equivalent (SE)). We derive three distinct algorithms - image, clinical and mixed (image + clinical) models - to predict high myopia development (SE ≤ -6.00 diopter) during teenage years (5 years later, age 11-17). Model performance is evaluated using the area under the receiver operating characteristic curve (AUC). Our image models (primary dataset AUC 0.93-0.95; test dataset 0.91-0.93), clinical models (primary dataset AUC 0.90-0.97; test dataset 0.93-0.94) and mixed (image + clinical) models (primary dataset AUC 0.97; test dataset 0.97-0.98) achieve clinically acceptable performance. The addition of a 1-year SE progression variable has minimal impact on DLS performance (clinical model AUC 0.98 versus 0.97 in the primary dataset, 0.97 versus 0.94 in the test dataset; mixed model AUC 0.99 versus 0.97 in the primary dataset, 0.95 versus 0.98 in the test dataset). Thus, our DLS allows prediction of the development of high myopia by teenage years amongst school-going children. This has potential utility as a clinical decision support tool to identify "at-risk" children for early intervention.
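A hedged sketch of what a mixed (image + clinical) model of the kind described above might look like in Keras; the architecture, layer sizes and feature count are assumptions for illustration and do not reproduce the authors' DLS.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Two-branch sketch: a small CNN for the fundus image and a dense branch
# for tabular clinical features, concatenated before the final prediction.
image_in = layers.Input(shape=(224, 224, 3), name="fundus")
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Five clinical inputs assumed: age, gender, race, parental myopia, baseline SE.
clinical_in = layers.Input(shape=(5,), name="clinical")
c = layers.Dense(16, activation="relu")(clinical_in)

merged = layers.concatenate([x, c])
out = layers.Dense(1, activation="sigmoid", name="high_myopia_risk")(merged)

model = Model(inputs=[image_in, clinical_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```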

7.
Front Med (Lausanne) ; 9: 875242, 2022.
Article in English | MEDLINE | ID: mdl-36314006

ABSTRACT

Background: Many artificial intelligence (AI) studies have focused on the development of AI models, novel techniques, and reporting guidelines. However, little is understood about clinicians' perspectives of AI applications in medical fields including ophthalmology, particularly in light of recent regulatory guidelines. The aim of this study was to evaluate the perspectives of ophthalmologists regarding AI in 4 major eye conditions: diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and cataract. Methods: This was a multi-national survey of ophthalmologists between March 1st, 2020 to February 29th, 2021, disseminated via the major global ophthalmology societies. The survey was designed based on microsystem, mesosystem and macrosystem questions, and on the software as a medical device (SaMD) regulatory framework proposed by the Food and Drug Administration (FDA). Factors associated with AI adoption in ophthalmology were analyzed with multivariable logistic regression and random forest machine learning. Results: One thousand one hundred and seventy-six ophthalmologists from 70 countries participated, with a response rate ranging from 78.8% to 85.8% per question. Ophthalmologists were more willing to use AI as clinical assistive tools (88.1%, n = 890/1,010), especially those with over 20 years' experience (OR 3.70, 95% CI: 1.10-12.5, p = 0.035), than as clinical decision support tools (78.8%, n = 796/1,010) or diagnostic tools (64.5%, n = 651). A majority of ophthalmologists felt that AI is most relevant to DR (78.2%), followed by glaucoma (70.7%), AMD (66.8%), and cataract (51.4%) detection. Many participants were confident their roles would not be replaced (68.2%, n = 632/927), and felt COVID-19 had catalyzed willingness to adopt AI (80.9%, n = 750/927). Common barriers to implementation included medical liability from errors (72.5%, n = 672/927), whereas enablers included improving access (94.5%, n = 876/927). Machine learning modeling predicted acceptance from participant demographics with moderate to high accuracy, and areas under the receiver operating characteristic curve of 0.63-0.83. Conclusion: Ophthalmologists are receptive to adopting AI as assistive tools for DR, glaucoma, and AMD. Furthermore, machine learning is a useful method for evaluating predictive factors from clinical qualitative questionnaires. This study outlines actionable insights for future research and facilitation interventions to drive adoption and operationalization of AI tools in ophthalmology.
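A small sketch, using assumed synthetic data, of fitting both a multivariable logistic regression and a random forest to tabular predictors and comparing AUCs, mirroring the modeling approach named above; none of the variables correspond to the actual survey items.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for participant demographics (e.g. years of experience,
# practice setting) and a binary "willing to adopt AI" outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("logistic regression AUC:", roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1]))
print("random forest AUC:", roc_auc_score(y_te, forest.predict_proba(X_te)[:, 1]))
```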

10.
Eye Vis (Lond) ; 9(1): 3, 2022 Jan 07.
Article in English | MEDLINE | ID: mdl-34996524

ABSTRACT

The rise of artificial intelligence (AI) has brought breakthroughs in many areas of medicine. In ophthalmology, AI has delivered robust results in the screening and detection of diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity. Cataract management is another field that can benefit from greater AI application. Cataract is the leading cause of reversible visual impairment, with a rising global clinical burden. Improved diagnosis, monitoring, and surgical management are necessary to address this challenge. In addition, patients in large developing countries often suffer from limited access to tertiary care, a problem further exacerbated by the ongoing COVID-19 pandemic. AI, on the other hand, can help transform cataract management by improving automation and efficacy, and by overcoming geographical barriers. First, AI can be applied as a telediagnostic platform to screen and diagnose patients with cataract using slit-lamp and fundus photographs. This utilizes a deep learning convolutional neural network (CNN) to detect and classify referable cataracts appropriately. Second, some of the latest intraocular lens formulas have used AI to enhance prediction accuracy, achieving superior postoperative refractive results compared to traditional formulas. Third, AI can be used to augment cataract surgical skill training by identifying different phases of cataract surgery on video and to optimize operating theater workflows by accurately predicting the duration of surgical procedures. Fourth, some AI CNN models are able to effectively predict the progression of posterior capsule opacification and the eventual need for YAG laser capsulotomy. These advances in AI could transform cataract management and enable delivery of efficient ophthalmic services. The key challenges include ethical management of data, ensuring data security and privacy, demonstrating clinically acceptable performance, improving the generalizability of AI models across heterogeneous populations, and improving the trust of end-users.

11.
Clin Sci (Lond) ; 135(20): 2357-2376, 2021 10 29.
Article in English | MEDLINE | ID: mdl-34661658

ABSTRACT

Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful have achieved clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge would require a combined approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide the development of DLSs.


Subject(s)
Biomedical Research , Deep Learning , Eye Diseases , Ophthalmology , Animals , Clinical Decision-Making , Decision Support Techniques , Diagnosis, Computer-Assisted , Diffusion of Innovation , Eye Diseases/diagnosis , Eye Diseases/epidemiology , Eye Diseases/physiopathology , Eye Diseases/therapy , Humans , Prognosis , Reproducibility of Results
12.
Curr Opin Ophthalmol ; 32(5): 459-467, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34324454

ABSTRACT

PURPOSE OF REVIEW: The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to describe how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS: Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs. SUMMARY: Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with yet unclear relevance for ophthalmology.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Ophthalmology , Artificial Intelligence , Humans , Image Processing, Computer-Assisted/methods
13.
Curr Opin Ophthalmol ; 32(5): 413-424, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34310401

ABSTRACT

PURPOSE OF REVIEW: Myopia is one of the leading causes of visual impairment, with a projected increase in prevalence globally. One potential approach to address myopia and its complications is early detection and treatment. However, current healthcare systems may not be able to cope with the growing burden. Digital technological solutions such as artificial intelligence (AI) have emerged as a potential adjunct for myopia management. RECENT FINDINGS: There are currently four significant domains of AI in myopia: machine learning (ML), deep learning (DL), genetics and natural language processing (NLP). ML has been demonstrated to be a useful adjunct for myopia prediction and for biometry in cataract surgery in highly myopic individuals. DL techniques, particularly convolutional neural networks, have been applied to various image-related diagnostic and predictive solutions. Applications of AI in genomics and NLP appear to be at a nascent stage. SUMMARY: Current AI research is mainly focused on disease classification and prediction in myopia. Through greater collaborative research, we envision that AI will play an increasingly critical role in big data analysis by aggregating a greater variety of parameters, including genomics and environmental factors. This may enable the development of generalizable adjunctive DL systems that could help realize predictive and individualized precision medicine for myopic patients.


Subject(s)
Artificial Intelligence , Myopia , Artificial Intelligence/trends , Deep Learning , Forecasting , Genomics , Humans , Machine Learning/trends , Myopia/diagnosis , Myopia/genetics , Myopia/therapy , Natural Language Processing , Neural Networks, Computer
14.
Lancet Digit Health ; 2(5): e240-e249, 2020 05.
Article in English | MEDLINE | ID: mdl-33328056

ABSTRACT

BACKGROUND: Deep learning is a novel machine learning technique that has been shown to be as effective as human graders in detecting diabetic retinopathy from fundus photographs. We used a cost-minimisation analysis to evaluate the potential savings of two deep learning approaches as compared with the current human assessment: a semi-automated deep learning model as a triage filter before secondary human assessment; and a fully automated deep learning model without human assessment. METHODS: In this economic analysis modelling study, using 39 006 consecutive patients with diabetes in a national diabetic retinopathy screening programme in Singapore in 2015, we used a decision tree model and TreeAge Pro to compare the actual cost of screening this cohort with human graders against the simulated cost for semi-automated and fully automated screening models. Model parameters included diabetic retinopathy prevalence rates, diabetic retinopathy screening costs under each screening model, cost of medical consultation, and diagnostic performance (ie, sensitivity and specificity). The primary outcome was total cost for each screening model. Deterministic sensitivity analyses were done to gauge the sensitivity of the results to key model assumptions. FINDINGS: From the health system perspective, the semi-automated screening model was the least expensive of the three models, at US$62 per patient per year. The fully automated model was $66 per patient per year, and the human assessment model was $77 per patient per year. The savings to the Singapore health system associated with switching to the semi-automated model are estimated to be $489 000, which is roughly 20% of the current annual screening cost. By 2050, Singapore is projected to have 1 million people with diabetes; at this time, the estimated annual savings would be $15 million. INTERPRETATION: This study provides a strong economic rationale for using deep learning systems as an assistive tool to screen for diabetic retinopathy. FUNDING: Ministry of Health, Singapore.
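As a back-of-envelope check of the 2050 projection quoted above, the sketch below uses only the per-patient costs reported in the abstract; the present-day savings figure ($489 000) comes from the full decision-tree model with additional parameters and is not reproduced here.

```python
# Per-patient annual screening costs from the abstract (US$).
cost_human = 77
cost_semi_automated = 62
cost_fully_automated = 66

# Projected diabetic population in Singapore by 2050 (from the abstract).
projected_patients_2050 = 1_000_000

saving_per_patient = cost_human - cost_semi_automated  # $15 per patient per year
projected_annual_saving = saving_per_patient * projected_patients_2050

print(f"Saving per patient per year: ${saving_per_patient}")
print(f"Projected annual saving by 2050: ${projected_annual_saving:,}")  # $15,000,000
```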


Subject(s)
Artificial Intelligence , Cost-Benefit Analysis , Diabetic Retinopathy/diagnosis , Diagnostic Techniques, Ophthalmological/economics , Image Processing, Computer-Assisted/economics , Models, Biological , Telemedicine/economics , Adult , Aged , Decision Trees , Diabetes Mellitus , Diabetic Retinopathy/economics , Health Care Costs , Humans , Machine Learning , Mass Screening/economics , Middle Aged , Ophthalmology/economics , Photography , Physical Examination , Retina/pathology , Sensitivity and Specificity , Singapore , Telemedicine/methods
15.
Eye Vis (Lond) ; 7: 21, 2020.
Article in English | MEDLINE | ID: mdl-32313813

ABSTRACT

BACKGROUND: Effective screening is a desirable method for the early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging due to its convenience and accessibility. Manual screening using fundus photographs, however, involves considerable costs for patients, clinicians and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has raised the possibility of widespread automated screening. MAIN TEXT: In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We take care to separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. CONCLUSIONS: In the ophthalmology field, deep learning tools for diabetic retinopathy have demonstrated clinically acceptable diagnostic performance using colour retinal fundus images. Artificial intelligence models are among the most promising solutions for tackling the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different deep learning systems in clinical practice and improve clinical acceptance.

16.
NPJ Digit Med ; 3: 40, 2020.
Article in English | MEDLINE | ID: mdl-32219181

ABSTRACT

Deep learning (DL) has been shown to be effective in developing diabetic retinopathy (DR) algorithms, potentially tackling the financial and manpower challenges hindering the implementation of DR screening. However, our systematic review of the literature reveals that few studies have examined the impact of different factors on these DL algorithms, which is important for clinical deployment in real-world settings. Using 455,491 retinal images, we evaluated two technical and three image-related factors in the detection of referable DR. For technical factors, we evaluated the performance of four DL models (VGGNet, ResNet, DenseNet, Ensemble) and two computational frameworks (Caffe, TensorFlow); for image-related factors, we evaluated image compression levels (reducing image size to 350, 300, 250, 200 and 150 KB), number of fields (7-field, 2-field, 1-field) and media clarity (pseudophakic vs phakic). In the detection of referable DR, the four DL models showed comparable diagnostic performance (AUC 0.936-0.944). For the VGGNet model, the two computational frameworks achieved similar AUC (0.936). DL performance dropped when image size decreased below 250 KB (AUC 0.936 vs 0.900, p < 0.001). DL performance improved with an increased number of fields (dataset 1: 2-field vs 1-field, AUC 0.936 vs 0.908, p < 0.001; dataset 2: 7-field vs 2-field vs 1-field, AUC 0.949 vs 0.911 vs 0.895). DL performed better in pseudophakic than phakic eyes (AUC 0.918 vs 0.833, p < 0.001). Image-related factors play a more significant role than technical factors in determining diagnostic performance, suggesting the importance of robust training and testing datasets for DL training and deployment in real-world settings.
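The abstract does not specify how images were compressed to the listed file sizes; as one plausible approach, the following hedged sketch reduces JPEG quality with Pillow until an image falls at or below a target size in KB (the function name and the quality-search strategy are assumptions, not the authors' pipeline).

```python
from io import BytesIO
from PIL import Image

def compress_to_target_kb(img: Image.Image, target_kb: int) -> bytes:
    """Lower JPEG quality until the encoded image is at or below target_kb.
    A simple linear search; a study pipeline might instead use fixed presets."""
    for quality in range(95, 4, -5):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        if buf.tell() <= target_kb * 1024:
            return buf.getvalue()
    return buf.getvalue()  # smallest attempt if the target was never reached

# Example usage (the file path is hypothetical):
# fundus = Image.open("fundus_photo.jpg").convert("RGB")
# for kb in (350, 300, 250, 200, 150):
#     data = compress_to_target_kb(fundus, kb)
#     print(kb, "KB target ->", len(data) // 1024, "KB actual")
```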

17.
Eye (Lond) ; 34(3): 451-460, 2020 03.
Article in English | MEDLINE | ID: mdl-31488886

ABSTRACT

Diabetes is a global eye health issue. Given the rising prevalence of diabetes and the ageing population, performing diabetic retinopathy (DR) screening for these patients poses a significant challenge. Artificial intelligence (AI) using machine learning and deep learning has been adopted by various groups to develop automated DR detection algorithms. This article aims to describe the state-of-the-art AI DR screening technologies that have been described in the literature, some of which are already commercially available. All these technologies were designed using different training datasets and technical methodologies. Although many groups have published robust diagnostic performance of AI algorithms for DR screening, future research is required to address several challenges, for example medicolegal implications, ethics, and clinical deployment models, in order to expedite the translation of these novel technologies into the healthcare setting.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Algorithms , Artificial Intelligence , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/epidemiology , Humans , Machine Learning , Mass Screening
18.
Eye (Lond) ; 34(3): 604, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31822855

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

19.
Curr Diab Rep ; 19(9): 72, 2019 07 31.
Article in English | MEDLINE | ID: mdl-31367962

ABSTRACT

PURPOSE OF REVIEW: This paper systematically reviews recent progress in diabetic retinopathy screening. It provides an integrated overview of the current state of knowledge of emerging techniques using artificial intelligence integration in national screening programs around the world. Existing methodological approaches and research insights are evaluated, and an understanding of existing gaps and future directions is created. RECENT FINDINGS: Over the past decades, artificial intelligence has emerged into the scientific consciousness with breakthroughs that are sparking increasing interest among the computer science and medical communities. Specifically, machine learning and deep learning (a subtype of machine learning) applications of artificial intelligence are spreading into areas that were previously thought to be the purview of humans alone, and a number of applications in the ophthalmology field have been explored. Multiple studies around the world have demonstrated that such systems can perform on par with clinical experts, with robust diagnostic performance in diabetic retinopathy diagnosis. However, only a few tools have been evaluated in prospective clinical studies. Given the rapid and impressive progress of artificial intelligence technologies, the implementation of deep learning systems into routinely practiced diabetic retinopathy screening could represent a cost-effective alternative to help reduce the incidence of preventable blindness around the world.


Subject(s)
Diabetic Retinopathy/diagnosis , Mass Screening/methods , Artificial Intelligence , Global Health , Humans , Machine Learning , Ophthalmology/methods , Ophthalmology/trends
20.
NPJ Digit Med ; 2: 24, 2019.
Article in English | MEDLINE | ID: mdl-31304371

ABSTRACT

In any community, the key to understanding the burden of a specific condition is to conduct an epidemiological study. Deep learning systems (DLS) have recently shown promising diagnostic performance for diabetic retinopathy (DR). This study aims to use a DLS as the grading tool, instead of human assessors, to determine the prevalence of, and the systemic cardiovascular risk factors for, DR on fundus photographs in patients with diabetes. This is a multi-ethnic (5 races), multi-site (8 datasets from Singapore, USA, Hong Kong, China and Australia), cross-sectional study involving 18,912 patients (n = 93,293 images). We compared these results, and the time taken for DR assessment, between the DLS and 17 human assessors (10 retinal specialists/ophthalmologists and 7 professional graders). The estimated DR prevalence was comparable between the DLS and human assessors for any DR, referable DR and vision-threatening DR (VTDR) (human assessors: 15.9%, 6.5% and 4.1%; DLS: 16.1%, 6.4% and 3.7%). Both assessment methods identified similar risk factors (with comparable AUCs), including younger age, longer diabetes duration, increased HbA1c and systolic blood pressure, for any DR, referable DR and VTDR (p > 0.05). The total time taken for the DLS to evaluate DR from 93,293 fundus photographs was ~1 month, compared with 2 years for human assessors. In conclusion, the prevalence and systemic risk factors for DR in a multi-ethnic population could be determined accurately using a DLS, in significantly less time than with human assessors. This study highlights the potential use of AI for future epidemiological studies or clinical trials for DR grading in global communities.
