Results 1 - 18 of 18
1.
Ophthalmol Sci ; 4(6): 100570, 2024.
Article in English | MEDLINE | ID: mdl-39224530

ABSTRACT

Purpose: Application of artificial intelligence (AI) to macular OCT scans to segment and quantify volumetric change in anatomical and pathological features during intravitreal treatment for neovascular age-related macular degeneration (AMD). Design: Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database. Participants: A total of 2115 eyes from 1801 patients starting anti-VEGF treatment between June 1, 2012, and June 30, 2017. Methods: The Moorfields Eye Hospital neovascular AMD database was queried for first and second treated eyes that received anti-VEGF treatment and had an OCT scan at baseline and 12 months. Follow-up scans were input into the AI system, and volumes of OCT variables were studied at different time points and compared with baseline volume groups. Cross-sectional comparisons between time points were conducted using the Mann-Whitney U test. Main Outcome Measures: Volume outputs of the following variables were studied: intraretinal fluid, subretinal fluid, pigment epithelial detachment (PED), subretinal hyperreflective material (SHRM), hyperreflective foci, neurosensory retina, and retinal pigment epithelium. Results: Mean volumes of analyzed features decreased significantly from baseline to both 4 and 12 months, in both first-treated and second-treated eyes. Pathological features that reflect exudation, including pure fluid components (intraretinal fluid and subretinal fluid) and those with fluid and fibrovascular tissue (PED and SHRM), displayed similar responses to treatment over 12 months. Mean PED and SHRM volumes showed less pronounced but still substantial decreases over the first 2 months, reaching a plateau after the loading phase, with minimal change to 12 months. Both neurosensory retina and retinal pigment epithelium volumes showed gradual reductions over time that were less substantial than those of the exudative features.
Conclusions: We report the results of a quantitative analysis of change in retinal segmented features over time, enabled by an AI segmentation system. Cross-sectional analysis at multiple time points demonstrated significant associations between baseline OCT-derived segmented features and the volume of biomarkers at follow-up. Demonstrating how certain OCT biomarkers progress with treatment and the impact of pretreatment retinal morphology on different structural volumes may provide novel insights into disease mechanisms and aid the personalization of care. Data will be made public for future studies. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
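The cross-sectional volume comparisons in this study use the Mann-Whitney U test. As an illustrative sketch only — the study's data are not reproduced here, and the implementation below uses the large-sample normal approximation without tie or continuity corrections — the test can be written with the standard library alone:

```python
import math

def rank_with_ties(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the group while the next sorted value ties with this one
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test; returns (U, p) via normal approximation."""
    n1, n2 = len(x), len(y)
    ranks = rank_with_ties(list(x) + list(y))
    r1 = sum(ranks[:n1])                 # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)            # conventional U statistic
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p
```

For fully separated samples such as `[1, 2, 3]` vs `[4, 5, 6]`, U is 0 and the approximate two-sided p-value is just under 0.05, illustrating why small groups rarely reach significance under this test.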

2.
Ophthalmol Sci ; 4(6): 100566, 2024.
Article in English | MEDLINE | ID: mdl-39139546

ABSTRACT

Objective: Recent developments in artificial intelligence (AI) have positioned it to transform several stages of the clinical trial process. In this study, we explore the role of AI in clinical trial recruitment of individuals with geographic atrophy (GA), an advanced stage of age-related macular degeneration, amidst numerous ongoing clinical trials for this condition. Design: Cross-sectional study. Subjects: Retrospective dataset from the INSIGHT Health Data Research Hub at Moorfields Eye Hospital in London, United Kingdom, including 306 651 patients (602 826 eyes) with suspected retinal disease who underwent OCT imaging between January 1, 2008 and April 10, 2023. Methods: A deep learning model was trained on OCT scans to identify patients potentially eligible for GA trials, using AI-generated segmentations of retinal tissue. This method's efficacy was compared against a traditional keyword-based electronic health record (EHR) search. A clinical validation with fundus autofluorescence (FAF) images was performed to calculate the positive predictive value of this approach, by comparing AI predictions with expert assessments. Main Outcome Measures: The primary outcomes included the positive predictive value of AI in identifying trial-eligible patients, and the secondary outcome was the intraclass correlation between GA areas segmented on FAF by experts and AI-segmented OCT scans. Results: The AI system shortlisted a larger number of eligible patients with greater precision (1139, positive predictive value: 63%; 95% confidence interval [CI]: 54%-71%) compared with the EHR search (693, positive predictive value: 40%; 95% CI: 39%-42%). A combined AI-EHR approach identified 604 eligible patients with a positive predictive value of 86% (95% CI: 79%-92%). Intraclass correlation of GA area segmented on FAF versus AI-segmented area on OCT was 0.77 (95% CI: 0.68-0.84) for cases meeting trial criteria. 
The AI also adjusts to the distinct imaging criteria from several clinical trials, generating tailored shortlists ranging from 438 to 1817 patients. Conclusions: This study demonstrates the potential for AI in facilitating automated prescreening for clinical trials in GA, enabling site feasibility assessments, data-driven protocol design, and cost reduction. Once treatments are available, similar AI systems could also be used to identify individuals who may benefit from treatment. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
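The headline metric of this study is positive predictive value with a confidence interval. As a hedged sketch — the counts below (63 true and 37 false positives) are illustrative stand-ins, not the study's actual validation sample — PPV with a Wilson score interval can be computed directly:

```python
import math

def ppv_wilson(tp, fp, z=1.96):
    """Positive predictive value with a Wilson score confidence interval.

    tp: confirmed-eligible predictions; fp: predictions rejected on review.
    """
    n = tp + fp
    p = tp / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half
```

The Wilson interval is preferred here over the simpler Wald interval because it stays inside [0, 1] and behaves sensibly for small validation samples or extreme proportions.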

3.
Ophthalmol Sci ; 4(4): 100472, 2024.
Article in English | MEDLINE | ID: mdl-38560277

ABSTRACT

Purpose: Periodontitis, a ubiquitous severe gum disease affecting the teeth and surrounding alveolar bone, can heighten systemic inflammation. We investigated the association between very severe periodontitis and early biomarkers of age-related macular degeneration (AMD), in individuals with no eye disease. Design: Cross-sectional analysis of the prospective community-based cohort United Kingdom (UK) Biobank. Participants: Sixty-seven thousand three hundred eleven UK residents aged 40 to 70 years recruited between 2006 and 2010 underwent retinal imaging. Methods: Macular-centered OCT images acquired at the baseline visit were segmented for retinal sublayer thicknesses. Very severe periodontitis was ascertained through a touchscreen questionnaire. Linear mixed effects regression modeled the association between very severe periodontitis and retinal sublayer thicknesses, adjusting for age, sex, ethnicity, socioeconomic status, alcohol consumption, smoking status, diabetes mellitus, hypertension, refractive error, and previous cataract surgery. Main Outcome Measures: Photoreceptor layer (PRL) and retinal pigment epithelium-Bruch's membrane (RPE-BM) thicknesses. Results: Among 36 897 participants included in the analysis, 1571 (4.3%) reported very severe periodontitis. Affected individuals were older, lived in areas of greater socioeconomic deprivation, and were more likely to be hypertensive, diabetic, and current smokers (all P < 0.001). On average, those with very severe periodontitis were hyperopic (0.05 ± 2.27 diopters) while those unaffected were myopic (-0.29 ± 2.40 diopters, P < 0.001). Following adjusted analysis, very severe periodontitis was associated with thinner PRL (-0.55 µm, 95% confidence interval [CI], -0.97 to -0.12; P = 0.022) but there was no difference in RPE-BM thickness (0.00 µm, 95% CI, -0.12 to 0.13; P = 0.97). The association between PRL thickness and very severe periodontitis was modified by age (P < 0.001). 
Stratifying individuals by age, thinner PRL was seen among those aged 60 to 69 years with disease (-1.19 µm, 95% CI, -1.85 to -0.53; P < 0.001) but not among those aged < 60 years. Conclusions: Among those with no known eye disease, very severe periodontitis is statistically associated with a thinner PRL, consistent with incipient AMD. Optimizing oral hygiene may hold additional relevance for people at risk of degenerative retinal disease. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

4.
Sci Rep ; 14(1): 6775, 2024 03 21.
Article in English | MEDLINE | ID: mdl-38514657

ABSTRACT

Artificial intelligence (AI) has great potential in ophthalmology. We investigated how ambiguous outputs from an AI diagnostic support system (AI-DSS) affected diagnostic responses from optometrists when assessing cases of suspected retinal disease. Thirty optometrists (15 more experienced, 15 less) assessed 30 clinical cases. For ten, participants saw an optical coherence tomography (OCT) scan, basic clinical information and retinal photography ('no AI'). For another ten, they were also given AI-generated OCT-based probabilistic diagnoses ('AI diagnosis'); and for ten, both AI diagnosis and AI-generated OCT segmentations ('AI diagnosis + segmentation') were provided. Cases were matched across the three types of presentation and were selected to include 40% ambiguous and 20% incorrect AI outputs. Optometrist diagnostic agreement with the predefined reference standard was lowest for 'AI diagnosis + segmentation' (204/300, 68%) compared to 'AI diagnosis' (224/300, 75%, p = 0.010) and 'no AI' (242/300, 81%, p < 0.001). Agreement with AI diagnoses consistent with the reference standard decreased (174/210 vs 199/210, p = 0.003), but participants trusted the AI more (p = 0.029) with segmentations. Practitioner experience did not affect diagnostic responses (p = 0.24). More experienced participants were more confident (p = 0.012) and trusted the AI less (p = 0.038). Our findings also highlight issues around reference standard definition.
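The agreement comparisons above (e.g., 242/300 vs 204/300) are differences in proportions. The abstract does not state which test produced its p-values, so as one standard choice, a pooled two-proportion z-test reproduces the same order of magnitude:

```python
import math

def two_prop_z(k1, n1, k2, n2):
    """Two-sided z-test for a difference in proportions (pooled standard error)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

Applied to the 'no AI' vs 'AI diagnosis + segmentation' agreement counts (242/300 vs 204/300), it yields z near 3.6 and p below 0.001, consistent with the abstract's reported significance.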


Subject(s)
Deep Learning , Ophthalmology , Optometrists , Retinal Diseases , Humans , Artificial Intelligence , Ophthalmology/methods , Tomography, Optical Coherence/methods
5.
Br J Ophthalmol ; 108(4): 625-632, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-37217292

ABSTRACT

BACKGROUND/AIMS: Evaluation of telemedicine care models has highlighted its potential for exacerbating healthcare inequalities. This study seeks to identify and characterise factors associated with non-attendance across face-to-face and telemedicine outpatient appointments. METHODS: A retrospective cohort study at a tertiary-level ophthalmic institution in the UK, between 1 January 2019 and 31 October 2021. Logistic regression modelled non-attendance against sociodemographic, clinical and operational exposure variables for all new patient registrations across five delivery modes: asynchronous, synchronous telephone, synchronous audiovisual, face to face prior to the pandemic, and face to face during the pandemic. RESULTS: A total of 85 924 patients (median age 55 years, 54.4% female) were newly registered. Non-attendance differed significantly by delivery mode (9.0% face to face prepandemic, 10.5% face to face during the pandemic, 11.7% asynchronous, and 7.8% synchronous during the pandemic). Male sex, greater levels of deprivation, a previously cancelled appointment and not self-reporting ethnicity were strongly associated with non-attendance across all delivery modes. Individuals identifying as black ethnicity had worse attendance in synchronous audiovisual clinics (adjusted OR 4.24, 95% CI 1.59 to 11.28) but not in asynchronous clinics. Those not self-reporting their ethnicity were from more deprived backgrounds, had worse broadband access and had significantly higher non-attendance across all modes (all p<0.001). CONCLUSION: Persistent non-attendance among underserved populations attending telemedicine appointments highlights the challenge digital transformation faces in reducing healthcare inequalities. Implementation of new programmes should be accompanied by investigation into the differential health outcomes of vulnerable populations.
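The exposure associations here come from logistic regression, reported as odds ratios with confidence intervals. As a minimal illustration — the counts below are hypothetical, and a single 2×2 table is shown rather than the study's multivariable model — an unadjusted odds ratio with a Woolf (logit) confidence interval:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf logit confidence interval.

    a = exposed non-attenders, b = exposed attenders,
    c = unexposed non-attenders, d = unexposed attenders.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical example: 60/500 exposed vs 40/500 unexposed miss appointments
example = odds_ratio(60, 440, 40, 460)
```

An interval excluding 1 (as in the example above) is what underlies statements like "adjusted OR 4.24, 95% CI 1.59 to 11.28" in the abstract, although the study's ORs are adjusted for covariates.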


Subject(s)
Telemedicine , Humans , Male , Female , Middle Aged , Retrospective Studies , Referral and Consultation , Appointments and Schedules , Surveys and Questionnaires
6.
JAMA Ophthalmol ; 141(11): 1029-1036, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37856110

ABSTRACT

Importance: Democratizing artificial intelligence (AI) enables model development by clinicians who lack coding expertise, powerful computing resources, and large, well-labeled data sets. Objective: To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models. Design, Setting, and Participants: This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021. Exposures: Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images. Main Outcomes and Measures: The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score.
The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis. Results: For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively. Conclusions and Relevance: These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.
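The teacher-student self-training loop described in the Exposures section can be sketched end to end. This toy version stands in a nearest-centroid classifier on synthetic 1-D features for the study's image models — all data and the class geometry are illustrative — and shows the three steps: train a teacher on labeled data, pseudolabel an unlabeled pool, then train a student on both:

```python
import random

random.seed(1)

def train_centroids(xs, ys):
    """Fit a nearest-centroid classifier: per-class mean of a 1-D feature."""
    return {c: statistics_mean([x for x, y in zip(xs, ys) if y == c])
            for c in set(ys)}

def statistics_mean(pts):
    return sum(pts) / len(pts)

def predict(centroids, x):
    """Predict the class whose centroid is closest to x."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# labeled data: class 0 centred at 0, class 1 centred at 3
labeled_x = ([random.gauss(0, 1) for _ in range(50)] +
             [random.gauss(3, 1) for _ in range(50)])
labeled_y = [0] * 50 + [1] * 50

# Step 1: train the teacher on the labeled set (supervised learning)
teacher = train_centroids(labeled_x, labeled_y)

# Step 2: use the teacher's predictions as pseudolabels for an unlabeled pool
unlabeled_x = ([random.gauss(0, 1) for _ in range(200)] +
               [random.gauss(3, 1) for _ in range(200)])
pseudo_y = [predict(teacher, x) for x in unlabeled_x]

# Step 3: train the student on labeled + pseudolabeled data combined
student = train_centroids(labeled_x + unlabeled_x, labeled_y + pseudo_y)
```

The student ends up trained on 4.5 times as many examples as the teacher without any extra manual labeling, which is the cost saving the study's conclusions point to.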


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Humans , Artificial Intelligence , Diabetic Retinopathy/diagnosis , Macular Edema/diagnosis , Retina , Referral and Consultation
7.
Nature ; 622(7981): 156-163, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37704728

ABSTRACT

Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders [1]. However, the development of AI models requires substantial annotation, and models are usually task-specific with limited generalizability to different clinical applications [2]. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.


Subject(s)
Artificial Intelligence , Eye Diseases , Retina , Humans , Eye Diseases/complications , Eye Diseases/diagnostic imaging , Heart Failure/complications , Heart Failure/diagnosis , Myocardial Infarction/complications , Myocardial Infarction/diagnosis , Retina/diagnostic imaging , Supervised Machine Learning
8.
Neurology ; 101(16): e1581-e1593, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37604659

ABSTRACT

BACKGROUND AND OBJECTIVES: Cadaveric studies have shown disease-related neurodegeneration and other morphological abnormalities in the retina of individuals with Parkinson disease (PD); however, it remains unclear whether this can be reliably detected with in vivo imaging. We investigated inner retinal anatomy, measured using optical coherence tomography (OCT), in prevalent PD and subsequently assessed the association of these markers with the development of PD using a prospective research cohort. METHODS: This cross-sectional analysis used data from 2 studies. For the detection of retinal markers in prevalent PD, we used data from AlzEye, a retrospective cohort of 154,830 patients aged 40 years and older attending secondary care ophthalmic hospitals in London, United Kingdom, between 2008 and 2018. For the evaluation of retinal markers in incident PD, we used data from UK Biobank, a prospective population-based cohort where 67,311 volunteers aged 40-69 years were recruited between 2006 and 2010 and underwent retinal imaging. Macular retinal nerve fiber layer (mRNFL), ganglion cell-inner plexiform layer (GCIPL), and inner nuclear layer (INL) thicknesses were extracted from fovea-centered OCT. Linear mixed-effects models were fitted to examine the association between prevalent PD and retinal thicknesses. Hazard ratios for the association between time to PD diagnosis and retinal thicknesses were estimated using frailty models. RESULTS: Within the AlzEye cohort, there were 700 individuals with prevalent PD and 105,770 controls (mean age 65.5 ± 13.5 years, 51.7% female). Individuals with prevalent PD had thinner GCIPL (-2.12 µm, 95% CI -3.17 to -1.07, p = 8.2 × 10⁻⁵) and INL (-0.99 µm, 95% CI -1.52 to -0.47, p = 2.1 × 10⁻⁴). The UK Biobank included 50,405 participants (mean age 56.1 ± 8.2 years, 54.7% female), of whom 53 developed PD at a mean of 2,653 ± 851 days.
Thinner GCIPL (hazard ratio [HR] 0.62 per SD increase, 95% CI 0.46-0.84, p = 0.002) and thinner INL (HR 0.70, 95% CI 0.51-0.96, p = 0.026) were also associated with incident PD. DISCUSSION: Individuals with PD have reduced thickness of the INL and GCIPL of the retina. Involvement of these layers several years before clinical presentation highlights a potential role for retinal imaging in the risk stratification of PD.


Subject(s)
Parkinson Disease , Retinal Ganglion Cells , Humans , Female , Adult , Middle Aged , Aged , Male , Parkinson Disease/diagnostic imaging , Parkinson Disease/epidemiology , Tomography, Optical Coherence/methods , Retrospective Studies , Prospective Studies , Cross-Sectional Studies , Nerve Fibers , Retina/diagnostic imaging
9.
Lancet Digit Health ; 5(6): e340-e349, 2023 06.
Article in English | MEDLINE | ID: mdl-37088692

ABSTRACT

BACKGROUND: Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed through interval screening by paediatric ophthalmologists. However, improved survival of premature neonates coupled with a scarcity of available experts has raised concerns about the sustainability of this approach. We aimed to develop bespoke and code-free deep learning-based classifiers for plus disease, a hallmark of ROP, in an ethnically diverse population in London, UK, and externally validate them in ethnically, geographically, and socioeconomically diverse populations in four countries and three continents. Code-free deep learning is not reliant on the availability of expertly trained data scientists, thus being of particular potential benefit for low resource health-care settings. METHODS: This retrospective cohort study used retinal images from 1370 neonates admitted to a neonatal unit at Homerton University Hospital NHS Foundation Trust, London, UK, between 2008 and 2018. Images were acquired using a Retcam Version 2 device (Natus Medical, Pleasanton, CA, USA) on all babies who were either born at less than 32 weeks gestational age or had a birthweight of less than 1501 g. Each image was graded by two junior ophthalmologists, with disagreements adjudicated by a senior paediatric ophthalmologist. Bespoke and code-free deep learning models (CFDL) were developed for the discrimination of healthy, pre-plus disease, and plus disease. Performance was assessed internally on 200 images with the majority vote of three senior paediatric ophthalmologists as the reference standard. External validation was performed on 338 retinal images from four separate datasets from the USA, Brazil, and Egypt, with images derived from Retcam and the 3nethra neo device (Forus Health, Bengaluru, India). FINDINGS: Of the 7414 retinal images in the original dataset, 6141 images were used in the final development dataset.
For the discrimination of healthy versus pre-plus or plus disease, the bespoke model had an area under the curve (AUC) of 0·986 (95% CI 0·973-0·996) and the CFDL model had an AUC of 0·989 (0·979-0·997) on the internal test set. Both models generalised well to external validation test sets acquired using the Retcam for discriminating healthy from pre-plus or plus disease (bespoke range was 0·975-1·000 and CFDL range was 0·969-0·995). The CFDL model was inferior to the bespoke model on discriminating pre-plus disease from healthy or plus disease in the USA dataset (CFDL 0·808 [95% CI 0·671-0·909] vs bespoke 0·942 [0·892-0·982], p=0·0070). Performance also reduced when tested on the 3nethra neo imaging device (CFDL 0·865 [0·742-0·965] and bespoke 0·891 [0·783-0·977]). INTERPRETATION: Both bespoke and CFDL models conferred similar performance to senior paediatric ophthalmologists for discriminating healthy retinal images from ones with features of pre-plus or plus disease; however, CFDL models might generalise less well when considering minority classes. Care should be taken when testing on data acquired using alternative imaging devices from that used for the development dataset. Our study justifies further validation of plus disease classifiers in ROP screening and supports a potential role for code-free approaches to help prevent blindness in vulnerable neonates. FUNDING: National Institute for Health Research Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the University College London Institute of Ophthalmology. TRANSLATIONS: For the Portuguese and Arabic translations of the abstract see Supplementary Materials section.
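Model discrimination throughout this study is summarised by the area under the ROC curve. The AUC has a direct probabilistic reading — the chance that a randomly chosen diseased image scores higher than a healthy one, counting ties as half — which the following standard-library sketch computes literally (O(n·m) pairwise comparison; rank-based formulations are used in practice for large samples):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: P(positive score > negative score), ties count 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation gives 1.0 and an uninformative scorer gives 0.5, which puts the abstract's internal AUCs of 0·986 and 0·989 in context.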


Subject(s)
Deep Learning , Retinopathy of Prematurity , Infant, Newborn , Infant , Humans , Child , Retrospective Studies , Retinopathy of Prematurity/diagnosis , Sensitivity and Specificity , Infant, Premature
10.
JAMA Psychiatry ; 80(5): 478-487, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36947045

ABSTRACT

Importance: The potential association of schizophrenia with distinct retinal changes is of clinical interest but has been challenging to investigate because of a lack of sufficiently large and detailed cohorts. Objective: To investigate the association between retinal biomarkers from multimodal imaging (oculomics) and schizophrenia in a large real-world population. Design, Setting, and Participants: This cross-sectional analysis used data from a retrospective cohort of 154 830 patients 40 years and older from the AlzEye study, which linked ophthalmic data with hospital admission data across England. Patients attended Moorfields Eye Hospital, a secondary care ophthalmic hospital with a principal central site, 4 district hubs, and 5 satellite clinics in and around London, United Kingdom, and had retinal imaging during the study period (January 2008 and April 2018). Data were analyzed from January 2022 to July 2022. Main Outcomes and Measures: Retinovascular and optic nerve indices were computed from color fundus photography. Macular retinal nerve fiber layer (RNFL) and ganglion cell-inner plexiform layer (mGC-IPL) thicknesses were extracted from optical coherence tomography. Linear mixed-effects models were used to examine the association between schizophrenia and retinal biomarkers. Results: A total of 485 individuals (747 eyes) with schizophrenia (mean [SD] age, 64.9 years [12.2]; 258 [53.2%] female) and 100 931 individuals (165 400 eyes) without schizophrenia (mean age, 65.9 years [13.7]; 53 253 [52.8%] female) were included after images underwent quality control and potentially confounding conditions were excluded. Individuals with schizophrenia were more likely to have hypertension (407 [83.9%] vs 49 971 [48.0%]) and diabetes (364 [75.1%] vs 28 762 [27.6%]). 
The schizophrenia group had thinner mGC-IPL (-4.05 µm, 95% CI, -5.40 to -2.69; P = 5.4 × 10-9), which persisted when investigating only patients without diabetes (-3.99 µm; 95% CI, -6.67 to -1.30; P = .004) or just those 55 years and younger (-2.90 µm; 95% CI, -5.55 to -0.24; P = .03). On adjusted analysis, retinal fractal dimension among vascular variables was reduced in individuals with schizophrenia (-0.14 units; 95% CI, -0.22 to -0.05; P = .001), although this was not present when excluding patients with diabetes. Conclusions and Relevance: In this study, patients with schizophrenia had measurable differences in neural and vascular integrity of the retina. Differences in retinal vasculature were mostly secondary to the higher prevalence of diabetes and hypertension in patients with schizophrenia. The role of retinal features as adjunct outcomes in patients with schizophrenia warrants further investigation.


Subject(s)
Hypertension , Schizophrenia , Humans , Female , Aged , Middle Aged , Male , Retinal Ganglion Cells , Retrospective Studies , Cross-Sectional Studies , Schizophrenia/diagnostic imaging , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Multimodal Imaging
11.
Transl Vis Sci Technol ; 11(7): 12, 2022 07 08.
Article in English | MEDLINE | ID: mdl-35833885

ABSTRACT

Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 in IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
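Two ideas from the AutoMorph pipeline — ensemble averaging and confidence analysis to catch doubtful quality grades — can be sketched together. This is an illustrative reconstruction rather than AutoMorph's actual code: the 0.7 threshold, the two-class setup, and the convention that index 0 means "gradable" are all assumptions:

```python
def ensemble_mean(prob_lists):
    """Average per-class probabilities across ensemble members."""
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

def grade_with_confidence(prob_lists, threshold=0.7):
    """Grade image quality from an ensemble; flag low-confidence cases.

    Returns ('gradable' | 'ungradable' | 'review', mean_probs); cases whose
    top-class probability falls below the threshold are routed to review
    instead of being passed downstream as gradable.
    """
    probs = ensemble_mean(prob_lists)
    top = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top] < threshold:
        return "review", probs
    return ("gradable" if top == 0 else "ungradable"), probs
```

Rejecting low-confidence predictions is one simple mechanism by which a quality-grading module could cut the number of images incorrectly assessed as gradable, as the abstract reports.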


Subject(s)
Deep Learning , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Photography
12.
BMJ Open ; 12(3): e058552, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35296488

ABSTRACT

PURPOSE: Retinal signatures of systemic disease ('oculomics') are increasingly being revealed through a combination of high-resolution ophthalmic imaging and sophisticated modelling strategies. Progress is currently limited not so much by technical issues as by the lack of large labelled datasets, a sine qua non for deep learning. Such data are derived from prospective epidemiological studies, in which retinal imaging is typically unimodal, cross-sectional, of modest number and relates to cohorts that are not enriched with subpopulations of interest, such as those with systemic disease. We thus linked longitudinal multimodal retinal imaging from routinely collected National Health Service (NHS) data with systemic disease data from hospital admissions using a privacy-by-design third-party linkage approach. PARTICIPANTS: Between 1 January 2008 and 1 April 2018, 353 157 participants aged 40 years or older attended Moorfields Eye Hospital NHS Foundation Trust, a tertiary ophthalmic institution incorporating a principal central site, four district hubs and five satellite clinics in and around London, UK, serving a catchment population of approximately six million people. FINDINGS TO DATE: Among the 353 157 individuals, 186 651 had a total of 1 337 711 Hospital Episode Statistics admitted patient care episodes. Systemic diagnoses recorded at these episodes include 12 022 patients with myocardial infarction, 11 735 with all-cause stroke and 13 363 with all-cause dementia. A total of 6 261 931 retinal images of seven different modalities and across three manufacturers were acquired from 154 830 patients. The majority of retinal images were retinal photographs (n=1 874 175), followed by optical coherence tomography (n=1 567 358). FUTURE PLANS: AlzEye combines the world's largest single institution retinal imaging database with nationally collected systemic data to create an exceptionally large, enriched cohort that reflects the diversity of the population served.
First analyses will address cardiovascular diseases and dementia, with a view to identifying hidden retinal signatures that may lead to earlier detection and risk management of these life-threatening conditions.


Subject(s)
Hospitals , State Medicine , Adult , Cross-Sectional Studies , Humans , London/epidemiology , Prospective Studies
13.
Graefes Arch Clin Exp Ophthalmol ; 260(8): 2461-2473, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35122132

ABSTRACT

PURPOSE: Neovascular age-related macular degeneration (nAMD) is a major global cause of blindness. Whilst anti-vascular endothelial growth factor (anti-VEGF) treatment is effective, response varies considerably between individuals. Thus, patients face substantial uncertainty regarding their future ability to perform daily tasks. In this study, we evaluate the performance of an automated machine learning (AutoML) model which predicts visual acuity (VA) outcomes in patients receiving treatment for nAMD, in comparison to a manually coded model built using the same dataset. Furthermore, we evaluate model performance across ethnic groups and analyse how the models reach their predictions. METHODS: Binary classification models were trained to predict whether patients' VA would be 'Above' or 'Below' a score of 70 one year after initiating treatment, measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) chart. The AutoML model was built using the Google Cloud Platform, whilst the bespoke model was trained using an XGBoost framework. Models were compared and analysed using the What-if Tool (WIT), a novel model-agnostic interpretability tool. RESULTS: Our study included 1631 eyes from patients attending Moorfields Eye Hospital. The AutoML model (area under the curve [AUC], 0.849) achieved a highly similar performance to the XGBoost model (AUC, 0.847). Using the WIT, we found that the models over-predicted negative outcomes in Asian patients and performed worse in those with an ethnic category of Other. Baseline VA, age and ethnicity were the most important determinants of model predictions. Partial dependence plot analysis revealed a sigmoidal relationship between baseline VA and the probability of an outcome of 'Above'. CONCLUSION: We have described and validated an AutoML-WIT pipeline which enables clinicians with minimal coding skills to match the performance of a state-of-the-art algorithm and obtain explainable predictions.
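The sigmoidal relationship the study reports between baseline VA and the probability of an 'Above' outcome comes from partial dependence plot analysis. A partial dependence curve averages a model's prediction over a background sample while sweeping one feature; the sketch below uses a hypothetical stand-in model (`model_prob_above`, with made-up coefficients), not the study's trained classifier:

```python
import math

def model_prob_above(baseline_va, age):
    """Hypothetical stand-in classifier: P(VA above 70 letters at one year)."""
    logit = 0.15 * (baseline_va - 55) - 0.02 * (age - 75)
    return 1 / (1 + math.exp(-logit))

def partial_dependence(model, feature_grid, background_ages):
    """Average model output over the background sample at each grid value."""
    return [sum(model(v, age) for age in background_ages) / len(background_ages)
            for v in feature_grid]

# sweep baseline VA while averaging over a small background of patient ages
curve = partial_dependence(model_prob_above, [40, 55, 70, 85], [60, 70, 80, 90])
```

Because the stand-in model is logistic in baseline VA, the resulting curve is monotone and S-shaped, mirroring the sigmoidal pattern the What-if Tool surfaced for the study's models.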


Subject(s)
Macular Degeneration , Wet Macular Degeneration , Angiogenesis Inhibitors/therapeutic use , Humans , Intravitreal Injections , Machine Learning , Macular Degeneration/drug therapy , Ranibizumab/therapeutic use , Retrospective Studies , Treatment Outcome , Vascular Endothelial Growth Factor A , Visual Acuity , Wet Macular Degeneration/diagnosis , Wet Macular Degeneration/drug therapy
14.
JAMA Ophthalmol ; 140(2): 153-160, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34913967

ABSTRACT

IMPORTANCE: Telemedicine is accelerating the remote detection and monitoring of medical conditions, such as vision-threatening diseases. Meaningful deployment of smartphone apps for home vision monitoring should consider the barriers to patient uptake and engagement and address issues around digital exclusion in vulnerable patient populations. OBJECTIVE: To quantify the associations between patient characteristics and clinical measures with vision monitoring app uptake and engagement. DESIGN, SETTING, AND PARTICIPANTS: In this cohort and survey study, consecutive adult patients attending Moorfields Eye Hospital receiving intravitreal injections for retinal disease between May 2020 and February 2021 were included. EXPOSURES: Patients were offered the Home Vision Monitor (HVM) smartphone app to self-test their vision. A patient survey was conducted to capture their experience. App data, demographic characteristics, survey results, and clinical data from the electronic health record were analyzed via regression and machine learning. MAIN OUTCOMES AND MEASURES: Associations of patient uptake, compliance, and use rate measured in odds ratios (ORs). RESULTS: Of 417 included patients, 236 (56.6%) were female, and the mean (SD) age was 72.8 (12.8) years. A total of 258 patients (61.9%) were active users. Uptake was negatively associated with age (OR, 0.98; 95% CI, 0.97-0.998; P = .02) and positively associated with both visual acuity in the better-seeing eye (OR, 1.02; 95% CI, 1.00-1.03; P = .01) and baseline number of intravitreal injections (OR, 1.01; 95% CI, 1.00-1.02; P = .02). Of 258 active patients, 166 (64.3%) fulfilled the definition of compliance. Compliance was associated with patients diagnosed with neovascular age-related macular degeneration (OR, 1.94; 95% CI, 1.07-3.53; P = .002), White British ethnicity (OR, 1.69; 95% CI, 0.96-3.01; P = .02), and visual acuity in the better-seeing eye at baseline (OR, 1.02; 95% CI, 1.01-1.04; P = .04). 
Use rate was higher with increasing levels of comfort with use of modern technologies (β = 0.031; 95% CI, 0.007-0.055; P = .02). A total of 119 patients (98.4%) found the app either easy or very easy to use, while 96 (82.1%) experienced increased reassurance from using the app. CONCLUSIONS AND RELEVANCE: This evaluation of home vision monitoring for patients with common vision-threatening disease within a clinical practice setting revealed demographic, clinical, and patient-related factors associated with patient uptake and engagement. These insights inform targeted interventions to address risks of digital exclusion with smartphone-based medical devices.
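The odds ratios reported above come from regression analysis; as a minimal sketch (not the study's pipeline, and on synthetic data), an OR per year of age can be obtained by fitting a logistic regression and exponentiating the coefficient:

```python
# Illustrative only: synthetic uptake data with a weak negative age effect,
# roughly echoing the reported OR of 0.98 per year of age.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 400
age = rng.uniform(40, 95, n)
# True log-odds decrease by 0.02 per year, plus noise
logit = 2.0 - 0.02 * age + rng.normal(0, 0.5, n)
uptake = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(age.reshape(-1, 1), uptake)
odds_ratio = float(np.exp(model.coef_[0, 0]))  # OR per additional year of age
print(f"estimated OR per year of age: {odds_ratio:.3f}")
```

An OR below 1 per unit of the exposure corresponds to the negative association with age described above; confidence intervals would come from the coefficient's standard error, which scikit-learn does not report directly.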


Subject(s)
Mobile Applications , Smartphone , Adult , Aged , Female , Humans , Intravitreal Injections , Male , Vision Disorders/diagnosis , Visual Acuity
15.
Article in English | MEDLINE | ID: mdl-34671767

ABSTRACT

The developmental process of embryos follows a monotonic order. An embryo progressively cleaves from one cell to multiple cells and finally transforms into a morula and blastocyst. For time-lapse videos of embryos, most existing developmental stage classification methods make per-frame predictions using the image frame at each time step. However, classification using only images suffers from overlap between cells and imbalance between stages. Temporal information can be valuable in addressing this problem by capturing movements between neighboring frames. In this work, we propose a two-stream model for developmental stage classification. Unlike previous methods, our two-stream model accepts both temporal and image information. We develop a linear-chain conditional random field (CRF) on top of neural network features extracted from the temporal and image streams to make use of both modalities. The linear-chain CRF formulation enables tractable training of global sequential models over multiple frames while also making it possible to explicitly inject monotonic development order constraints into the learning process. We demonstrate our algorithm on two time-lapse embryo video datasets: i) mouse and ii) human embryo datasets. Our method achieves 98.1% and 80.6% accuracy for mouse and human embryo stage classification, respectively. Our approach will enable more profound clinical and biological studies and suggests a new direction for developmental stage classification by utilizing temporal information.
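The monotonicity constraint above can be illustrated with a small decoding sketch: Viterbi over per-frame stage scores where transitions may only stay at the same stage or advance. The scores are invented and this is not the paper's CRF implementation, but it shows how a transient per-frame "regression" is overridden by the order constraint.

```python
# Viterbi decoding restricted to non-decreasing stage sequences.
import numpy as np

def monotonic_viterbi(emissions):
    """emissions: (T, S) per-frame log-scores for S ordered stages."""
    T, S = emissions.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = emissions[0]
    for t in range(1, T):
        for s in range(S):
            # Only non-decreasing transitions allowed: previous stage <= s
            prev = int(np.argmax(dp[t - 1, : s + 1]))
            dp[t, s] = dp[t - 1, prev] + emissions[t, s]
            back[t, s] = prev
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Frame 2 briefly favours regressing to stage 0; the constraint overrides it.
emissions = np.log(np.array([
    [0.9, 0.05, 0.05],
    [0.2, 0.7,  0.1],
    [0.6, 0.3,  0.1],
    [0.1, 0.2,  0.7],
]))
print(monotonic_viterbi(emissions))  # → [0, 1, 1, 2]
```

In a trained linear-chain CRF the emission scores would come from the two-stream network and forbidden transitions would carry -inf potentials; the decoding structure is the same.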

16.
Lancet Digit Health ; 3(10): e665-e675, 2021 10.
Article in English | MEDLINE | ID: mdl-34509423

ABSTRACT

BACKGROUND: Geographic atrophy is a major vision-threatening manifestation of age-related macular degeneration, one of the leading causes of blindness globally. Geographic atrophy has no proven treatment or method for easy detection. Rapid, reliable, and objective detection and quantification of geographic atrophy from optical coherence tomography (OCT) retinal scans is necessary for disease monitoring and prognostic research, and to serve as clinical endpoints for therapy development. To this end, we aimed to develop and validate a fully automated method to detect and quantify geographic atrophy from OCT. METHODS: We did a deep-learning model development and external validation study on OCT retinal scans at Moorfields Eye Hospital Reading Centre and Clinical AI Hub (London, UK). A modified U-Net architecture was used to develop four distinct deep-learning models for segmentation of geographic atrophy and its constituent retinal features from OCT scans acquired with Heidelberg Spectralis. A manually segmented clinical dataset for model development comprised 5049 B-scans from 984 OCT volumes selected randomly from 399 eyes of 200 patients with geographic atrophy secondary to age-related macular degeneration, enrolled in a prospective, multicentre, phase 2 clinical trial for the treatment of geographic atrophy (FILLY study). Performance was externally validated on an independently recruited dataset from patients receiving routine care at Moorfields Eye Hospital (London, UK). The primary outcome was segmentation and classification agreement between deep-learning model geographic atrophy prediction and consensus of two independent expert graders on the external validation dataset. FINDINGS: The external validation cohort included 884 B-scans from 192 OCT volumes taken from 192 eyes of 110 patients as part of real-life clinical care at Moorfields Eye Hospital between Jan 1, 2016, and Dec 31, 2019 (mean age 78·3 years [SD 11·1], 58 [53%] women). 
The resultant geographic atrophy deep-learning model produced predictions similar to consensus human specialist grading on the external validation dataset (median Dice similarity coefficient [DSC] 0·96 [IQR 0·10]; intraclass correlation coefficient [ICC] 0·93) and outperformed agreement between human graders (DSC 0·80 [0·28]; ICC 0·79). Similarly, the three independent feature-specific deep-learning models could accurately segment each of the three constituent features of geographic atrophy: retinal pigment epithelium loss (median DSC 0·95 [IQR 0·15]), overlying photoreceptor degeneration (0·96 [0·12]), and hypertransmission (0·97 [0·07]) in the external validation dataset versus consensus grading. INTERPRETATION: We present a fully developed and validated deep-learning composite model for segmentation of geographic atrophy and its subtypes that achieves performance at a similar level to manual specialist assessment. Fully automated analysis of retinal OCT from routine clinical practice could provide a promising horizon for diagnosis and prognosis in both research and real-life patient care, following further clinical validation. FUNDING: Apellis Pharmaceuticals.
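The primary agreement metric above, the Dice similarity coefficient, is straightforward to compute from two binary segmentation masks. A minimal sketch on tiny made-up arrays rather than OCT B-scans:

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks a, b.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred  = np.array([[1, 1, 0], [1, 0, 0]])  # model segmentation (toy)
truth = np.array([[1, 1, 0], [0, 0, 1]])  # grader consensus (toy)
print(round(dice(pred, truth), 3))  # 2*2 / (3 + 3) → 0.667
```

DSC ranges from 0 (no overlap) to 1 (identical masks); the study's median of 0·96 against grader consensus therefore indicates near-complete overlap per scan.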


Subject(s)
Deep Learning , Geographic Atrophy/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Tomography, Optical Coherence/methods , Aged , Aged, 80 and over , Humans , Middle Aged , Reproducibility of Results , Retina/diagnostic imaging
17.
Curr Opin Ophthalmol ; 32(5): 445-451, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34265784

ABSTRACT

PURPOSE OF REVIEW: This article aims to discuss the current state of resources enabling the democratization of artificial intelligence (AI) in ophthalmology. RECENT FINDINGS: Open datasets, efficient labeling techniques, code-free automated machine learning (AutoML) and cloud-based platforms for deployment are resources that enable clinicians with scarce resources to drive their own AI projects. SUMMARY: Clinicians are the use-case experts who are best suited to drive AI projects tackling patient-relevant outcome measures. Taken together, open datasets, efficient labeling techniques, code-free AutoML and cloud platforms break the barriers for clinician-driven AI. As AI becomes increasingly democratized through such tools, clinicians and patients stand to benefit greatly.


Subject(s)
Artificial Intelligence , Health Services Accessibility , Ophthalmology , Cloud Computing , Datasets as Topic , Delivery of Health Care , Health Resources , Humans , Machine Learning
18.
Ann Clin Transl Neurol ; 6(7): 1178-1190, 2019 07.
Article in English | MEDLINE | ID: mdl-31353853

ABSTRACT

OBJECTIVE: Diffusion tensor imaging (DTI) of the white matter is a biomarker for neurological disease burden in tuberous sclerosis complex (TSC). To clarify the basis of abnormal diffusion in TSC, we correlated ex vivo high-resolution diffusion imaging with histopathology in four tissue types: cortex, tuber, perituber, and white matter. METHODS: Surgical specimens of three children with TSC were scanned in a 3T or 7T MRI with a structural image isotropic resolution of 137-300 micron, and diffusion image isotropic resolution of 270-1,000 micron. We stained for myelin (luxol fast blue, LFB), gliosis (glial fibrillary acidic protein, GFAP), and neurons (NeuN) and registered the digitized histopathology slides (0.686 micron resolution) to MRI for visual comparison. We then performed colocalization analysis in four tissue types in each specimen. Finally, we applied a linear mixed model (LMM) for pooled analysis across the three specimens. RESULTS: In white matter and perituber regions, LFB optical density measures correlated with fractional anisotropy (FA) and inversely with mean diffusivity (MD). In white matter only, GFAP correlated with MD, and inversely with FA. In tubers and in the cortex, there was little variation in mean LFB and GFAP signal intensity, and no correlation with MRI metrics. Neuronal density correlated with MD. In the analysis of the combined specimens, the most robust correlation was between white matter MD and LFB metrics. INTERPRETATION: In TSC, diffusion imaging abnormalities in microscopic tissue types correspond to specific histopathological markers. Across all specimens, white matter diffusivity correlates with myelination.
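The two DTI metrics correlated with histology above, mean diffusivity (MD) and fractional anisotropy (FA), are standard functions of the three eigenvalues of the diffusion tensor. A sketch with illustrative eigenvalues (not values from the study):

```python
# MD is the eigenvalue mean; FA measures how far the tensor is from isotropic.
import numpy as np

def md_fa(eigvals):
    l = np.asarray(eigvals, dtype=float)
    md = l.mean()
    # FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
    fa = np.sqrt(1.5) * np.linalg.norm(l - md) / np.linalg.norm(l)
    return md, fa

# Prolate tensor typical of coherent white matter (units: 1e-3 mm^2/s)
md, fa = md_fa([1.6, 0.4, 0.4])
print(f"MD = {md:.2f}, FA = {fa:.3f}")  # MD = 0.80, FA = 0.707
```

Higher myelination in white matter constrains diffusion perpendicular to fibres, which raises FA and lowers MD, consistent with the reported positive LFB-FA and negative LFB-MD correlations.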


Subject(s)
Myelin Sheath/pathology , Tuberous Sclerosis/diagnostic imaging , Tuberous Sclerosis/pathology , White Matter/diagnostic imaging , White Matter/pathology , Anisotropy , Brain/pathology , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/pathology , Diffusion Tensor Imaging/methods , Female , Gliosis/pathology , Humans , Infant , Infant, Newborn , Male , Neurons/pathology