1.
J Clin Hypertens (Greenwich) ; 26(6): 724-734, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38683601

ABSTRACT

Although the association between persistent hypertension and the compromise of both micro- and macro-circulatory functions is well recognized, a significant gap remains in quantitative investigations of the interplay between microvascular and macrovascular injuries. In this study, the authors examined the relationship between brachial-ankle pulse wave velocity (baPWV) and hypertensive retinopathy in treated hypertensive adults. The authors conducted a cross-sectional study of treated hypertensive patients using the last follow-up data from the China Stroke Primary Prevention Trial (CSPPT) in 2013. baPWV was measured automatically with PWV/ABI instruments. The Keith-Wagener-Barker classification was used to diagnose hypertensive retinopathy. The odds ratio (OR) and 95% confidence interval (CI) for the association between baPWV and hypertensive retinopathy were estimated using multivariable logistic regression models. OR curves were created using a multivariable-adjusted restricted cubic spline model to investigate potential non-linear dose-response relationships between baPWV and hypertensive retinopathy. A total of 8514 (75.5%) of 11,279 participants were diagnosed with hypertensive retinopathy. The prevalence of hypertensive retinopathy increased from the bottom quartile of baPWV to the top quartile (quartile 1: 70.7%; quartile 2: 76.1%; quartile 3: 76.7%; quartile 4: 78.4%). After adjusting for potential confounders, baPWV was positively associated with hypertensive retinopathy (OR = 1.05, 95% CI, 1.03-1.07, p < .001). Compared with those in the lowest baPWV quartile, those in the highest baPWV quartile had an odds ratio for hypertensive retinopathy of 1.61 (OR = 1.61, 95% CI: 1.37-1.89, p < .001). A two-piecewise logistic regression model demonstrated a nonlinear relationship between baPWV and hypertensive retinopathy, with an inflection point of 17.1 m/s above which the effect was saturated.
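As a rough consistency check, the quartile prevalences reported above imply an unadjusted odds ratio between the top and bottom baPWV quartiles of about 1.50; the reported 1.61 is the multivariable-adjusted estimate, so the raw value differs. A minimal sketch of how an odds ratio relates to two prevalences:

```python
# Sketch: unadjusted odds ratio between the top and bottom baPWV quartiles,
# computed from the prevalences reported in the abstract (78.4% vs 70.7%).
# Illustration only; the paper's 1.61 is a multivariable-adjusted estimate.

def odds(p):
    """Convert a prevalence (probability) to odds."""
    return p / (1.0 - p)

def odds_ratio(p_exposed, p_reference):
    """Odds ratio of an outcome between two groups, given their prevalences."""
    return odds(p_exposed) / odds(p_reference)

or_q4_vs_q1 = odds_ratio(0.784, 0.707)
print(round(or_q4_vs_q1, 2))  # → 1.5
```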


Subject(s)
Ankle Brachial Index , Hypertension , Hypertensive Retinopathy , Pulse Wave Analysis , Humans , Male , Female , Ankle Brachial Index/methods , Middle Aged , China/epidemiology , Cross-Sectional Studies , Pulse Wave Analysis/methods , Hypertension/physiopathology , Hypertension/epidemiology , Hypertension/diagnosis , Hypertension/drug therapy , Hypertension/complications , Aged , Hypertensive Retinopathy/epidemiology , Hypertensive Retinopathy/diagnosis , Prevalence , Primary Prevention/methods , Stroke/epidemiology , Stroke/prevention & control , Stroke/physiopathology , Risk Factors , Antihypertensive Agents/therapeutic use
2.
Ophthalmol Retina ; 8(7): 666-677, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38280426

ABSTRACT

OBJECTIVE: We aimed to develop a deep learning system capable of identifying subjects with cognitive impairment quickly and easily based on multimodal ocular images. DESIGN: Cross-sectional study. SUBJECTS: Participants of Beijing Eye Study 2011 and patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. METHODS: We trained and validated a deep learning algorithm to assess cognitive impairment using retrospectively collected data from the Beijing Eye Study 2011. Cognitive impairment was defined as a Mini-Mental State Examination score < 24. Based on fundus photographs and OCT images, we developed 5 models based on the following sets of images: macula-centered fundus photographs, optic disc-centered fundus photographs, fundus photographs of both fields, OCT images, and fundus photographs of both fields with OCT (multimodal). The performance of the models was evaluated and compared in an external validation data set, which was collected from patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. MAIN OUTCOME MEASURES: Area under the curve (AUC). RESULTS: A total of 9424 retinal photographs and 4712 OCT images were used to develop the model. The external validation sets from each center included 1180 fundus photographs and 590 OCT images. Model comparison revealed that the multimodal model performed best, achieving an AUC of 0.820 in the internal validation set, 0.786 in external validation set 1, and 0.784 in external validation set 2. We evaluated the performance of the multimodal model across sexes and age groups; there were no significant differences. The heatmap analysis showed that the multimodal model used signals around the optic disc in fundus photographs, and the retina and choroid around the macular and optic disc regions in OCT images, to identify participants with cognitive impairment. 
CONCLUSIONS: Fundus photographs and OCT can provide valuable information on cognitive function. Multimodal models provide richer information compared with single-mode models. Deep learning algorithms based on multimodal retinal images may be capable of screening cognitive impairment. This technique has potential value for broader implementation in community-based screening or clinic settings. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.


Subject(s)
Cognitive Dysfunction , Deep Learning , Fundus Oculi , Tomography, Optical Coherence , Humans , Cross-Sectional Studies , Female , Male , Tomography, Optical Coherence/methods , Retrospective Studies , Aged , Cognitive Dysfunction/diagnosis , Middle Aged , Multimodal Imaging , ROC Curve , Optic Disk/diagnostic imaging , Optic Disk/pathology , Mass Screening/methods
3.
Cureus ; 15(11): e48532, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074014

ABSTRACT

Chorioretinal atrophy with pigmentation along the retinal veins was observed in the right fundus of a 49-year-old patient. Extensive retinitis pigmentosa (RP) was observed in the left eye. Dynamic quantitative visual field testing revealed a scotoma in the right eye corresponding to the area of retinochoroidal atrophy, and afferent visual field constriction was observed in the left eye. Electroretinography revealed an attenuated-type response in the right eye and a negative-type response in the left eye. Accordingly, the right and left eyes were diagnosed with pigmented paravenous retinochoroidal atrophy (PPRCA) and RP, respectively. These findings suggest that the proportion of PPRCA patients with unilateral RP may be higher than expected.

4.
Ophthalmol Retina ; 7(10): 910-917, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37423485

ABSTRACT

PURPOSE: To describe the alterations of the peripheral retina in extensive macular atrophy with pseudodrusen-like deposits (EMAP) by means of ultrawidefield fundus photography (UWFFP) and ultrawidefield fundus autofluorescence (UWF-FAF). STUDY DESIGN: Prospective, observational case series. PARTICIPANTS: Twenty-three patients affected by EMAP. METHODS: Each patient underwent best-corrected visual acuity (BCVA) measurement, UWFFP, and UWF-FAF. The area of macular atrophy, as well as the pseudodrusen-like deposits and peripheral degeneration, were assessed using UWF images, at baseline and over the follow-up. MAIN OUTCOME MEASURES: The assessment of the clinical patterns of both pseudodrusen-like deposits and peripheral retinal degeneration. Secondary outcomes included assessing macular atrophy by means of UWFFP and UWF-FAF, and tracking progression over the follow-up. RESULTS: Twenty-three patients (46 eyes) were included, of whom 14 (60%) were female. Mean age was 59.0 ± 5 years. Mean BCVA at baseline was 0.4 ± 0.4, declining at a mean rate of 0.13 ± 0.21 logarithm of the minimum angle of resolution/year. Macular atrophy at baseline was 18.8 ± 14.2 mm2 on UWF-FAF, enlarging at a rate of 0.46 ± 0.28 mm/year, after the square root transformation. Pseudodrusen-like deposits were present in all cases at baseline, and their detection decreased over the follow-up. Three main types of peripheral degeneration were identified: retinal pigment epithelium alterations, pavingstone-like changes, and pigmented chorioretinal atrophy. Peripheral degeneration progressed in 29 eyes (63.0%), at a median rate of 0.7 (interquartile range, 0.4-1.2) sectors/year. CONCLUSIONS: Extensive macular atrophy with pseudodrusen-like deposits is a complex disease involving not only the macula, but also the midperiphery and the periphery of the retina. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
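The square-root transformation mentioned above linearizes atrophy growth by tracking the square root of the lesion area (mm rather than mm²), which removes the dependence of raw area growth on baseline lesion size. A minimal sketch, using the abstract's baseline area (18.8 mm²) and growth rate (0.46 mm/year); the one-year projection is our own illustration, not a reported figure:

```python
# Sketch of the square-root transformation used to express atrophy growth.
import math

def sqrt_growth_rate(area_start_mm2, area_end_mm2, years):
    """Growth rate in mm/year after square-root transformation."""
    return (math.sqrt(area_end_mm2) - math.sqrt(area_start_mm2)) / years

def project_area(area_start_mm2, rate_mm_per_year, years):
    """Project a future area (mm^2) from a square-root-transformed rate."""
    return (math.sqrt(area_start_mm2) + rate_mm_per_year * years) ** 2

# Baseline 18.8 mm^2 growing at 0.46 mm/year for one year:
print(round(project_area(18.8, 0.46, 1.0), 1))  # → 23.0
```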

5.
Neuroophthalmology ; 47(4): 177-192, 2023.
Article in English | MEDLINE | ID: mdl-37434667

ABSTRACT

Optic disc swelling is a manifestation of a broad range of processes affecting the optic nerve head and/or the anterior segment of the optic nerve. Accurately diagnosing optic disc oedema, grading its severity, and recognising its cause, is crucial in order to treat patients in a timely manner and limit vision loss. Some ocular fundus features, in light of a patient's history and visual symptoms, may suggest a specific mechanism or aetiology of the visible disc oedema, but current criteria can at most enable an educated guess as to the most likely cause. In many cases only the clinical evolution and ancillary testing can inform the exact diagnosis. The development of ocular fundus imaging, including colour fundus photography, fluorescein angiography, optical coherence tomography, and multimodal imaging, has provided assistance in quantifying swelling, distinguishing true optic disc oedema from pseudo-optic disc oedema, and differentiating among the numerous causes of acute optic disc oedema. However, the diagnosis of disc oedema is often delayed or not made in busy emergency departments and outpatient neurology clinics. Indeed, most non-eye care providers are not able to accurately perform ocular fundus examination, increasing the risk of diagnostic errors in acute neurological settings. The implementation of non-mydriatic fundus photography and artificial intelligence technology in the diagnostic process addresses these important gaps in clinical practice.

6.
Diagnostics (Basel) ; 13(10)2023 May 09.
Article in English | MEDLINE | ID: mdl-37238165

ABSTRACT

Polypoidal choroidal vasculopathy (PCV) is a subtype of neovascular age-related macular degeneration (nAMD) that is characterized by a branching neovascular network and polypoidal lesions. It is important to differentiate PCV from typical nAMD as there are differences in treatment response between subtypes. Indocyanine green angiography (ICGA) is the gold standard for diagnosing PCV; however, ICGA is an invasive detection method and impractical for extensive use for regular long-term monitoring. In addition, access to ICGA may be limited in some settings. The purpose of this review is to summarize the utilization of multimodal imaging modalities (color fundus photography, optical coherence tomography (OCT), OCT angiography (OCTA), and fundus autofluorescence (FAF)) in differentiating PCV from typical nAMD and predicting disease activity and prognosis. In particular, OCT shows tremendous potential in diagnosing PCV. Characteristics such as subretinal pigment epithelium (RPE) ring-like lesion, en face OCT-complex RPE elevation, and sharp-peaked pigment epithelial detachment provide high sensitivity and specificity for differentiating PCV from nAMD. With the use of more practical, non-ICGA imaging modalities, the diagnosis of PCV can be more easily made and treatment tailored as necessary for optimal outcomes.

7.
Front Med (Lausanne) ; 10: 1115032, 2023.
Article in English | MEDLINE | ID: mdl-36936225

ABSTRACT

Purpose: The aim of this study was to prospectively quantify the level of agreement among the deep learning system, non-physician graders, and general ophthalmologists with different levels of clinical experience in detecting referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Methods: Deep learning systems for diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy classification, with accuracy proven through internal and external validation, were established using 210,473 fundus photographs. Five trained non-physician graders and 47 general ophthalmologists from China were chosen randomly and included in the analysis. A test set of 300 fundus photographs was randomly selected from an independent dataset of 42,388 gradable images. The grading outcomes of five retinal and five glaucoma specialists were used as the reference standard, which was considered achieved when ≥50% of the specialists' gradings agreed. The area under the receiver operating characteristic curve of each group relative to the reference standard was used to compare agreement for referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Results: The test set included 45 images (15.0%) with referable diabetic retinopathy, 46 (15.3%) with age-related macular degeneration, 46 (15.3%) with glaucomatous optic neuropathy, and 163 (55.4%) without these diseases. The area under the receiver operating characteristic curve for non-physician graders, ophthalmologists with 3-5 years of clinical practice, ophthalmologists with 5-10 years of clinical practice, ophthalmologists with >10 years of clinical practice, and the deep learning system for referable diabetic retinopathy was 0.984, 0.964, 0.965, 0.954, and 0.990, respectively (p = 0.415). The results for referable age-related macular degeneration were 0.912, 0.933, 0.946, 0.958, and 0.945, respectively (p = 0.145), and 0.675, 0.862, 0.894, 0.976, and 0.994 for referable glaucomatous optic neuropathy, respectively (p < 0.001). Conclusion: The findings of this study suggest that the accuracy of this deep learning system is comparable to that of trained non-physician graders and general ophthalmologists for referable diabetic retinopathy and age-related macular degeneration, but the deep learning system performs better than trained non-physician graders in detecting referable glaucomatous optic neuropathy.
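The ≥50% agreement rule used for the reference standard above can be sketched as a simple majority vote over specialist gradings. The labels below are invented for illustration:

```python
# Sketch: a reference-standard label is "achieved" when at least half of the
# specialist gradings agree, as in the study above. Labels are illustrative.
from collections import Counter

def reference_standard(gradings):
    """Return the consensus label if >= 50% of gradings agree, else None."""
    if not gradings:
        return None
    label, count = Counter(gradings).most_common(1)[0]
    return label if count / len(gradings) >= 0.5 else None

print(reference_standard(["RDR", "RDR", "RDR", "no RDR", "ungradable"]))  # → RDR
print(reference_standard(["a", "b", "c", "d", "e"]))                      # → None
```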

8.
J Clin Med ; 12(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36769865

ABSTRACT

This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment for optic disc photographs achieves high accuracy in both ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with potential applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.
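The binary (usable/unusable) task above can be viewed as the ternary task with the good and acceptable classes collapsed into "usable", which is one reason the binary model scores higher. A toy sketch with invented labels:

```python
# Sketch: collapsing ternary quality labels (good / acceptable / poor) into
# the binary usable / unusable task, and scoring accuracy on each.
# The label lists below are invented for illustration.

TO_BINARY = {"good": "usable", "acceptable": "usable", "poor": "unusable"}

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

truth = ["good", "acceptable", "poor", "poor", "good"]
preds = ["good", "poor", "poor", "poor", "acceptable"]

ternary_acc = accuracy(truth, preds)
binary_acc = accuracy([TO_BINARY[t] for t in truth],
                      [TO_BINARY[p] for p in preds])
print(ternary_acc, binary_acc)  # → 0.6 0.8  (errors within "usable" vanish)
```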

9.
Ophthalmol Sci ; 3(1): 100228, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36345378

ABSTRACT

Objective: To compare general ophthalmologists, retina specialists, and the EyeArt artificial intelligence (AI) system against the clinical reference standard for detecting more than mild diabetic retinopathy (mtmDR). Design: Prospective, pivotal, multicenter trial conducted from April 2017 to May 2018. Participants: Participants were aged ≥ 18 years, had diabetes mellitus, and underwent dilated ophthalmoscopy. A total of 521 of 893 participants met these criteria and completed the study protocol. Testing: Participants underwent 2-field fundus photography (macula centered, disc centered) for the EyeArt system, dilated ophthalmoscopy, and 4-widefield stereoscopic dilated fundus photography for reference standard grading. Main Outcome Measures: For mtmDR detection, sensitivity and specificity of EyeArt gradings of 2-field fundus photographs and of ophthalmoscopy grading versus a rigorous clinical reference standard comprising Reading Center grading of 4-widefield stereoscopic dilated fundus photographs using the ETDRS severity scale. The AI system provided automatic eye-level results regarding mtmDR. Results: Overall, 521 participants (999 eyes) at 10 centers underwent dilated ophthalmoscopy: 406 by nonretina and 115 by retina specialists. The Reading Center graded 207 eyes positive and 792 negative for mtmDR. Of these 999 eyes, 26 were ungradable by the EyeArt system, leaving 973 eyes with both EyeArt and Reading Center gradings. Retina specialists correctly identified 22 of 37 eyes as positive (sensitivity 59.5%) and 182 of 184 eyes as negative (specificity 98.9%) for mtmDR, versus the EyeArt AI system, which identified 36 of 37 as positive (sensitivity 97%) and 162 of 184 eyes as negative (specificity 88%) for mtmDR. 
General ophthalmologists correctly identified 35 of 170 eyes as positive (sensitivity 20.6%) and 607 of 608 eyes as negative (specificity 99.8%) for mtmDR compared with the EyeArt AI system that identified 164 of 170 as positive (sensitivity 96.5%) and 525 of 608 eyes as negative (specificity 86%) for mtmDR. Conclusions: The AI system had a higher sensitivity for detecting mtmDR than either general ophthalmologists or retina specialists compared with the clinical reference standard. It can potentially serve as a low-cost point-of-care diabetic retinopathy detection tool and help address the diabetic eye screening burden.
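The sensitivity and specificity figures above follow directly from the reported eye counts; a quick arithmetic check:

```python
# Sketch: reproducing the eye-level sensitivity/specificity figures in the
# abstract directly from the reported counts.

def sensitivity(true_pos, total_pos):
    """True-positive rate: positives correctly identified / all positives."""
    return true_pos / total_pos

def specificity(true_neg, total_neg):
    """True-negative rate: negatives correctly identified / all negatives."""
    return true_neg / total_neg

# Retina specialists vs. EyeArt on the same 37 positive / 184 negative eyes:
print(f"{sensitivity(22, 37):.1%}")   # → 59.5%
print(f"{specificity(182, 184):.1%}") # → 98.9%
print(f"{sensitivity(36, 37):.1%}")   # → 97.3%
print(f"{specificity(162, 184):.1%}") # → 88.0%
```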

10.
Ophthalmol Sci ; 2(4): 100198, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36531570

ABSTRACT

Purpose: The curation of images using human resources is time intensive but an essential step for developing artificial intelligence (AI) algorithms. Our goal was to develop and implement an AI algorithm for image curation in a high-volume setting. We also explored AI tools that will assist in deploying a tiered approach, in which the AI model labels images and flags potential mislabels for human review. Design: Implementation of an AI algorithm. Participants: Seven-field stereoscopic images from multiple clinical trials. Methods: The 7-field stereoscopic image protocol includes 7 pairs of images from various parts of the central retina along with images of the anterior part of the eye. All images were labeled for field number by reading center graders. The model output included classification of the retinal images into 8 field numbers. Probability scores (0-1) were generated to identify misclassified images, with 1 indicating a high probability of a correct label. Main Outcome Measures: Agreement of AI prediction with grader classification of field number and the use of probability scores to identify mislabeled images. Results: The AI model was trained and validated on 17 529 images and tested on 3004 images. The pooled agreement of field numbers between grader classification and the AI model was 88.3% (kappa, 0.87). The pooled mean probability score was 0.97 (standard deviation [SD], 0.08) for images for which the graders agreed with the AI-generated labels and 0.77 (SD, 0.19) for images for which the graders disagreed with the AI-generated labels (P < 0.0001). Using receiver operating characteristic curves, a probability score of 0.99 was identified as a cutoff for distinguishing mislabeled images. A tiered workflow using a probability score of < 0.99 as a cutoff would include 27.6% of the 3004 images for human review and reduce the error rate from 11.7% to 1.5%. Conclusions: The implementation of AI algorithms requires measures in addition to model validation. 
Tools to flag potential errors in the labels generated by AI models will reduce inaccuracies, increase trust in the system, and provide data for continuous model development.
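The tiered workflow described above reduces to a threshold rule: AI labels whose probability score falls below the cutoff go to human review (assumed here to catch every flagged error), while the rest are accepted as-is. The scores and correctness flags below are invented for illustration, not the study's data:

```python
# Sketch of a tiered curation workflow: flag low-confidence AI labels for
# human review and measure the residual error rate among unflagged images.

def tiered_review(scores, ai_correct, cutoff=0.99):
    """Return (fraction flagged for review, residual error rate).

    scores     : per-image probability scores from the model
    ai_correct : whether each AI label matched the grader label
    """
    n = len(scores)
    flagged = [s < cutoff for s in scores]
    review_fraction = sum(flagged) / n
    # Errors that survive are those NOT flagged and NOT correct.
    residual_errors = sum((not f) and (not c)
                          for f, c in zip(flagged, ai_correct))
    return review_fraction, residual_errors / n

scores = [0.999, 0.95, 0.999, 0.60, 0.999, 0.999, 0.85, 0.999, 0.999, 0.70]
correct = [True, False, True, False, True, False, False, True, True, False]

frac, err = tiered_review(scores, correct)
print(frac, err)  # → 0.4 0.1  (vs. a 0.4 error rate with no review tier)
```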

11.
BMC Ophthalmol ; 22(1): 483, 2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36510171

ABSTRACT

BACKGROUND: To verify the efficacy of automatic glaucoma screening and classification with a deep learning system (DLS). METHODS: A cross-sectional, retrospective study in a tertiary referral hospital. Patients with a healthy optic disc, high-tension glaucoma, or normal-tension glaucoma were enrolled. Complicated non-glaucomatous optic neuropathy was excluded. Colour and red-free fundus images were collected for DLS development and comparison of their efficacy. A convolutional neural network with the pre-trained EfficientNet-b0 model was selected for machine learning. Glaucoma screening (binary) and ternary classification, with or without additional demographics (age, gender, high myopia), were evaluated, followed by construction of confusion matrices and heatmaps. Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score were the main outcome measures. RESULTS: Two hundred and twenty-two cases (421 eyes) were enrolled, with 1851 images in total (1207 normal and 644 glaucomatous discs). The training and test sets comprised 1539 and 312 images, respectively. Without demographic data, the AUC, accuracy, precision, sensitivity, F1 score, and specificity of our deep learning system for eye-based glaucoma screening were 0.98, 0.91, 0.86, 0.86, 0.86, and 0.94 in the test set. The same outcome measures for eye-based ternary classification without demographic data were 0.94, 0.87, 0.87, 0.87, 0.87, and 0.94, respectively. Adding demographics had no significant impact on efficacy, but establishing a linkage between eyes and images helped achieve better performance. The confusion matrices and heatmaps suggested that retinal lesions and photograph quality could affect classification. Colour fundus images played a major role in glaucoma classification compared with red-free fundus images. 
CONCLUSIONS: Promising results with high AUC and specificity were shown in distinguishing normal optic nerve from glaucomatous fundus images and doing further classification.


Subject(s)
Deep Learning , Glaucoma , Optic Disk , Humans , Case-Control Studies , Retrospective Studies , Cross-Sectional Studies , Optic Disk/diagnostic imaging , Optic Disk/pathology , Fundus Oculi , Glaucoma/pathology , ROC Curve
12.
Front Med (Lausanne) ; 9: 923096, 2022.
Article in English | MEDLINE | ID: mdl-36250081

ABSTRACT

Objective: To assess the accuracy of probabilistic deep learning models in discriminating normal eyes from eyes with glaucoma using fundus photographs and visual fields. Design: Algorithm development for discriminating normal and glaucoma eyes using data from a multicenter, cross-sectional, case-control study. Subjects and participants: Fundus photograph and visual field data from 1,655 eyes of 929 normal and glaucoma subjects were used to develop and test the deep learning models, and an independent group of 196 eyes of 98 normal and glaucoma patients was used to validate them. Main outcome measures: Accuracy and area under the receiver-operating characteristic curve (AUC). Methods: Fundus photographs and OCT images were carefully examined by clinicians to identify glaucomatous optic neuropathy (GON). When GON was detected by the reader, the finding was further evaluated by another clinician. Three probabilistic deep convolutional neural network (CNN) models were developed using 1,655 fundus photographs, 1,655 visual fields, and 1,655 pairs of fundus photographs and visual fields collected from Compass instruments. The deep learning models were trained and tested using 80% of the fundus photographs and visual fields for the training set and the remaining 20% for the test set, and were further validated using an independent validation dataset. The performance of the probabilistic deep learning models was compared with that of the corresponding deterministic CNN models. Results: The AUC of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and combined modalities using the development dataset was 0.90 (95% confidence interval: 0.89-0.92), 0.89 (0.88-0.91), and 0.94 (0.92-0.96), respectively. The AUC using the independent validation dataset was 0.94 (0.92-0.95), 0.98 (0.98-0.99), and 0.98 (0.98-0.99), respectively. 
The AUC using an early glaucoma subset was 0.90 (0.88-0.91), 0.74 (0.73-0.75), and 0.91 (0.89-0.93), respectively. Eyes that were misclassified had significantly higher uncertainty in the likelihood of diagnosis than eyes that were classified correctly. The uncertainty level of the correctly classified eyes was much lower in the combined model than in the model based on visual fields only. The AUCs of the deterministic CNN model using fundus images, visual fields, and combined modalities were 0.87 (0.85-0.90), 0.88 (0.84-0.91), and 0.91 (0.89-0.94) on the development dataset; 0.91 (0.89-0.93), 0.97 (0.95-0.99), and 0.97 (0.96-0.99) on the independent validation dataset; and 0.88 (0.86-0.91), 0.75 (0.73-0.77), and 0.92 (0.89-0.95) on the early glaucoma subset, respectively. Conclusion and relevance: Probabilistic deep learning models can detect glaucoma from multi-modal data with high accuracy. Our findings suggest that models combining visual field and fundus photograph modalities detect glaucoma with higher accuracy. While probabilistic and deterministic CNN models provided similar performance, probabilistic models also generate a certainty level for the outcome, providing another layer of confidence in decision making.
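One generic way a probabilistic classifier can attach an uncertainty level to each prediction is to aggregate repeated stochastic forward passes (e.g., Monte Carlo dropout) and use the spread of the sampled probabilities. This is an illustrative sketch, not the paper's architecture, and the numbers are invented:

```python
# Sketch: per-eye uncertainty from the spread of sampled class probabilities.
import statistics

def predict_with_uncertainty(sampled_probs, threshold=0.5):
    """Aggregate repeated stochastic forward passes for one eye.

    sampled_probs : probabilities of 'glaucoma' from repeated sampled passes
    Returns (label, mean probability, uncertainty = standard deviation).
    """
    mean_p = statistics.mean(sampled_probs)
    label = "glaucoma" if mean_p >= threshold else "normal"
    return label, mean_p, statistics.stdev(sampled_probs)

confident = predict_with_uncertainty([0.91, 0.93, 0.90, 0.94, 0.92])
borderline = predict_with_uncertainty([0.30, 0.70, 0.45, 0.65, 0.40])
print(confident[0], round(confident[2], 3))   # consistent samples, low spread
print(borderline[0], round(borderline[2], 3)) # near-chance mean, high spread
```

A misclassified eye would typically look like the second case: a mean probability near the decision threshold with a wide spread across samples.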

13.
J Biomed Inform ; 136: 104233, 2022 12.
Article in English | MEDLINE | ID: mdl-36280089

ABSTRACT

Glaucoma is the leading cause of irreversible blindness, and early detection and timely treatment are essential for glaucoma management. However, due to the interindividual variability in the characteristics of glaucoma onset, no single feature is sufficient for monitoring glaucoma progression in isolation. There is an urgent need for more comprehensive diagnostic methods with higher accuracy. In this study, we proposed a multi-feature deep learning (MFDL) system based on intraocular pressure (IOP), color fundus photographs (CFPs) and visual fields (VFs) to classify glaucoma into four severity levels. We designed a three-phase, coarse-to-fine framework for glaucoma severity diagnosis comprising screening, detection, and classification. We trained it on 6,131 samples from 3,324 patients and tested it on an independent set of 240 samples from 185 patients. Our results show that MFDL achieved a higher accuracy of 0.842 (95% CI, 0.795-0.888) than direct four-class deep learning (DFC-DL, accuracy of 0.513 [0.449-0.576]), CFP-based single-feature deep learning (CFP-DL, accuracy of 0.483 [0.420-0.547]), and VF-based single-feature deep learning (VF-DL, accuracy of 0.725 [0.668-0.782]). Its performance was statistically significantly superior to that of 8 junior ophthalmologists. It also outperformed 3 senior ophthalmologists and 1 expert, and was comparable with 2 glaucoma experts (0.842 vs 0.854, p = 0.663; 0.842 vs 0.858, p = 0.580). With the assistance of MFDL, junior ophthalmologists achieved statistically significantly higher accuracy, with increases ranging from 7.50% to 17.9%; the corresponding increases for seniors and experts were 6.30% to 7.50% and 5.40% to 7.50%, respectively. The mean diagnosis time per patient with MFDL was 5.96 s. The proposed model can potentially assist ophthalmologists in efficient and accurate glaucoma diagnosis and could aid the clinical management of glaucoma.
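The three-phase, coarse-to-fine idea above (screen out normals, detect glaucoma, then grade severity) can be sketched as a cascade of simple decision functions. The stage rules below are hypothetical stand-ins driven by IOP and visual-field mean deviation (MD); the real MFDL system uses trained deep networks on IOP, CFP and VF inputs:

```python
# Sketch of a coarse-to-fine, three-phase cascade ending in one of four
# outputs (normal / mild / moderate / severe). Thresholds are illustrative.

def screen(iop_mmhg, vf_md_db):
    """Phase 1: coarse screening -- is the eye suspicious at all?"""
    return iop_mmhg > 21 or vf_md_db < -2

def detect(vf_md_db):
    """Phase 2: does the suspicious eye show glaucomatous field loss?"""
    return vf_md_db < -2

def classify_severity(vf_md_db):
    """Phase 3: grade severity from VF mean deviation (illustrative cuts)."""
    if vf_md_db >= -6:
        return "mild"
    if vf_md_db >= -12:
        return "moderate"
    return "severe"

def diagnose(iop_mmhg, vf_md_db):
    if not screen(iop_mmhg, vf_md_db):   # phase 1: not suspicious
        return "normal"
    if not detect(vf_md_db):             # phase 2: suspicious but no field loss
        return "normal"
    return classify_severity(vf_md_db)   # phase 3: grade the confirmed case

print(diagnose(15, -1.0))   # → normal
print(diagnose(24, -8.5))   # → moderate
```

The cascade lets cheap, high-recall early phases discard the easy negatives so the final classifier only grades plausible cases, which is the design rationale the abstract contrasts with direct four-class prediction.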


Subject(s)
Deep Learning , Glaucoma , Humans , Glaucoma/diagnosis , Diagnostic Techniques, Ophthalmological , Photography/methods , Diagnosis, Computer-Assisted/methods
14.
Front Med (Lausanne) ; 9: 794045, 2022.
Article in English | MEDLINE | ID: mdl-35847781

ABSTRACT

Purpose: To develop artificial intelligence (AI)-based deep learning (DL) models for automatically detecting the ischemia type and the non-perfusion area (NPA) from color fundus photographs (CFPs) of patients with branch retinal vein occlusion (BRVO). Methods: This was a retrospective analysis of 274 CFPs from patients diagnosed with BRVO. All DL models were trained using a deep convolutional neural network (CNN) based on 45-degree CFPs covering the fovea and the optic disk. We first trained a DL algorithm to identify BRVO patients with or without the necessity of retinal photocoagulation from 219 CFPs and validated the algorithm on 55 CFPs. Next, we trained another DL algorithm to segment the NPA from 104 CFPs and validated it on 29 CFPs, in which the NPA was manually delineated by 3 experienced ophthalmologists according to fundus fluorescein angiography. Both DL models were 5-fold cross-validated. The recall, precision, accuracy, and area under the curve (AUC) were used to evaluate the DL models in comparison with independent ophthalmologists at three levels of seniority. Results: For the first DL model, the recall, precision, accuracy, and AUC were 0.75 ± 0.08, 0.80 ± 0.07, 0.79 ± 0.02, and 0.82 ± 0.03, respectively, for predicting the necessity of laser photocoagulation from BRVO CFPs. The second DL model was able to segment the NPA in CFPs of BRVO with an AUC of 0.96 ± 0.02. The recall, precision, and accuracy for segmenting the NPA were 0.74 ± 0.05, 0.87 ± 0.02, and 0.89 ± 0.02, respectively. The performance of the second DL model was nearly comparable with that of the senior doctors and significantly better than that of the residents. Conclusion: These results indicate that DL models can directly identify and segment the retinal NPA from CFPs of patients with BRVO, which can further guide laser photocoagulation. Further research is needed to identify the NPA of the peripheral retina in BRVO, or in other diseases, such as diabetic retinopathy.

15.
Biomedicines ; 10(6)2022 Jun 03.
Article in English | MEDLINE | ID: mdl-35740336

ABSTRACT

Automated glaucoma detection using deep learning may increase the diagnostic rate of glaucoma to prevent blindness, but generalizable models are currently unavailable despite the use of huge training datasets. This study aims to evaluate the performance of a convolutional neural network (CNN) classifier trained with a limited number of high-quality fundus images in detecting glaucoma and methods to improve its performance across different datasets. A CNN classifier was constructed using EfficientNet B3 and 944 images collected from one medical center (core model) and externally validated using three datasets. The performance of the core model was compared with (1) the integrated model constructed by using all training images from the four datasets and (2) the dataset-specific model built by fine-tuning the core model with training images from the external datasets. The diagnostic accuracy of the core model was 95.62% but dropped to ranges of 52.5-80.0% on the external datasets. Dataset-specific models exhibited superior diagnostic performance on the external datasets compared to other models, with a diagnostic accuracy of 87.50-92.5%. The findings suggest that dataset-specific tuning of the core CNN classifier effectively improves its applicability across different datasets when increasing training images fails to achieve generalization.

16.
J Clin Med ; 11(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35743380

ABSTRACT

PURPOSE: We investigated whether a deep learning algorithm applied to retinal fundoscopic images could predict cerebral white matter hyperintensity (WMH), as represented by a modified Fazekas scale (FS), on brain magnetic resonance imaging (MRI). METHODS: Participants who had undergone brain MRI and health-screening fundus photography at Hallym University Sacred Heart Hospital between 2010 and 2020 were consecutively included. The subjects were divided based on the presence of WMH, then classified into three groups according to the FS grade (0 vs. 1 vs. 2+) using age matching. Two pre-trained convolutional neural networks were fine-tuned and evaluated for prediction performance using 10-fold cross-validation. RESULTS: A total of 3726 fundus photographs from 1892 subjects were included, of which 905 fundus photographs from 462 subjects were included in the age-matched balanced dataset. In predicting the presence of WMH, the mean area under the receiver operating characteristic curve was 0.736 ± 0.030 for DenseNet-201 and 0.724 ± 0.026 for EfficientNet-B7. For the prediction of FS grade, the mean accuracies reached 41.4 ± 5.7% with DenseNet-201 and 39.6 ± 5.6% with EfficientNet-B7. The deep learning models focused on the macula and retinal vasculature to detect an FS of 2+. CONCLUSIONS: Cerebral WMH might be partially predicted by non-invasive fundus photography via deep learning, which may suggest an eye-brain association.
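
The AUROC values reported above have a simple probabilistic reading: the AUROC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney rank statistic). A small Python sketch with illustrative scores, not the study's model outputs:

```python
def auroc(scores, labels):
    """AUROC as the rank-based (Mann-Whitney) probability that a positive
    case receives a higher score than a negative case; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical WMH-presence scores for 6 subjects (1 = WMH present).
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(auroc(scores, labels))  # prints 0.8888888888888888
```

In a k-fold setting such as the one above, this quantity is computed once per fold and summarized as mean ± standard deviation.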

17.
J Clin Med ; 12(1)2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36614953

ABSTRACT

The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarize oculomics studies that used DL models to analyze retinal images; most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex, and cardiovascular disease events, could be consistently and robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.

18.
J Digit Imaging ; 34(4): 948-958, 2021 08.
Article in English | MEDLINE | ID: mdl-34244880

ABSTRACT

The purpose of this study was to detect the presence of retinitis pigmentosa (RP) from color fundus photographs using a deep learning model. A total of 1670 color fundus photographs from the Taiwan inherited retinal degeneration project and National Taiwan University Hospital were acquired and preprocessed. The fundus photographs were labeled RP or normal and divided into training and validation datasets (n = 1284) and a test dataset (n = 386). Three transfer learning models, based on the pre-trained Inception V3, Inception Resnet V2, and Xception deep learning architectures, respectively, were developed to classify the presence of RP on fundus images. The model sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were compared. The results from the best transfer learning model were compared with the reading results of two general ophthalmologists, one retinal specialist, and one specialist in retina and inherited retinal degenerations. A total of 935 RP and 324 normal images were used to train the models. The test dataset consisted of 193 RP and 193 normal images. Among the three transfer learning models evaluated, the Xception model had the best performance, achieving an AUROC of 96.74%. Gradient-weighted class activation mapping indicated that the contrast between the periphery and the macula on fundus photographs was an important feature in detecting RP. False-positive results were mostly obtained in cases of high myopia with a highly tessellated retina, and false-negative results were mostly obtained in cases of unclear media, such as cataract, that decreased the contrast between the peripheral retina and the macula. Our model demonstrated the highest accuracy (96.00%), compared with an average of 81.50% for the four ophthalmologists, and this accuracy was obtained at the same level of sensitivity (95.71%) as the inherited retinal disease specialist. RP is an important disease, but its early and precise diagnosis remains challenging. We developed and evaluated a transfer-learning-based model to detect RP from color fundus photographs. The results of this study validate the utility of deep learning in automating the identification of RP from fundus photographs.


Subject(s)
Deep Learning , Retinal Degeneration , Retinitis Pigmentosa , Artificial Intelligence , Fundus Oculi , Humans , Retinitis Pigmentosa/diagnostic imaging , Retinitis Pigmentosa/genetics
19.
J Pers Med ; 11(5)2021 Apr 21.
Article in English | MEDLINE | ID: mdl-33918998

ABSTRACT

Artificial intelligence (AI)-based diagnostic tools have been accepted in ophthalmology. The use of retinal images, such as fundus photographs, is a promising approach for the development of AI-based diagnostic platforms. Retinal pathologies occur in a broad spectrum of eye diseases, including neovascular and dry age-related macular degeneration, epiretinal membrane, rhegmatogenous retinal detachment, retinitis pigmentosa, macular hole, retinal vein occlusions, and diabetic retinopathy. Here, we report a fundus image-based AI model for the differential diagnosis of retinal diseases. We classified retinal images with three convolutional neural network models: ResNet50, VGG19, and Inception v3. Furthermore, the performance of several dense (fully connected) layers was compared. The prediction accuracy for the diagnosis of nine classes (eight retinal diseases and normal controls) was 87.42% with the ResNet50 model, to which a dense layer with 128 nodes was added. Furthermore, our AI tool augments ophthalmologists' performance in the diagnosis of retinal diseases. These results suggest that the fundus image-based AI tool is applicable to the medical diagnostic process for retinal diseases.
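
The "dense layer with 128 nodes" appended to the CNN backbone can be sketched in isolation: it maps pooled backbone features through a 128-unit ReLU layer to a 9-way softmax over the disease classes. The numpy sketch below uses random weights and the 2048-dimensional feature size typical of ResNet50's pooled output; it illustrates the head's shape, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

def dense_head(features, w1, b1, w2, b2):
    """ReLU dense layer (128 nodes) followed by a 9-way softmax output."""
    h = np.maximum(features @ w1 + b1, 0.0)            # ReLU activation
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)            # softmax probabilities

n_features, n_hidden, n_classes = 2048, 128, 9         # ResNet50 pooled features
w1 = rng.normal(scale=0.01, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.01, size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

batch = rng.normal(size=(4, n_features))               # 4 pooled feature vectors
probs = dense_head(batch, w1, b1, w2, b2)
print(probs.shape)  # prints (4, 9)
```

Comparing "several dense layers", as the abstract describes, amounts to varying `n_hidden` (and the number of such layers) and selecting the configuration with the best validation accuracy.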

20.
JMIR Public Health Surveill ; 7(3): e23538, 2021 03 09.
Article in English | MEDLINE | ID: mdl-33411671

ABSTRACT

BACKGROUND: Diabetic retinopathy can cause blindness even in the absence of symptoms. Although routine eye screening remains the mainstay of diabetic retinopathy management and can prevent 95% of blindness, such screening is unavailable in many low- and middle-income countries, even though these countries account for 75% of the global diabetic retinopathy burden. OBJECTIVE: The aim of this study was to assess the diagnostic accuracy of diabetic retinopathy screening performed by non-ophthalmologists using 2 different digital fundus cameras and to assess the risk factors for the occurrence of diabetic retinopathy. METHODS: This validation study was conducted in 6 peripheral health facilities in Bangladesh from July 2017 to June 2018. A double-blinded diagnostic approach was used to test the accuracy of the diabetic retinopathy screening done by non-ophthalmologists against the gold standard diagnosis by ophthalmology-trained eye consultants. Retinal images were taken using either a desk-based camera or a hand-held camera following pupil dilatation. Test accuracy was assessed using measures of sensitivity, specificity, and positive and negative predictive values. Overall agreement with the gold standard test was reported using the Cohen kappa statistic (κ) and the area under the receiver operating characteristic curve (AUROC). Risk factors for diabetic retinopathy occurrence were assessed using binary logistic regression. RESULTS: In 1455 patients with diabetes, the overall sensitivity of non-ophthalmologists in detecting any form of diabetic retinopathy was 86.6% (483/558, 95% CI 83.5%-89.3%) and the specificity was 78.6% (705/897, 95% CI 75.8%-81.2%). The accuracy of the correct classification was excellent with a desk-based camera (AUROC 0.901, 95% CI 0.88-0.92) and fair with a hand-held camera (AUROC 0.710, 95% CI 0.67-0.74). Of the 3 non-ophthalmologist categories, registered nurses and paramedics showed strong agreement in the diabetic retinopathy assessment, with kappa values of 0.70 and 0.85, respectively, whereas the nonclinical trained staff showed weak agreement (κ=0.35). The odds of having retinopathy increased with the duration of diabetes measured in 5-year intervals (P<.001); the odds in patients with diabetes for 5-10 years (odds ratio [OR] 1.81, 95% CI 1.37-2.41) and for more than 10 years (OR 3.88, 95% CI 2.91-5.15) were greater than in patients with diabetes for less than 5 years. Obesity was found to have a negative association (P=.04) with diabetic retinopathy. CONCLUSIONS: Digital fundus photography is an effective screening tool with acceptable diagnostic accuracy. Our findings suggest that diabetic retinopathy screening can be accurately performed by health care personnel other than eye consultants. People with more than 5 years of diabetes should receive priority in any community-level retinopathy screening program. In a country like Bangladesh, where no diabetic retinopathy screening services exist, the use of hand-held cameras can be considered a cost-effective option for potential system-wide implementation.
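
The agreement statistics above can be derived directly from 2x2 counts. The sketch below computes sensitivity, specificity, and the Cohen kappa from the fractions the abstract reports (483/558 cases detected, 705/897 non-cases ruled out); the odds-ratio helper shows the unadjusted 2x2 form of the OR, whereas the abstract's ORs come from adjusted logistic regression:

```python
def cohen_kappa(tp, fp, fn, tn):
    """Cohen's kappa for agreement between a screener and the gold standard."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)   # chance agreement on "disease"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)    # chance agreement on "no disease"
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 exposure-outcome table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Counts from the abstract: 483/558 retinopathy cases detected,
# 705/897 non-cases correctly ruled out.
tp, fn = 483, 558 - 483
tn, fp = 705, 897 - 705
print(f"sensitivity {tp/(tp+fn):.1%}, specificity {tn/(tn+fp):.1%}, "
      f"kappa {cohen_kappa(tp, fp, fn, tn):.2f}")
# prints: sensitivity 86.6%, specificity 78.6%, kappa 0.63
```

Note that overall pooled agreement (kappa around 0.63 on these counts) can sit between the per-category values the abstract reports (0.35 to 0.85), since it averages over screeners of different skill levels.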


Subject(s)
Diabetic Retinopathy/diagnosis , Fundus Oculi , Mass Screening/methods , Photography , Adult , Bangladesh , Female , Health Facilities , Humans , Male , Middle Aged , Reproducibility of Results