Results 1 - 2 of 2

1.
Ophthalmol Ther; 13(6): 1703-1722, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38658491

ABSTRACT

INTRODUCTION: This study aims to evaluate the accuracy of 12 different intraocular lens (IOL) power calculation formulas for post-radial keratotomy (RK) eyes. The investigation utilizes recent advances in topography/tomography devices and artificial intelligence (AI)-based calculators, comparing the results to those reported in the current literature to assess the efficacy and predictability of IOL calculations for this patient group.

METHODS: In this retrospective study, 37 eyes from 24 individuals with a history of RK who underwent cataract surgery at Hoopes Vision Center were analyzed. Biometry and corneal topography measurements were taken preoperatively. Subjective refraction was obtained 6 months postoperatively. Twelve different IOL power calculations were used, including the American Society of Cataract and Refractive Surgery (ASCRS) post-RK online formula and the Barrett True K, Double K modified-Holladay 1, Haigis-L, Panacea, Camellin-Calossi, Emmetropia Verifying Optical (EVO) 2.0, Kane, and Prediction Enhanced by Artificial Intelligence and output Linearization-Debellemanière, Gatinel, and Saad (PEARL-DGS) formulas. Outcome measures included median absolute error (MedAE), mean absolute error (MAE), arithmetic mean error (AME), and the percentage of eyes achieving refractive prediction errors (RPE) within ± 0.50 D, ± 0.75 D, and ± 1.00 D for each formula. A literature search covering the relevant formulas was also performed by two independent reviewers.

RESULTS: Overall, the best-performing IOL power calculations were the Camellin-Calossi (MedAE = 0.515 D), the ASCRS average (MedAE = 0.535 D), and the AI-based EVO (MedAE = 0.545 D) and Kane (MedAE = 0.555 D) formulas. The EVO and Kane formulas, along with the ASCRS calculation, performed similarly, with 48.65% of eyes within ± 0.50 D of the target, while the Equivalent Keratometry Reading (EKR) 65 Holladay formula achieved the greatest percentage of eyes within ± 0.25 D of the target (35.14%). Additionally, the EVO 2.0 formula placed 64.86% of eyes within the ± 0.75 D RPE category, while the Kane formula placed 75.68% of eyes within the ± 1.00 D RPE category. There was no significant difference in MAE between the established and newer-generation formulas (P > 0.05). The Panacea formula consistently underperformed compared with the ASCRS average and the other high-performing formulas (P < 0.05).

CONCLUSION: This study demonstrates the potential of AI-based IOL calculation formulas, such as EVO 2.0 and Kane, to improve the accuracy of IOL power calculation in post-RK eyes undergoing cataract surgery. Established calculations, such as the ASCRS and Barrett True K formulas, remain effective options, while under-utilized formulas, such as the EKR65 and Camellin-Calossi formulas, show promise, emphasizing the need for further research and larger studies to validate and enhance IOL power calculation for this patient group.
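
The outcome measures above are simple functions of the per-eye refractive prediction error (achieved minus predicted postoperative refraction). A minimal sketch, using hypothetical RPE values rather than the study's data, of how these metrics are typically computed:

```python
# A minimal sketch (not the study's code) of the reported accuracy metrics,
# computed from per-eye refractive prediction errors (RPE = achieved minus
# predicted postoperative refraction). The RPE values are hypothetical.
import numpy as np

rpe = np.array([0.40, -0.62, 0.10, 0.85, -0.30, 1.10, -0.05])  # diopters, hypothetical

ame = rpe.mean()                # arithmetic mean error: signed bias of the formula
mae = np.abs(rpe).mean()        # mean absolute error
medae = np.median(np.abs(rpe))  # median absolute error (robust to outliers)

print(f"AME = {ame:+.3f} D, MAE = {mae:.3f} D, MedAE = {medae:.3f} D")

# Percentage of eyes within each absolute-error band, as in the RESULTS section
for band in (0.25, 0.50, 0.75, 1.00):
    pct = 100 * np.mean(np.abs(rpe) <= band)
    print(f"within ±{band:.2f} D: {pct:.1f}%")
```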

2.
Cureus; 15(6): e40822, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37485215

ABSTRACT

Importance: Chat Generative Pre-Trained Transformer (ChatGPT) has shown promising performance in various fields, including medicine, business, and law, but its accuracy on specialty-specific medical questions, particularly in ophthalmology, is still uncertain.

Purpose: This study evaluates the performance of two ChatGPT models (GPT-3.5 and GPT-4) and human professionals in answering ophthalmology questions from the StatPearls question bank, assessing their outcomes and providing insights into the integration of artificial intelligence (AI) technology in ophthalmology.

Methods: ChatGPT's performance was evaluated using 467 ophthalmology questions from the StatPearls question bank. These questions were stratified into 11 subcategories, four difficulty levels, and three generalized anatomical categories. The answer accuracy of GPT-3.5, GPT-4, and human participants was assessed. Statistical analysis comprised the Kolmogorov-Smirnov test for normality, one-way analysis of variance (ANOVA) for the statistical significance of GPT-3.5 versus GPT-4 versus human performance, and repeated unpaired two-sample t-tests for pairwise comparisons of group means.

Results: GPT-4 outperformed both GPT-3.5 and human professionals on the StatPearls ophthalmology questions, except in the "Lens and Cataract" category. The performance differences were statistically significant overall, with GPT-4 achieving higher accuracy (73.2%) than GPT-3.5 (55.5%, p < 0.001) and humans (58.3%, p < 0.001). Performance varied across difficulty levels (rated one to four), but GPT-4 consistently performed better than both GPT-3.5 and humans on level-two, -three, and -four questions. On level-four questions, human performance significantly exceeded that of GPT-3.5 (p = 0.008).

Conclusion: The study's findings demonstrate GPT-4's significant performance improvements over GPT-3.5 and human professionals on StatPearls ophthalmology questions. Our results highlight the potential of advanced conversational AI systems to be utilized as important tools in the education and practice of medicine.
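
For readers who want to reproduce this style of comparison, the sketch below runs the statistical pipeline named in the Methods on simulated accuracy data; only the three group means are taken from the reported overall accuracies, and everything else is a labeled assumption.

```python
# A minimal sketch of the analysis described in Methods: Kolmogorov-Smirnov
# normality checks, a one-way ANOVA across GPT-3.5, GPT-4, and human scores,
# and pairwise unpaired two-sample t-tests. All data are simulated; only the
# group means (0.555, 0.732, 0.583) come from the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gpt35 = rng.normal(0.555, 0.08, 60)  # hypothetical per-subcategory accuracies
gpt4  = rng.normal(0.732, 0.08, 60)
human = rng.normal(0.583, 0.08, 60)

# Normality: KS test of each sample against a fitted normal distribution
for name, x in (("GPT-3.5", gpt35), ("GPT-4", gpt4), ("human", human)):
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name}: KS p = {ks.pvalue:.3f}")

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(gpt35, gpt4, human)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.2g}")

# Pairwise unpaired two-sample t-tests
for (na, a), (nb, b) in ((("GPT-4", gpt4), ("GPT-3.5", gpt35)),
                         (("GPT-4", gpt4), ("human", human)),
                         (("human", human), ("GPT-3.5", gpt35))):
    t, p = stats.ttest_ind(a, b)
    print(f"{na} vs {nb}: t = {t:.2f}, p = {p:.2g}")
```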
