1.
Am J Ophthalmol ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38823673

ABSTRACT

PURPOSE: To investigate the capability of ChatGPT to forecast the conversion from ocular hypertension (OHT) to glaucoma based on the Ocular Hypertension Treatment Study (OHTS). DESIGN: Retrospective case-control study. PARTICIPANTS: A total of 3008 eyes of 1504 subjects from the OHTS were included in the study. METHODS: We selected demographic, clinical, ocular, optic nerve head, and visual field (VF) parameters one year prior to glaucoma development from the OHTS participants. Subsequently, we developed queries by converting the tabular parameters for both eyes of all participants into textual format. We used the ChatGPT application programming interface (API) to automatically perform prompting for all subjects. We then investigated whether ChatGPT could accurately forecast conversion from OHT to glaucoma based on various objective metrics. MAIN OUTCOME MEASURES: Accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and weighted F1 score. RESULTS: ChatGPT-4.0 demonstrated an accuracy of 75%, AUC of 0.67, sensitivity of 56%, specificity of 78%, and weighted F1 score of 0.77 in predicting conversion to glaucoma one year before onset. ChatGPT-3.5 provided an accuracy of 61%, AUC of 0.62, sensitivity of 64%, specificity of 59%, and weighted F1 score of 0.63. CONCLUSIONS: The performance of ChatGPT-4.0 in forecasting the development of glaucoma one year before onset was reasonable and consistently higher than that of ChatGPT-3.5. Large language models (LLMs) hold great promise for augmenting glaucoma research capabilities and enhancing clinical care. Future efforts to create ophthalmology-specific LLMs that leverage multimodal data in combination with active learning may lead to more useful integration with clinical practice and deserve further investigation.
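The query-construction step described above (converting tabular clinical parameters into textual prompts) can be sketched as follows. This is a minimal illustration only: the field names, values, and yes/no framing are invented assumptions, not the prompts actually used in the study.

```python
def build_prompt(params: dict) -> str:
    """Convert tabular clinical parameters into a textual query for an LLM.

    The wording and field names here are illustrative, not the study's own.
    """
    lines = [f"- {name}: {value}" for name, value in params.items()]
    return (
        "Given the following ocular hypertension patient parameters, "
        "answer 'yes' or 'no': will this eye convert to glaucoma "
        "within one year?\n" + "\n".join(lines)
    )

# Example with hypothetical values
prompt = build_prompt({
    "Age": 62,
    "Intraocular pressure (mmHg)": 26,
    "Central corneal thickness (um)": 540,
    "Visual field mean deviation (dB)": -1.2,
})
print(prompt)
```

In practice each such prompt would then be submitted programmatically through the model's API and the yes/no answer compared against the observed outcome.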

3.
Oman J Ophthalmol ; 17(1): 43-46, 2024.
Article in English | MEDLINE | ID: mdl-38524332

ABSTRACT

OBJECTIVES: The objective of this study was to investigate the efficacy of intravitreal anti-vascular endothelial growth factor (anti-VEGF) therapy in the treatment of macular edema secondary to retinal vein occlusion (RVO) in Afghanistan. METHODS: A retrospective analysis was conducted of all RVO cases that underwent intravitreal anti-VEGF injection at the two leading hospitals in Kabul. The main outcome measures were visual acuity and central retinal thickness as determined by optical coherence tomography. Information was also collected on the distance traveled by each patient and the frequency of injections. RESULTS: One hundred and twenty-five eyes of 121 patients (86 males) with RVO were identified as having undergone treatment, with a mean age of 53.1 years (range 20-80). The only agent used was bevacizumab. The mean central retinal thickness decreased from 624.2 ± 24.9 µm at baseline to 257.8 ± 5.7 µm following treatment (P < 0.001). There was a small improvement in visual acuity, from 1.33 logMAR at baseline to 1.13 logMAR following the most recent injection (P = 0.03, paired t-test). The mean distance traveled by patients was 173.9 km (range 2-447 km). CONCLUSION: Despite the challenges of health-care provision in Afghanistan, this review shows that intravitreal bevacizumab has provided an effective treatment for macular edema after RVO.

4.
JMIR Form Res ; 8: e52462, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38517457

ABSTRACT

BACKGROUND: In this paper, we present an automated method for article classification, leveraging the power of large language models (LLMs). OBJECTIVE: The aim of this study is to evaluate the applicability of various LLMs based on the textual content of scientific ophthalmology papers. METHODS: We developed a model based on natural language processing techniques, including advanced LLMs, to process and analyze the textual content of scientific papers. Specifically, we used zero-shot learning LLMs and compared Bidirectional and Auto-Regressive Transformers (BART) and its variants with Bidirectional Encoder Representations from Transformers (BERT) and its variants, such as DistilBERT, SciBERT, PubMedBERT, and BioBERT. To evaluate the LLMs, we compiled a data set (retinal diseases [RenD]) of 1000 ocular disease-related articles, which were expertly annotated by a panel of 6 specialists into 19 distinct categories. In addition to classifying the articles, we analyzed the classified groups to find patterns and trends in the field. RESULTS: The classification results demonstrate the effectiveness of LLMs in categorizing a large number of ophthalmology papers without human intervention. The model achieved a mean accuracy of 0.86 and a mean F1-score of 0.85 on the RenD data set. CONCLUSIONS: The proposed framework achieves notable improvements in both accuracy and efficiency. Its application in the domain of ophthalmology showcases its potential for knowledge organization and retrieval. We also performed a trend analysis that enables researchers and clinicians to easily categorize and retrieve relevant papers, saving time and effort in literature review, information gathering, and the identification of emerging scientific trends within different disciplines. Moreover, the extendibility of the model to other scientific fields broadens its impact in facilitating research and trend analysis across diverse disciplines.
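The two evaluation metrics reported above, mean accuracy and weighted F1, can be sketched in plain Python. The toy labels below are invented stand-ins for the 19 RenD categories, purely for illustration:

```python
from collections import Counter


def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged with weights equal to each
    class's share of the true labels."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (n / total) * f1
    return score


# Hypothetical category labels, not the study's data
y_true = ["retina", "cornea", "glaucoma", "retina", "retina"]
y_pred = ["retina", "cornea", "retina", "retina", "glaucoma"]
print(accuracy(y_true, y_pred))              # 0.6
print(round(weighted_f1(y_true, y_pred), 3))  # 0.6
```

Libraries such as scikit-learn provide equivalent implementations; the point here is only to make the "weighted" averaging explicit.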

5.
Cornea ; 43(5): 664-670, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38391243

ABSTRACT

PURPOSE: The aim of this study was to assess the capabilities of ChatGPT-4.0 and ChatGPT-3.5 for diagnosing corneal eye diseases based on case reports and to compare them with human experts. METHODS: We randomly selected 20 cases of corneal diseases, including corneal infections, dystrophies, and degenerations, from a publicly accessible online database from the University of Iowa. We then input the text of each case description into ChatGPT-4.0 and ChatGPT-3.5 and asked for a provisional diagnosis. We finally evaluated the responses against the correct diagnoses, compared them with the diagnoses made by 3 corneal specialists (human experts), and evaluated interobserver agreements. RESULTS: The provisional diagnosis accuracy of ChatGPT-4.0 was 85% (17 of 20 cases correct), whereas the accuracy of ChatGPT-3.5 was 60% (12 of 20 cases correct). The accuracy of the 3 corneal specialists compared with ChatGPT-4.0 and ChatGPT-3.5 was 100% (20 cases, P = 0.23, P = 0.0033), 90% (18 cases, P = 0.99, P = 0.6), and 90% (18 cases, P = 0.99, P = 0.6), respectively. The interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65% (13 cases), whereas the interobserver agreement between ChatGPT-4.0 and the 3 corneal specialists was 85% (17 cases), 80% (16 cases), and 75% (15 cases), respectively. However, the interobserver agreement between ChatGPT-3.5 and each of the 3 corneal specialists was 60% (12 cases). CONCLUSIONS: The accuracy of ChatGPT-4.0 in diagnosing patients with various corneal conditions was markedly better than that of ChatGPT-3.5 and is promising for potential clinical integration. A balanced approach that combines artificial intelligence-generated insights with clinical expertise will be key to unveiling its full potential in eye care.


Subject(s)
Artificial Intelligence , Corneal Diseases , Humans , Cornea , Corneal Diseases/diagnosis , Databases, Factual
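The interobserver agreement figures reported in this study are simple percent agreement, i.e. the share of cases on which two raters give the same diagnosis. A minimal sketch, with hypothetical diagnosis labels chosen only to reproduce a 13-of-20 agreement:

```python
def percent_agreement(rater_a, rater_b):
    """Fraction of cases on which two raters give the same diagnosis."""
    assert len(rater_a) == len(rater_b), "raters must label the same cases"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)


# Hypothetical labels: agreement on 13 of 20 cases (65%)
gpt4 = ["keratitis"] * 13 + ["dystrophy"] * 7
gpt35 = ["keratitis"] * 13 + ["degeneration"] * 7
print(percent_agreement(gpt4, gpt35))  # 0.65
```

Percent agreement does not correct for chance; a chance-corrected statistic such as Cohen's kappa is often reported alongside it, though the abstract does not mention one.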
6.
Curr Opin Ophthalmol ; 35(3): 238-243, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38277274

ABSTRACT

PURPOSE OF REVIEW: Recent advances in artificial intelligence (AI), robotics, and chatbots have brought these technologies to the forefront of medicine, particularly ophthalmology, where they have been applied in diagnosis, prognosis, surgical operations, and patient-specific care. It is thus both timely and pertinent to assess the existing landscape, recent advances, and trajectory of AI, AI-enabled robots, and chatbots in ophthalmology. RECENT FINDINGS: Recent developments have integrated AI-enabled robotics with diagnosis and surgical procedures in ophthalmology. More recently, large language models (LLMs) like ChatGPT have shown promise in augmenting research capabilities and diagnosing ophthalmic diseases. These developments may portend a new era of doctor-patient-machine collaboration. SUMMARY: Ophthalmology is undergoing a revolutionary change in research, clinical practice, and surgical interventions. Ophthalmic AI-enabled robotics and chatbot technologies based on LLMs are converging to create a new era of digital ophthalmology. Collectively, these developments portend a future in which conventional ophthalmic knowledge will be seamlessly integrated with AI to improve the patient experience and enhance therapeutic outcomes.


Subject(s)
Ophthalmology , Robotics , Humans , Artificial Intelligence
7.
medRxiv ; 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37781591

ABSTRACT

Purpose: To evaluate the efficiency of large language models (LLMs), including ChatGPT, in assisting with the diagnosis of neuro-ophthalmic diseases based on case reports. Design: Prospective study. Subjects or Participants: We selected 22 different case reports of neuro-ophthalmic diseases from a publicly available online database. These cases included a wide range of chronic and acute diseases commonly seen by neuro-ophthalmic subspecialists. Methods: We inserted the text from each case as a new prompt into both ChatGPT v3.5 and ChatGPT Plus v4.0 and asked for the most probable diagnosis. We then presented the same information to two neuro-ophthalmologists, recorded their diagnoses, and compared them to the responses from both versions of ChatGPT. Main Outcome Measures: Diagnostic accuracy, measured as the number of correctly diagnosed cases. Results: ChatGPT v3.5, ChatGPT Plus v4.0, and the two neuro-ophthalmologists were correct in 13 (59%), 18 (82%), 19 (86%), and 19 (86%) of 22 cases, respectively. The agreement between the various diagnostic sources was as follows: ChatGPT v3.5 and ChatGPT Plus v4.0, 13 (59%); ChatGPT v3.5 and the first neuro-ophthalmologist, 12 (55%); ChatGPT v3.5 and the second neuro-ophthalmologist, 12 (55%); ChatGPT Plus v4.0 and the first neuro-ophthalmologist, 17 (77%); ChatGPT Plus v4.0 and the second neuro-ophthalmologist, 16 (73%); and the first and second neuro-ophthalmologists, 17 (77%). Conclusions: The accuracy of ChatGPT v3.5 and ChatGPT Plus v4.0 in diagnosing patients with neuro-ophthalmic diseases was 59% and 82%, respectively. With further development, ChatGPT Plus v4.0 may have the potential to be used in clinical care settings to assist clinicians in providing quick, accurate diagnoses. The applicability of LLMs like ChatGPT in clinical settings that lack access to subspecialty-trained neuro-ophthalmologists deserves further research.

8.
Ophthalmol Ther ; 12(6): 3121-3132, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37707707

ABSTRACT

INTRODUCTION: The purpose of this study was to evaluate the capability of large language models such as Chat Generative Pretrained Transformer (ChatGPT) to diagnose glaucoma from specific clinical case descriptions, in comparison with the performance of senior ophthalmology resident trainees. METHODS: We selected 11 cases of primary and secondary glaucoma from a publicly accessible online database of case reports. Four cases had primary glaucoma (open-angle, juvenile, normal-tension, and angle-closure glaucoma), while seven cases had secondary glaucoma (pseudoexfoliation, pigment dispersion glaucoma, glaucomatocyclitic crisis, aphakic, neovascular, aqueous misdirection, and inflammatory glaucoma). We input the text of each case into ChatGPT and asked for provisional and differential diagnoses. We then presented the details of the 11 cases to three senior ophthalmology residents and recorded their provisional and differential diagnoses. We finally evaluated the responses against the correct diagnoses and evaluated agreements. RESULTS: The provisional diagnosis based on ChatGPT was correct in 8 of 11 (72.7%) cases, and the three ophthalmology residents were correct in 6 (54.5%), 8 (72.7%), and 8 (72.7%) cases, respectively. The agreement between ChatGPT and the first, second, and third ophthalmology residents was 9, 7, and 7 cases, respectively. CONCLUSIONS: The accuracy of ChatGPT in diagnosing patients with primary and secondary glaucoma, using specific case examples, was similar to or better than that of senior ophthalmology residents. With further development, ChatGPT may have the potential to be used in clinical care settings, such as primary care offices, for triage, and in eye care practices to provide objective and quick diagnoses of patients with glaucoma.

9.
medRxiv ; 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37720035

ABSTRACT

Introduction: To assess the capabilities of ChatGPT-4.0 and ChatGPT-3.5 for diagnosing corneal eye diseases based on case reports and compare them with human experts. Methods: We randomly selected 20 cases of corneal diseases, including corneal infections, dystrophies, degenerations, and injuries, from a publicly accessible online database from the University of Iowa. We then input the text of each case description into ChatGPT-4.0 and ChatGPT-3.5 and asked for a provisional diagnosis. We finally evaluated the responses against the correct diagnoses, compared them with the diagnoses of three cornea specialists (human experts), and evaluated interobserver agreements. Results: The provisional diagnosis accuracy of ChatGPT-4.0 was 85% (17 of 20 cases correct), while the accuracy of ChatGPT-3.5 was 60% (12 of 20 cases correct). The accuracy of the three cornea specialists was 100% (20 cases), 90% (18 cases), and 90% (18 cases), respectively. The interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65% (13 cases), while the interobserver agreement between ChatGPT-4.0 and the three cornea specialists was 85% (17 cases), 80% (16 cases), and 75% (15 cases), respectively. However, the interobserver agreement between ChatGPT-3.5 and each of the three cornea specialists was 60% (12 cases). Conclusions: The accuracy of ChatGPT-4.0 in diagnosing patients with various corneal conditions was markedly better than that of ChatGPT-3.5 and is promising for potential clinical integration.

10.
Disaster Med Public Health Prep ; : 1-7, 2021 May 05.
Article in English | MEDLINE | ID: mdl-33947492

ABSTRACT

OBJECTIVE: Community responses are important for the management of early-phase outbreaks of coronavirus disease 2019 (COVID-19). Perceived susceptibility and severity are considered key elements that motivate people to adopt nonpharmaceutical interventions. This study aimed to (i) explore perceived susceptibility and severity of the COVID-19 pandemic, (ii) examine the practice of nonpharmaceutical interventions, and (iii) assess the potential association of perceived COVID-19 susceptibility and severity with the practice of nonpharmaceutical interventions among people living in Afghanistan. METHODS: A cross-sectional design was used, with online surveys disseminated from April to May 2020. Convenience sampling was used to recruit the participants. Previously developed scales were used to assess the participants' demographic information, perceived risk of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, and perceived severity of COVID-19. Multivariate analyses were conducted to assess the potential association of perceived COVID-19 susceptibility and severity with the practice of nonpharmaceutical interventions. RESULTS: The Internet was the main source of COVID-19 information among participants. While 45.8% of the participants believed it was "very unlikely" for them to get infected with COVID-19, 76.7% perceived COVID-19 as a severe disease. Similarly, 37.5% believed their chance of being cured if infected with COVID-19 was "unlikely/very unlikely." The majority of participants (95.6%) perceived their health to be in "good" or "very good" status. Overall, 74.2% reported that they had stopped visiting public places, 49.7% had started using gloves, and 70.4% had started wearing a mask. Participants who believed they had a low probability of survival if infected with COVID-19 were more likely to wear masks and practice hand washing. CONCLUSIONS: It appears that communities' psychological and behavioral responses were affected by the early phase of the COVID-19 pandemic in Afghanistan, especially among young Internet users. The findings gained from a timely behavioral assessment of the community may be useful for developing interventions and risk communication strategies in epidemics within and beyond COVID-19.

11.
Front Reprod Health ; 3: 783271, 2021.
Article in English | MEDLINE | ID: mdl-36303966

ABSTRACT

Objectives: The present study aimed to investigate potential delays in healthcare seeking and diagnosis among women with cervical cancer (CC) in Afghanistan. Methods: Clinical records of three hospitals in Kabul were searched for CC cases, and the women identified were interviewed by a trained physician using a semi-structured questionnaire. The main outcomes were the prevalence of potential delays over 90 days (1) from symptom onset to healthcare seeking (patient delay) and (2) from first healthcare visit to CC diagnosis (healthcare delay). Information was also collected on type and stage of CC, diagnostic test utilized, family history of CC, signs and symptoms, treatment type, and potential reasons for delaying healthcare seeking. Results: Thirty-one women with CC were identified; however, only 11 continued their treatment in the study hospitals or were reachable by telephone and accepted the interview. The mean age was 51 ± 14 years, and only 18.2% had a previous history of seeking medical care. Patient delay was seen in 90.9% of the women (95% CI: 58.7-99.8), with a median of 304 ± 183 days. In contrast, healthcare delay was found in 45.4% (95% CI: 16.7-76.6), with a median of 61 ± 152 days. The main reasons for patient delay were unawareness of the seriousness of the symptoms (70.0%) and unwillingness to consult a healthcare professional (30.0%). None of the women had ever undergone cervical screening or heard of HPV vaccination. Conclusions: Given the global effort to provide quality health care to all CC patients, Afghanistan needs interventions to reduce delays in the diagnosis of this cancer, for instance by improving women's awareness of gynecological signs and symptoms.
