1.
Indian J Radiol Imaging ; 34(2): 276-282, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38549897

ABSTRACT

BACKGROUND: The field of radiology relies on accurate interpretation of medical images for effective diagnosis and patient care. Recent advancements in artificial intelligence (AI) and natural language processing have sparked interest in exploring the potential of AI models to assist radiologists. However, limited research has assessed the performance of AI models in radiology case interpretation, particularly in comparison to human experts. OBJECTIVE: This study aimed to evaluate the performance of ChatGPT, Google Bard, and Bing in solving radiology case vignettes (Fellowship of the Royal College of Radiologists 2A [FRCR2A] examination style questions) by comparing their responses to those provided by two radiology residents. METHODS: A total of 120 multiple-choice questions based on radiology case vignettes were formulated according to the pattern of the FRCR2A examination. The questions were presented to ChatGPT, Google Bard, and Bing, and two residents took the same examination within 3 hours. The responses generated by the AI models were collected and compared to the answer keys, and the explanations offered in support of each answer were rated by two radiologists. A cutoff of 60% was set as the passing score. RESULTS: The two residents (63.33% and 57.5%) outperformed the three AI models: Bard (44.17%), Bing (53.33%), and ChatGPT (45%), but only one resident passed the examination. The response patterns among the five respondents were significantly different (p = 0.0117). In addition, agreement among the generative AI models was significant (intraclass correlation coefficient [ICC] = 0.628), whereas there was no agreement between the residents (kappa = -0.376). The explanations given by the generative AI models in support of their answers were 44.72% accurate. CONCLUSION: Humans exhibited superior accuracy compared to the AI models, showing a stronger comprehension of the subject matter. None of the three AI models included in the study achieved the minimum percentage needed to pass the FRCR2A examination. However, the generative AI models showed significant agreement in their answers, whereas the residents exhibited low agreement, highlighting a lack of consistency in the residents' responses.
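The agreement analysis summarized above (per-respondent accuracy, a chance-corrected kappa statistic, and an intraclass correlation coefficient) can be illustrated with a short sketch. The Python code below is a minimal, hypothetical example assuming multiple-choice answers labeled A-D; the function names and the example answer strings are illustrative and are not taken from the study's data.

from collections import Counter

def percent_correct(answers, key):
    # Share of answers matching the answer key, as a percentage.
    return 100.0 * sum(a == k for a, k in zip(answers, key)) / len(key)

def cohens_kappa(rater_a, rater_b, labels="ABCD"):
    # Chance-corrected agreement between two respondents on categorical answers.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement from each respondent's marginal answer frequencies.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Illustrative answers for 5 of the 120 questions (hypothetical data).
key = list("ABCDA")
resident = list("ABCDB")
model = list("ACCDA")
print(percent_correct(resident, key))  # 80.0
print(percent_correct(model, key))     # 80.0
print(cohens_kappa(resident, model))   # 0.5

Computing the ICC across all five respondents would normally be done with a dedicated statistics library and is omitted here to keep the sketch self-contained.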

2.
J Med Case Rep ; 7: 288, 2013 Dec 30.
Article in English | MEDLINE | ID: mdl-24377770

ABSTRACT

INTRODUCTION: The objective of this report is to describe the clinical findings, microbiological findings, and management of a case of fungal scleritis following cataract surgery that mimicked surgically induced necrotizing scleritis. CASE PRESENTATION: A 72-year-old Asian (Indian) man presented with scleritis following cataract surgery at another facility. He had been treated elsewhere for suspected scleritis, primarily with steroids followed by empiric antibiotic and antifungal agents. At our institute he underwent a complete microbiological workup and a scleral patch graft. The scleral scraping revealed fungal filaments. He was treated postoperatively with topical and systemic antifungal agents along with topical cyclosporine. The follow-up examination at 5 months revealed that the scleral patch graft was successful in maintaining the integrity of his globe and restoring partial vision. CONCLUSIONS: Fungal scleritis may mimic surgically induced necrotizing scleritis. Early diagnosis and prompt management can prevent progression of the disease and further devastating complications.
