Results 1 - 2 of 2
1.
J Clin Densitom; 27(2): 101480, 2024.
Article in English | MEDLINE | ID: mdl-38401238

ABSTRACT

BACKGROUND: Artificial intelligence (AI) large language models (LLMs) such as ChatGPT have demonstrated the ability to pass standardized exams. These models are not trained for a specific task; instead, they are trained to predict sequences of text from large corpora of documents sourced from the internet. Even models trained on this general task have been shown to pass exams in a variety of domain-specific fields, including the United States Medical Licensing Examination. We asked whether large language models would perform as well on a much narrower subdomain test designed for medical specialists. Furthermore, we wanted to better understand how successive generations of GPT (generative pre-trained transformer) models are evolving in the completeness and sophistication of their responses even while their training remains general. In this study, we evaluated two versions of GPT (GPT-3 and GPT-4) on their ability to pass the certification exam given to physicians who work as osteoporosis specialists and become certified clinical densitometrists (CCDs). The CCD exam is scored on a scale of 150 to 400, with 300 required to pass.

METHODS: A 100-question multiple-choice practice exam was obtained from a third-party exam preparation website that mimics the accredited certification tests given by the ISCD (International Society for Clinical Densitometry). The exam was administered to two versions of GPT, the free version (GPT Playground) and ChatGPT+, which are based on GPT-3 and GPT-4, respectively (OpenAI, San Francisco, CA). The systems were prompted with the exam questions verbatim. If a response was purely textual and did not specify which of the multiple-choice answers to select, the authors matched the text to the closest answer. Each exam was graded and an estimated ISCD score was provided by the exam website. In addition, each response was evaluated by a CCD-certified rheumatologist and rated for accuracy on a 5-level scale. The two GPT versions were compared in terms of response accuracy and length.

RESULTS: The average response length was 11.6 ± 19 words for GPT-3 and 50.0 ± 43.6 words for GPT-4. GPT-3 answered 62 questions correctly, resulting in a failing ISCD score of 289, whereas GPT-4 answered 82 questions correctly, earning a passing score of 342. GPT-3 scored highest on the "Overview of Low Bone Mass and Osteoporosis" category (72% correct), while GPT-4 scored well above 80% accuracy on all categories except "Imaging Technology in Bone Health" (65% correct). Regarding subjective accuracy, GPT-3 answered 23 questions with nonsensical or totally wrong responses, whereas GPT-4 had no responses in that category.

CONCLUSION: If this had been an actual certification exam, GPT-4 would now have the CCD suffix after its name, despite having been trained on general internet knowledge. Clearly, more goes into physician training than can be captured by this exam. However, GPT algorithms may prove to be valuable physician aids in the diagnosis and monitoring of osteoporosis and other diseases.
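For readers who want a concrete picture of the procedure described in METHODS, the sketch below shows, in Python with the OpenAI client, how a multiple-choice exam could be presented to a GPT model verbatim and the correct answers tallied. This is not the authors' code: the question records, the regex that pulls an option letter out of a free-text reply, and the model identifiers are illustrative assumptions, and the ISCD scaled score (which the authors obtained from the exam website) is not reproduced here.

```python
# Illustrative sketch only; the 100-item practice exam is proprietary, so the
# question below is a placeholder and only raw correct-answer counts are tallied.
import re

from openai import OpenAI  # OpenAI Python client (v1.x)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

questions = [
    {
        "stem": "Which skeletal site is preferred for serial BMD monitoring?",  # hypothetical item
        "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
        "answer": "B",
    },
    # ... remaining items of the practice exam
]


def ask(model: str, question: dict) -> str:
    """Present one multiple-choice question verbatim and return the model's reply."""
    prompt = question["stem"] + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in question["options"].items()
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def grade(model: str) -> int:
    """Count correct answers. The authors matched purely textual replies to the
    closest option by hand; here a simple regex looks for a standalone letter."""
    correct = 0
    for question in questions:
        reply = ask(model, question)
        match = re.search(r"\b([A-D])\b", reply.upper())
        if match and match.group(1) == question["answer"]:
            correct += 1
    return correct


if __name__ == "__main__":
    for model in ("gpt-3.5-turbo", "gpt-4"):  # stand-ins for the two versions tested
        print(f"{model}: {grade(model)} / {len(questions)} correct")
```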


Subject(s)
Artificial Intelligence , Certification , Humans , Osteoporosis/diagnosis , Clinical Competence , Educational Measurement/methods , United States
2.
Am J Med Sci; 345(6): 491-3, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23221514

ABSTRACT

A case of anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis with an atypical finding of transient increased intracranial pressure is reported. Anti-NMDAR encephalitis is a novel, underrecognized and treatable condition that is increasingly identified as a cause of encephalitis in young adults. Management of these patients requires a multidisciplinary approach involving neurologists, internists, nursing and rehabilitation staff. It is important for internists to recognize this condition and consider it in the differential diagnosis of encephalopathy. Internists also need to be familiar with the clinical manifestations and the treatment of the disease, as they have an important role in the care of these patients during their prolonged hospital stay. Increased intracranial pressure is an atypical and underrecognized finding that has previously been noted only in a review of this disorder. It may present a diagnostic or management challenge in patients with anti-NMDAR encephalitis.


Subject(s)
Anti-N-Methyl-D-Aspartate Receptor Encephalitis/diagnosis , Anti-N-Methyl-D-Aspartate Receptor Encephalitis/therapy , Intracranial Hypertension/diagnosis , Intracranial Hypertension/therapy , Adolescent , Adrenal Cortex Hormones/therapeutic use , Anti-N-Methyl-D-Aspartate Receptor Encephalitis/immunology , Antibodies/blood , Antibodies/cerebrospinal fluid , Diagnosis, Differential , Humans , Immunoglobulins, Intravenous/therapeutic use , Immunotherapy , Intracranial Hypertension/immunology , Male , Plasmapheresis , Receptors, N-Methyl-D-Aspartate/immunology , Treatment Outcome