Results 1 - 2 of 2
1.
J Clin Densitom ; 27(2): 101480, 2024.
Article in English | MEDLINE | ID: mdl-38401238

ABSTRACT

BACKGROUND: Artificial intelligence (AI) large language models (LLMs) such as ChatGPT have demonstrated the ability to pass standardized exams. These models are not trained for a specific task, but instead trained to predict sequences of text from large corpora of documents sourced from the internet. It has been shown that even models trained on this general task can pass exams in a variety of domain-specific fields, including the United States Medical Licensing Examination. We asked whether large language models would perform as well on much narrower subdomain tests designed for medical specialists. Furthermore, we wanted to better understand how progressive generations of GPT (generative pre-trained transformer) models may be evolving in the completeness and sophistication of their responses even while their training remains general. In this study, we evaluated the performance of two versions of GPT (GPT-3 and GPT-4) on their ability to pass the certification exam given to physicians who work as osteoporosis specialists and become certified clinical densitometrists (CCDs). The CCD exam has a possible score range of 150 to 400; a score of 300 is required to pass.

METHODS: A 100-question multiple-choice practice exam was obtained from a third-party exam preparation website that mimics the accredited certification tests given by the ISCD (International Society for Clinical Densitometry). The exam was administered to two versions of GPT, the free version (GPT Playground) and ChatGPT+, which are based on GPT-3 and GPT-4, respectively (OpenAI, San Francisco, CA). The systems were prompted with the exam questions verbatim. If the response was purely textual and did not specify which of the multiple-choice answers to select, the authors matched the text to the closest answer. Each exam was graded, and an estimated ISCD score was provided by the exam website. In addition, each response was evaluated by a rheumatologist who is a CCD and rated for accuracy on a 5-level scale. The two GPT versions were compared in terms of response accuracy and length.

RESULTS: The average response length was 11.6 ± 19 words for GPT-3 and 50.0 ± 43.6 words for GPT-4. GPT-3 answered 62 questions correctly, resulting in a failing ISCD score of 289, whereas GPT-4 answered 82 questions correctly, with a passing score of 342. GPT-3 scored highest on the "Overview of Low Bone Mass and Osteoporosis" category (72% correct), while GPT-4 scored well above 80% accuracy on all categories except "Imaging Technology in Bone Health" (65% correct). Regarding subjective accuracy, GPT-3 answered 23 questions with nonsensical or totally wrong responses, while GPT-4 had no responses in that category.

CONCLUSION: Had this been an actual certification exam, GPT-4 would now have a CCD suffix to its name, even after being trained only on general internet knowledge. Clearly, more goes into physician training than can be captured in this exam. However, GPT algorithms may prove to be valuable physician aids in the diagnosis and monitoring of osteoporosis and other diseases.


Subject(s)
Artificial Intelligence , Certification , Humans , Osteoporosis/diagnosis , Clinical Competence , Educational Measurement/methods , United States
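The evaluation protocol described in this abstract (prompting each model with exam questions verbatim, matching free-text replies to the closest choice, and grading against an answer key) can be sketched in code. The snippet below is a minimal illustration, assuming the OpenAI Python client and a chat model name such as "gpt-4"; the study itself used the GPT Playground and ChatGPT+ web interfaces, and the letter-matching grader here is a hypothetical simplification of the authors' manual matching.

```python
# Minimal sketch (assumption): automating a multiple-choice exam run against
# an OpenAI chat model. The study used the web interfaces, not the API, and
# matched purely textual replies to the closest answer by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_question(stem: str, choices: dict[str, str], model: str = "gpt-4") -> str:
    """Send one exam question verbatim and return the model's raw reply."""
    prompt = stem + "\n" + "\n".join(f"{letter}. {text}" for letter, text in choices.items())
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

def grade(replies: list[str], answer_key: list[str]) -> int:
    """Crude stand-in for manual matching: count a reply as correct only if it
    starts with the keyed choice letter."""
    return sum(1 for reply, key in zip(replies, answer_key)
               if reply.strip().upper().startswith(key.upper()))
```

A grader this naive would undercount correct answers for purely textual responses, which is exactly why the authors matched such responses to the closest choice manually.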
2.
Med Phys ; 49(4): 2663-2671, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35106767

ABSTRACT

BACKGROUND: Late-stage breast cancer rates in the Pacific, where mammography services are limited, are exceedingly high: Marshall Islands (61%), Palau (94%), and Samoa (79%). Because of the limited medical resources in these areas, an alternative, accessible technology is needed. The iBreast Exam (iBE) is a point-of-care electronic palpation device with a reported sensitivity of 86%. However, little is known about the performance and acceptability of this device for women in the Pacific.

METHODS: A total of 39 women (ages 42-73 years) were recruited in Guam, 19 of whom had a mammogram requiring biopsy (Breast Imaging-Reporting and Data System [BI-RADS] category 4 or above) and 20 of whom had a negative screening mammogram before the study visit. Participants received an iBE exam and completed a 26-item breast health questionnaire to evaluate the iBE. In addition, the performance characteristics of the iBE were tested using gelatin breast phantoms in terms of tumor size, tumor depth, and overall breast stiffness.

RESULTS: The iBE had a sensitivity of 20% (two true positives to eight false negatives) and a specificity of 92% (24 false positives to 278 true negatives) when analyzed by tumor location per quadrant. Agreement with mammography was also generally poor, with a Cohen's kappa of 0.068. The phantom experiments showed that the iBE can detect tumors as deep as 2.5 cm, but only if the lesion is greater than 8 mm in diameter. However, the iBE did demonstrate acceptability: 67% of the women reported high trust in the iBE as an early detection device.

CONCLUSIONS: The iBE had generally poor sensitivity and specificity when tested in a clinical setting, which precludes its use as a screening tool.

IMPACT: This study demonstrates the need for an alternative to electronic palpation as a screening method in lower-middle-income areas.


Subject(s)
Breast Neoplasms , Adult , Aged , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Early Detection of Cancer/methods , Electronics , Female , Humans , Mammography , Middle Aged , Palpation , Point-of-Care Systems , Treatment Outcome
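The headline performance figures in this abstract follow directly from the per-quadrant confusion counts it reports (2 true positives, 8 false negatives, 24 false positives, 278 true negatives). As a worked check, the short script below, a minimal sketch in plain Python, recovers the stated 20% sensitivity, 92% specificity, and Cohen's kappa of about 0.068.

```python
# Worked check of the reported iBE figures from the per-quadrant confusion
# counts given in the abstract: TP=2, FN=8, FP=24, TN=278.
tp, fn, fp, tn = 2, 8, 24, 278
total = tp + fn + fp + tn

sensitivity = tp / (tp + fn)   # 2/10    = 0.20 -> 20%
specificity = tn / (tn + fp)   # 278/302 ~ 0.92 -> 92%

# Cohen's kappa: chance-corrected agreement between the iBE and mammography.
p_observed = (tp + tn) / total
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)  # ~ 0.068

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  kappa={kappa:.3f}")
```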