Results 1 - 2 of 2
1.
J Clin Densitom; 27(2): 101480, 2024.
Article in English | MEDLINE | ID: mdl-38401238

ABSTRACT

BACKGROUND: Artificial intelligence (AI) large language models (LLMs) such as ChatGPT have demonstrated the ability to pass standardized exams. These models are not trained for a specific task, but instead trained to predict sequences of text from large corpora of documents sourced from the internet. It has been shown that even models trained on this general task can pass exams in a variety of domain-specific fields, including the United States Medical Licensing Examination. We asked whether large language models would perform as well on a much narrower subdomain test designed for medical specialists. Furthermore, we wanted to better understand how progressive generations of GPT (generative pre-trained transformer) models may be evolving in the completeness and sophistication of their responses even while their training remains general. In this study, we evaluated the ability of two versions of GPT (GPT-3 and GPT-4) to pass the certification exam given to physicians to work as osteoporosis specialists and become certified clinical densitometrists. The CCD exam has a possible score range of 150 to 400; a score of 300 is required to pass.

METHODS: A 100-question multiple-choice practice exam was obtained from a third-party exam preparation website that mimics the accredited certification tests given by the ISCD (International Society for Clinical Densitometry). The exam was administered to two versions of GPT, the free version (GPT Playground) and ChatGPT+, which are based on GPT-3 and GPT-4, respectively (OpenAI, San Francisco, CA). The systems were prompted with the exam questions verbatim. If a response was purely textual and did not specify which of the multiple-choice answers to select, the authors matched the text to the closest answer. Each exam was graded, and an estimated ISCD score was provided by the exam website. In addition, each response was evaluated by a CCD-certified rheumatologist and ranked for accuracy on a 5-level scale. The two GPT versions were compared in terms of response accuracy and length.

RESULTS: The average response length was 11.6 ± 19 words for GPT-3 and 50.0 ± 43.6 words for GPT-4. GPT-3 answered 62 questions correctly, resulting in a failing ISCD score of 289, whereas GPT-4 answered 82 questions correctly with a passing score of 342. GPT-3 scored highest on the "Overview of Low Bone Mass and Osteoporosis" category (72% correct), while GPT-4 scored well above 80% accuracy on all categories except "Imaging Technology in Bone Health" (65% correct). Regarding subjective accuracy, GPT-3 answered 23 questions with nonsensical or totally wrong responses, while GPT-4 had no responses in that category.

CONCLUSION: If this had been an actual certification exam, GPT-4 would now have a CCD suffix to its name, despite being trained only on general internet knowledge. Clearly, more goes into physician training than can be captured in this exam. However, GPT algorithms may prove to be valuable physician aids in the diagnosis and monitoring of osteoporosis and other diseases.
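As an illustration of the kind of evaluation pipeline the METHODS section describes, the sketch below shows one way to prompt a GPT model with multiple-choice items via the OpenAI Python SDK and tally accuracy. The exam items, prompt wording, and answer-matching heuristic are hypothetical stand-ins; the study administered its practice-exam questions verbatim and matched free-text responses to the closest answer manually.

```python
# Illustrative sketch only (not the authors' code): administer multiple-choice
# questions to a GPT model and count correct answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical exam format: each item has a stem, lettered options, and a key.
exam = [
    {
        "stem": "Which skeletal site is preferred for ...?",  # placeholder stem
        "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
        "key": "B",
    },
    # ... remaining items ...
]

def ask(item, model="gpt-4"):
    """Send one question verbatim to the model and return its reply text."""
    prompt = item["stem"] + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in item["options"].items()
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def extract_choice(reply, options):
    """Crude stand-in for the authors' manual matching of free-text answers."""
    for letter in options:
        if reply.strip().upper().startswith(letter):
            return letter
    return None  # purely textual replies would need manual matching

correct = sum(
    extract_choice(ask(item), item["options"]) == item["key"] for item in exam
)
print(f"{correct}/{len(exam)} answered correctly")
```

In the study itself, the raw number correct was additionally converted by the exam website into an estimated ISCD score on the 150-400 scale, a step not reproduced in this sketch.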


Subject(s)
Artificial Intelligence , Certification , Humans , Osteoporosis/diagnosis , Clinical Competence , Educational Measurement/methods , United States
2.
Pac Symp Biocomput; 28: 472-483, 2023.
Article in English | MEDLINE | ID: mdl-36541001

ABSTRACT

AI has shown radiologist-level performance at diagnosis and detection of breast cancer from breast imaging such as ultrasound and mammography. Integration of AI-enhanced breast imaging into a radiologist's workflow through the use of computer-aided diagnosis (CAD) systems may affect the relationship radiologists maintain with their patients. This raises ethical questions about the maintenance of the radiologist-patient relationship and the achievement of the ethical ideal of shared decision-making (SDM) in breast imaging. In this paper we propose a caring radiologist-patient relationship characterized by adherence to four care-ethical qualities: attentiveness, competency, responsiveness, and responsibility. We examine the effect of AI-enhanced imaging on the caring radiologist-patient relationship, using breast imaging to illustrate potential ethical pitfalls.

Drawing on the work of care ethicists, we establish an ethical framework for radiologist-patient contact. Joan Tronto's four-phase model offers corresponding elements that outline a caring relationship, and in conjunction with other care ethicists we propose an ethical framework applicable to the radiologist-patient relationship. Among the elements that support a caring relationship, attentiveness is achieved after AI integration by emphasizing radiologist interaction with the patient. Patients perceive radiologist competency through the radiologist's effective communication and medical interpretation of CAD results. Radiologists are able to administer competent care when their personal perception of their competency is unaffected by AI integration and they effectively identify AI errors. Responsive care is reciprocal care wherein the radiologist responds to the patient's reactions while performing comprehensive ethical framing of AI recommendations. Lastly, responsibility is established when the radiologist demonstrates goodwill and earns patient trust by acting as a mediator between the patient and the AI system.


Subject(s)
Breast Neoplasms , Computational Biology , Humans , Female , Mammography/methods , Breast Neoplasms/diagnostic imaging , Radiologists , Artificial Intelligence