1.
J Am Geriatr Soc; 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39105505

ABSTRACT

BACKGROUND: Frailty is an important predictor of health outcomes, characterized by increased vulnerability due to physiological decline. The Clinical Frailty Scale (CFS) is commonly used for frailty assessment but may be influenced by rater bias. Use of artificial intelligence (AI), particularly large language models (LLMs), offers a promising method for efficient and reliable frailty scoring. METHODS: The study used seven standardized patient scenarios to evaluate the consistency and reliability of CFS scoring by OpenAI's GPT-3.5-turbo model. Two methods were tested: a basic prompt and an instruction-tuned prompt incorporating the CFS definition, a directive for accurate responses, and temperature control. The outputs of the two prompts were compared using the Mann-Whitney U test, inter-rater reliability was assessed with Fleiss' Kappa, and the scores were compared with historic human scores of the same scenarios. RESULTS: The LLM's median scores were similar to those of human raters, with differences of no more than one point. Significant differences in score distributions were observed between the basic and instruction-tuned prompts in five of seven scenarios. The instruction-tuned prompt showed high inter-rater reliability (Fleiss' Kappa of 0.887) and produced consistent responses in all scenarios. Difficulty in scoring was noted in scenarios with less explicit information on activities of daily living (ADLs). CONCLUSIONS: This study demonstrates the potential of LLMs to score clinical frailty consistently and with high reliability, and shows that prompt engineering via instruction-tuning can be a simple but effective approach for optimizing LLMs in healthcare applications. The LLM may overestimate frailty scores when less information about ADLs is provided, possibly because it is less subject to implicit assumptions and extrapolation than human raters. Future research could explore the integration of LLMs in clinical research and frailty-related outcome prediction.
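The reliability statistic reported above, Fleiss' Kappa, is straightforward to compute from raw rating counts. A minimal pure-Python sketch; the rating matrix below is illustrative, not the study's data:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects-x-categories count matrix.

    ratings[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    total = n_subjects * n_raters

    # p_j: overall proportion of assignments falling in category j
    p = [sum(row[j] for row in ratings) / total for j in range(n_categories)]

    # P_i: extent of agreement among raters on subject i
    P = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
         for row in ratings]

    P_bar = sum(P) / n_subjects        # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 scenarios, 3 raters, 3 possible score bands
counts = [
    [3, 0, 0],  # all raters agree on band 1
    [0, 3, 0],
    [0, 2, 1],  # partial disagreement
    [0, 0, 3],
]
print(round(fleiss_kappa(counts), 3))  # -> 0.745
```

For the study's setup, rows would be the seven scenarios and columns the possible CFS scores, with one count per repeated LLM run.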

2.
J Med Internet Res; 26: e57721, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39047282

ABSTRACT

BACKGROUND: Discharge letters are a critical component in the continuity of care between specialists and primary care providers. However, these letters are time-consuming to write, underprioritized in comparison to direct clinical care, and are often tasked to junior doctors. Prior studies assessing the quality of discharge summaries written for inpatient hospital admissions show inadequacies in many domains. Large language models such as GPT have the ability to summarize large volumes of unstructured free text such as electronic medical records and have the potential to automate such tasks, providing time savings and consistency in quality. OBJECTIVE: The aim of this study was to assess the performance of GPT-4 in generating discharge letters written from urology specialist outpatient clinics to primary care providers and to compare their quality against letters written by junior clinicians. METHODS: Fictional electronic records were written by physicians simulating 5 common urology outpatient cases with long-term follow-up. Records comprised simulated consultation notes, referral letters and replies, and relevant discharge summaries from inpatient admissions. GPT-4 was tasked to write discharge letters for these cases with a specified target audience of primary care providers who would be continuing the patient's care. Prompts were written for safety, content, and style. Concurrently, junior clinicians were provided with the same case records and instructional prompts. GPT-4 output was assessed for instances of hallucination. A blinded panel of primary care physicians then evaluated the letters using a standardized questionnaire tool. RESULTS: GPT-4 outperformed human counterparts in information provision (mean 4.32, SD 0.95 vs 3.70, SD 1.27; P=.03) and had no instances of hallucination. 
There were no statistically significant differences in the mean clarity (4.16, SD 0.95 vs 3.68, SD 1.24; P=.12), collegiality (4.36, SD 1.00 vs 3.84, SD 1.22; P=.05), conciseness (3.60, SD 1.12 vs 3.64, SD 1.27; P=.71), follow-up recommendations (4.16, SD 1.03 vs 3.72, SD 1.13; P=.08), and overall satisfaction (3.96, SD 1.14 vs 3.62, SD 1.34; P=.36) between the letters generated by GPT-4 and humans, respectively. CONCLUSIONS: Discharge letters written by GPT-4 had equivalent quality to those written by junior clinicians, without any hallucinations. This study provides a proof of concept that large language models can be useful and safe tools in clinical documentation.
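Comparisons of mean questionnaire scores like those above are typically made with a two-sample test. A minimal pure-Python sketch of Welch's t statistic (which does not assume equal variances); the Likert scores below are illustrative, and the abstract does not specify which test the study used:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

# Hypothetical 1-5 panel ratings for two groups of letters
model_scores = [5, 4, 5, 4, 4]
human_scores = [4, 3, 4, 4, 3]
print(round(welch_t(model_scores, human_scores), 2))  # -> 2.31
```

A p-value would then follow from the t distribution with Welch-Satterthwaite degrees of freedom, omitted here for brevity.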


Subject(s)
Patient Discharge , Humans , Patient Discharge/standards , Electronic Health Records/standards , Single-Blind Method , Language
4.
J Gastroenterol Hepatol; 39(1): 81-106, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37855067

ABSTRACT

BACKGROUND AND AIM: Colonoscopy is commonly used in screening and surveillance for colorectal cancer. Multiple guidelines provide recommendations on the interval between colonoscopies, which can be challenging for non-specialist healthcare providers to navigate. Large language models like ChatGPT are a potential tool for parsing patient histories and providing advice. However, the standard GPT model is not designed for medical use and can hallucinate. One way to overcome these challenges is to provide contextual information from medical guidelines to help the model respond accurately to queries. Our study compares standard GPT-4 against a contextualized model provided with relevant screening guidelines, evaluating whether the models could provide correct advice on screening and surveillance intervals for colonoscopy. METHODS: Relevant guidelines pertaining to colorectal cancer screening and surveillance were formulated into a knowledge base for GPT. We tested 62 example case scenarios (three times each) on standard GPT-4 and on a contextualized model with the knowledge base. RESULTS: The contextualized GPT-4 model outperformed standard GPT-4 in all domains. No high-risk features were missed, and only two cases had hallucination of additional high-risk features. A correct interval to colonoscopy was provided in the majority of cases. Guidelines were appropriately cited in almost all cases. CONCLUSIONS: A contextualized GPT-4 model could identify high-risk features and quote appropriate guidelines without significant hallucination. It gave a correct interval to the next colonoscopy in the majority of cases. This provides proof of concept that ChatGPT with appropriate refinement can serve as an accurate physician assistant.
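Contextualization of the kind described above amounts to retrieving relevant guideline text and prepending it to the query before it reaches the model. A minimal sketch of that pattern using naive keyword-overlap retrieval; the guideline snippets and intervals below are illustrative placeholders, not actual guideline content:

```python
import re

# Toy guideline knowledge base (illustrative snippets, not real guidelines)
GUIDELINES = [
    "1-2 tubular adenomas <10 mm: repeat colonoscopy in 7-10 years.",
    "3-4 tubular adenomas <10 mm: repeat colonoscopy in 3-5 years.",
    "Adenoma >=10 mm or with villous features: repeat colonoscopy in 3 years.",
]

def tokens(text):
    """Lowercase alphanumeric tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, guidelines, top_k=2):
    """Rank guideline snippets by word overlap with the query."""
    q = tokens(query)
    ranked = sorted(guidelines, key=lambda g: len(q & tokens(g)), reverse=True)
    return ranked[:top_k]

def build_prompt(query, guidelines):
    """Prepend retrieved guideline context to the user's case query."""
    context = "\n".join(retrieve(query, guidelines))
    return (
        "Answer using ONLY the guidelines below.\n"
        f"Guidelines:\n{context}\n\n"
        f"Case: {query}\nRecommended surveillance interval:"
    )

print(build_prompt("Patient with 3 tubular adenomas, largest 6 mm", GUIDELINES))
```

A production system would use embedding-based retrieval rather than word overlap, but the prompt structure is the same.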


Subject(s)
Colonoscopy , Colorectal Neoplasms , Humans , Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/prevention & control , Colorectal Neoplasms/epidemiology , Risk Factors , Early Detection of Cancer , Hallucinations
10.
J Am Heart Assoc; 12(7): e026975, 2023 Apr 4.
Article in English | MEDLINE | ID: mdl-36942750

ABSTRACT

BACKGROUND: Electrocardiography (ECG) may be performed as part of preparticipation sports screening. Recommendations on screening athletes to identify individuals with previously unrecognized cardiac disease are robust; however, data guiding the preparticipation screening of unselected populations are scarce. T-wave inversion (TWI) on ECG may suggest an undiagnosed cardiomyopathy. This study aims to describe the prevalence of abnormal TWI in an unselected young male cohort and the outcomes of an echocardiography-guided approach to investigating these individuals for structural heart disease, focusing on the yield for cardiomyopathies. METHODS AND RESULTS: Consecutive young male individuals undergoing a national preparticipation cardiac screening program over 39 months were studied. All underwent resting supine 12-lead ECG. Those manifesting abnormal TWI, defined as negatively deflected T waves of at least 0.1 mV amplitude in any 2 contiguous leads, underwent echocardiography. A total of 69 714 male individuals with a mean age of 17.9±1.1 years were studied. Of these, 562 (0.8%) displayed abnormal TWI, observed most frequently in the anterior territory and least frequently in the lateral territory. A total of 12 individuals (2.1%) were diagnosed with a cardiomyopathy. Cardiomyopathy diagnoses were significantly associated with deeper maximum TWI depth and with the presence of abnormal TWI in the lateral territory, but not with abnormal TWI in the anterior or inferior territories. No individual presenting with TWI restricted solely to leads V1 to V2, to 2 inferior leads, or to both was diagnosed with a cardiomyopathy. CONCLUSIONS: Cardiomyopathy diagnoses were more strongly associated with certain patterns of abnormal TWI. Our findings may support decisions to prioritize echocardiography in these individuals.
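The screening criterion above (negatively deflected T waves of at least 0.1 mV in any 2 contiguous leads) is mechanical enough to express in code. A sketch assuming per-lead T-wave amplitudes in mV, with contiguity simplified to adjacency within fixed territory groups; the groupings are an illustration, not a formal ECG standard:

```python
# Contiguous lead groups by territory (simplified illustration)
TERRITORIES = {
    "anterior": ["V1", "V2", "V3", "V4"],
    "lateral": ["I", "aVL", "V5", "V6"],
    "inferior": ["II", "III", "aVF"],
}

def abnormal_twi(t_amplitudes_mv, threshold_mv=-0.1):
    """Return territories showing T-wave inversion of >=0.1 mV depth
    in at least 2 adjacent leads of that territory.

    t_amplitudes_mv maps lead name -> T-wave amplitude in mV
    (negative = inverted); missing leads are treated as 0.0.
    """
    flagged = []
    for territory, leads in TERRITORIES.items():
        inverted = [t_amplitudes_mv.get(lead, 0.0) <= threshold_mv
                    for lead in leads]
        # any two adjacent leads both inverted?
        if any(a and b for a, b in zip(inverted, inverted[1:])):
            flagged.append(territory)
    return flagged

ecg = {"V1": -0.15, "V2": -0.2, "V3": 0.1, "II": -0.05, "aVL": -0.12}
print(abnormal_twi(ecg))  # -> ['anterior']
```

A single inverted lead, as in aVL here, does not meet the 2-contiguous-lead threshold and is not flagged.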


Subject(s)
Cardiomyopathies , Echocardiography , Heart Diseases , Adolescent , Adult , Humans , Male , Young Adult , Arrhythmias, Cardiac/diagnosis , Cardiomyopathies/diagnosis , Electrocardiography/methods , Heart