Results 1 - 2 of 2

1.
Diagnosis (Berl); 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38446132

ABSTRACT

INTRODUCTION: Clinical reasoning is crucial in medical practice, yet teaching it is challenging because of varied clinical experiences, limited time, and its absence from competency frameworks. Despite these efforts, effective teaching methodologies remain elusive. Strategies such as the One Minute Preceptor (OMP) and SNAPPS have been proposed as solutions, particularly in workplace settings. SNAPPS, introduced in 2003, offers a structured approach but lacks comprehensive evidence of effectiveness, and methodological shortcomings make it difficult to discern its specific effects. We therefore conducted a systematic review to evaluate the impact of SNAPPS on the teaching of clinical reasoning.

CONTENT: We searched PubMed, EMBASE, and CINAHL for randomized controlled trials (RCTs) comparing SNAPPS against other methods. Study selection and data extraction were performed in duplicate. Risk of bias and certainty of evidence were assessed using the Cochrane RoB-2 tool and the GRADE approach.

SUMMARY: We identified five RCTs conducted with medical students and residents. Two compared SNAPPS with an active control, such as the One Minute Preceptor or training with feedback. None reported the effects of SNAPPS in workplace settings (Kirkpatrick Level 3) or on patients (Kirkpatrick Level 4). Low-to-moderate-certainty evidence suggests that SNAPPS increases total presentation length by lengthening the discussion, and that it may increase the number of differential diagnoses and the expression of uncertainties. Low-certainty evidence suggests that SNAPPS may increase the odds of trainees initiating a management plan and seeking clarification.

OUTLOOK: Evidence from this systematic review suggests that SNAPPS has some advantages in terms of clinical reasoning, self-directed learning outcomes, and cost-effectiveness, and that it appears more beneficial when used by residents than by medical students. However, future research should explore outcomes beyond those directly tied to SNAPPS, such as workplace- or patient-related outcomes.

2.
JMIR Med Educ; 9: e48039, 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37768724

ABSTRACT

BACKGROUND: ChatGPT has shown impressive performance on national medical licensing examinations, such as the United States Medical Licensing Examination (USMLE), even passing it with expert-level performance. However, there is little research on its performance on the national licensing medical examinations of low-income countries. In Peru, where almost one in three examinees fails the national licensing medical examination, ChatGPT has the potential to enhance medical education.

OBJECTIVE: We aimed to assess the accuracy of ChatGPT, using GPT-3.5 and GPT-4, on the Peruvian National Licensing Medical Examination (Examen Nacional de Medicina [ENAM]). Additionally, we sought to identify factors associated with incorrect answers provided by ChatGPT.

METHODS: We used the ENAM 2022 data set, consisting of 180 multiple-choice questions, to evaluate the performance of ChatGPT. Various prompts were used, and accuracy was evaluated. The performance of ChatGPT was compared with that of a sample of 1025 examinees. Question type, Peruvian-specific knowledge, discrimination, difficulty, question quality, and subject were analyzed to determine their influence on incorrect answers. Questions that received incorrect answers underwent a three-step process with different prompts to explore whether adding roles and context improved ChatGPT's accuracy.

RESULTS: GPT-4 achieved an accuracy of 86% on the ENAM, followed by GPT-3.5 with 77%; the accuracy of the 1025 examinees was 55%. There was fair agreement (κ=0.38) between GPT-3.5 and GPT-4. Moderate-to-high-difficulty questions were associated with incorrect answers in both the crude and adjusted models for GPT-3.5 (odds ratio [OR] 6.6, 95% CI 2.73-15.95) and GPT-4 (OR 33.23, 95% CI 4.3-257.12). After reinputting the questions that had received incorrect answers, GPT-3.5 went from 41 (100%) to 12 (29%) incorrect answers, and GPT-4 from 25 (100%) to 4 (16%).

CONCLUSIONS: ChatGPT (GPT-3.5 and GPT-4) can achieve expert-level performance on the ENAM, outperforming most of our examinees. Agreement between GPT-3.5 and GPT-4 was fair. Incorrect answers were associated with question difficulty, which may resemble human performance. Furthermore, reinputting the questions that initially received incorrect answers, with prompts containing additional roles and context, improved ChatGPT's accuracy.
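
The evaluation described in this abstract reduces to two calculations: scoring each model's answer letters against the official key (accuracy) and quantifying chance-corrected agreement between the two models (Cohen's kappa). The Python sketch below is a minimal, self-contained illustration of those two calculations only; it is not the authors' code, and the answer letters in it are invented purely for demonstration.

from collections import Counter

def accuracy(preds, key):
    """Fraction of questions answered correctly against the answer key."""
    return sum(p == k for p, k in zip(preds, key)) / len(key)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two sets of answers."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)   # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical answer letters (A-E) for a handful of MCQ items, for illustration only:
answer_key = ["A", "C", "B", "D", "E", "A", "C", "B"]
gpt35      = ["A", "C", "B", "D", "A", "A", "E", "B"]
gpt4       = ["A", "C", "B", "D", "E", "A", "E", "B"]

print(f"GPT-3.5 accuracy: {accuracy(gpt35, answer_key):.0%}")
print(f"GPT-4 accuracy:   {accuracy(gpt4, answer_key):.0%}")
print(f"Cohen's kappa (GPT-3.5 vs GPT-4): {cohens_kappa(gpt35, gpt4):.2f}")

Run on the full 180-question set with the models' actual responses, the same two functions would reproduce the kind of accuracy and κ figures reported above.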
