Results 1 - 2 of 2
1.
GMS J Med Educ ; 41(2): Doc20, 2024.
Article in English | MEDLINE | ID: mdl-38779693

ABSTRACT

As medical educators grapple with the consistent demand for high-quality assessments, the integration of artificial intelligence presents a novel solution. This how-to article delves into the mechanics of employing ChatGPT for generating Multiple Choice Questions (MCQs) within the medical curriculum. Focusing on the intricacies of prompt engineering, we elucidate the steps and considerations imperative for achieving targeted, high-fidelity results. The article presents varying outcomes based on different prompt structures, highlighting the AI's adaptability in producing questions of distinct complexities. While emphasizing the transformative potential of ChatGPT, we also spotlight challenges, including the AI's occasional "hallucination", underscoring the importance of rigorous review. This guide aims to furnish educators with the know-how to integrate AI into their assessment creation process, heralding a new era in medical education tools.
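The article centres on prompt engineering; it does not publish its exact prompts here, so the following is only a minimal illustrative sketch of how a structured MCQ-generation prompt might be assembled and sent to a model. The prompt wording, model name, and output format are assumptions for illustration, using the openai Python SDK (>= 1.0) with an API key in the environment.

```python
# Minimal sketch of a structured MCQ-generation prompt.
# Prompt wording, model name, and output format are illustrative only,
# not the prompts described in the article.
from openai import OpenAI

client = OpenAI()

def build_mcq_prompt(topic: str, level: str, n_questions: int = 3) -> str:
    """Assemble a prompt that pins down audience, item format, and constraints."""
    return (
        f"You are an item writer for a medical school exam.\n"
        f"Write {n_questions} single-best-answer multiple choice questions "
        f"on the topic '{topic}' for {level} students.\n"
        "For each question provide: a clinical vignette stem, options A-E, "
        "the correct answer letter, and a one-sentence explanation.\n"
        "Do not invent references or patient data beyond the vignette."
    )

prompt = build_mcq_prompt(topic="acid-base disorders", level="third-year")
response = client.chat.completions.create(
    model="gpt-4o",          # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,         # lower temperature for more reproducible items
)
print(response.choices[0].message.content)
```

As the abstract stresses, any generated items would still require expert review before use, since the model can "hallucinate" plausible-sounding but incorrect content.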


Subjects
Artificial Intelligence, Curriculum, Medical Education, Educational Measurement, Humans, Medical Education/methods, Educational Measurement/methods
2.
GMS J Med Educ ; 37(7): Doc99, 2020.
Article in English | MEDLINE | ID: mdl-33364378

ABSTRACT

Objective: COVID-19 challenges curriculum managers worldwide to create digital substitutes for classroom teaching. Case-based teaching formats under expert supervision can be used as a substitute for practical bedside teaching, where the focus is on teaching clinical reasoning skills.

Methods: For medical students of LMU and TU Munich, the interactive, case-based, and supervised teaching format of Clinical Case Discussion (CCD) was digitised and implemented as dCCD in their respective curricula. Case discussions were realised as videoconferences, led by a student moderator, and took place under the supervision of a board-certified clinician. To prevent passive participation, additional cognitive activations were implemented. Acceptance, usability, and subjective learning outcomes were assessed in dCCDs by means of a special evaluation concept.

Results: With regard to acceptance, students were of the opinion that they had learned effectively by participating in dCCDs (M=4.31; SD=1.37). The majority of students also stated that they would recommend the course to others (M=4.23; SD=1.62). The technical implementation of the teaching format was judged positively overall, but findings for usability were heterogeneous. Students rated their clinical reasoning skills at the end of the dCCDs (M=4.43; SD=0.66) as significantly higher than at the beginning (M=4.33; SD=0.69), albeit with a small effect size, t(181)=-2.352, p=.020, d=0.15.

Conclusion: Our evaluation data show that the dCCD format is well accepted by students as a substitute for face-to-face teaching. In the next step, we plan to examine the extent to which participation in dCCDs leads to an increase in objectively measured clinical reasoning skills, analogous to a face-to-face CCD with on-site attendance.
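The reported pre/post comparison (t(181)=-2.352, p=.020, d=0.15) corresponds to a paired t-test with a paired-samples effect size. The sketch below reproduces that calculation in form only, with synthetic ratings standing in for the study data; it assumes numpy and scipy are available.

```python
# Sketch of the pre/post comparison reported above: a paired t-test on
# self-rated clinical reasoning skills plus Cohen's d for paired samples.
# The ratings below are synthetic placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 182                      # matches the reported df of 181
pre = rng.normal(4.33, 0.69, n_students).clip(1, 6)
post = (pre + rng.normal(0.10, 0.55, n_students)).clip(1, 6)

t_stat, p_value = stats.ttest_rel(pre, post)   # negative t when post > pre

diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)      # d for paired samples

print(f"t({n_students - 1}) = {t_stat:.3f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```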


Subjects
COVID-19/epidemiology, Clinical Decision-Making/methods, Distance Education/organization & administration, Medical Education/organization & administration, Videoconferencing/organization & administration, Clinical Competence, Distance Education/standards, Medical Education/standards, Educational Measurement, Humans, Pandemics, SARS-CoV-2, Medical Students/psychology, Videoconferencing/standards