1.
Acad Med ; 99(5): 534-540, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38232079

ABSTRACT

PURPOSE: Learner development and promotion rely heavily on narrative assessment comments, but narrative assessment quality is rarely evaluated in medical education. Educators have developed tools such as the Quality of Assessment for Learning (QuAL) tool to evaluate the quality of narrative assessment comments; however, scoring the comments generated in medical education assessment programs is time intensive. The authors developed a natural language processing (NLP) model for applying the QuAL score to narrative supervisor comments.

METHOD: A sample of 2,500 entrustable professional activity (EPA) assessments was randomly extracted and deidentified from the McMaster (1,250 comments) and Saskatchewan (1,250 comments) emergency medicine (EM) residency training programs during the 2019-2020 academic year. Comments were rated using the QuAL score by 25 EM faculty members and 25 EM residents. The results were used to develop and test an NLP model to predict the overall QuAL score and the QuAL subscores.

RESULTS: All 50 raters completed the rating exercise. Approximately 50% of the comments had perfect agreement on the QuAL score; the remaining discrepancies were resolved by the study authors. Providing a meaningful suggestion for improvement was the key differentiator between high- and moderate-quality feedback. The overall QuAL model predicted the exact human-rated score, or a score within 1 point of it, in 87% of instances. Overall model performance was excellent, especially on the subtasks concerning suggestions for improvement and the link between resident performance and improvement suggestions, which achieved balanced accuracies of 85% and 82%, respectively.

CONCLUSIONS: This model could save considerable time for programs that want to rate the quality of supervisor comments, with the potential to automatically score a large volume of comments. It could be used to provide faculty with real-time feedback or to quantify and track the quality of assessment comments at the faculty, rotation, program, or institution level.
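The two performance metrics reported above (exact-or-within-1 agreement and balanced accuracy) are straightforward to compute for any set of predicted and human-rated QuAL scores. A minimal sketch using hypothetical ratings on the 0-5 QuAL scale (the study's actual model, data, and scores are not reproduced here):

```python
def within_one_accuracy(predicted, actual):
    """Fraction of predictions equal to, or within 1 point of, the human-rated score."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 1)
    return hits / len(actual)

def balanced_accuracy(predicted, actual):
    """Mean of per-class recall, so rare score levels weigh equally with common ones."""
    recalls = []
    for cls in set(actual):
        idx = [i for i, a in enumerate(actual) if a == cls]
        correct = sum(1 for i in idx if predicted[i] == cls)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical paired ratings on the 0-5 QuAL scale (illustrative only)
actual    = [5, 4, 3, 2, 1, 0, 4, 3]
predicted = [5, 3, 3, 2, 2, 0, 4, 5]
```

Balanced accuracy is the natural choice here because high-quality comments are rarer than moderate ones, and plain accuracy would reward a model that simply predicts the most common score.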


Subject(s)
Competency-Based Education , Internship and Residency , Natural Language Processing , Humans , Competency-Based Education/methods , Internship and Residency/standards , Clinical Competence/standards , Narration , Educational Measurement/methods , Educational Measurement/standards , Emergency Medicine/education , Faculty, Medical/standards
2.
Can Med Educ J ; 13(6): 19-35, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36440075

ABSTRACT

Background: Competency-based medical education (CBME) relies on the supervisor narrative comments contained within entrustable professional activity (EPA) assessments for programmatic assessment, but the quality of these supervisor comments goes unassessed. There is validity evidence supporting the QuAL (Quality of Assessment for Learning) score for rating the usefulness of short narrative comments in direct observation.

Objective: We sought to establish validity evidence for the QuAL score as a rating of the quality of supervisor narrative comments contained within an EPA by surveying the key end users of EPA narrative comments: residents, academic advisors, and competence committee members.

Methods: In 2020, the authors randomly selected 52 de-identified narrative comments from two emergency medicine (EM) EPA databases using purposeful sampling. Six collaborators (two residents, two academic advisors, and two competence committee members) were recruited from each of four EM residency programs (Saskatchewan, McMaster, Ottawa, and Calgary) to rate these comments with a utility score and the QuAL score. The correlation between the utility and QuAL scores was calculated using Pearson's correlation coefficient. Sources of variance and reliability were calculated using a generalizability study.

Results: All collaborators (n = 24) completed the full study. The QuAL score had a high positive correlation with the utility score among residents (r = 0.80) and academic advisors (r = 0.75), and a moderately high correlation among competence committee members (r = 0.68). The generalizability study found that the major source of variance was the comment itself, indicating that the tool performs consistently across raters.

Conclusion: The QuAL score may serve as an outcome measure for program evaluation of supervisors and as a resource for faculty development.


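The Pearson correlations reported between the utility and QuAL scores can be computed directly from paired ratings. A minimal sketch with hypothetical paired ratings (the study's actual rating data are not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired rating lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired ratings: QuAL score (0-5) and a utility score (illustrative only)
qual    = [5, 4, 4, 3, 2, 1, 0, 3]
utility = [5, 5, 3, 3, 2, 2, 1, 4]
r = pearson_r(qual, utility)
```

Values of r near 0.7-0.8, as reported in the study, indicate that raters who found a comment useful also tended to score it highly on the QuAL tool.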
