Results 1 - 20 of 45
1.
CJEM ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801634

ABSTRACT

Proficiency in Quality Improvement and Patient Safety (QIPS) methodologies has been identified as a standard of residency training; however, there is no consensus on how to achieve these competencies. We used Kern's model of curricular development to create a QIPS curriculum for the local Emergency Medicine (EM) residency training program. The curriculum was designed following best-practice recommendations for QIPS education and took the form of a 10-hour educational experience that included two live, in-person sessions. The curriculum was delivered to a mix of local transition-to-practice residents and faculty members. Participants reported favorable outcomes and objectively demonstrated acquisition of QIPS knowledge. This curriculum serves as a model that could be adapted by other residency training programs seeking to implement their own QIPS curricula.


2.
CJEM ; 26(3): 137-138, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38436909
3.
Acad Med ; 99(5): 534-540, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38232079

ABSTRACT

PURPOSE: Learner development and promotion rely heavily on narrative assessment comments, but narrative assessment quality is rarely evaluated in medical education. Educators have developed tools such as the Quality of Assessment for Learning (QuAL) tool to evaluate the quality of narrative assessment comments; however, scoring the comments generated in medical education assessment programs is time intensive. The authors developed a natural language processing (NLP) model for applying the QuAL score to narrative supervisor comments. METHOD: A sample of 2,500 entrustable professional activity assessments was randomly extracted and deidentified from the McMaster (1,250 comments) and Saskatchewan (1,250 comments) emergency medicine (EM) residency training programs during the 2019-2020 academic year. Comments were rated using the QuAL score by 25 EM faculty members and 25 EM residents. The results were used to develop and test an NLP model to predict the overall QuAL score and QuAL subscores. RESULTS: All 50 raters completed the rating exercise. Approximately 50% of the comments had perfect agreement on the QuAL score, with the remainder resolved by the study authors. Creating a meaningful suggestion for improvement was the key differentiator between high- and moderate-quality feedback. The overall QuAL model predicted the exact human-rated score, or a score within 1 point above or below it, in 87% of instances. Overall model performance was excellent, especially on the subtasks addressing suggestions for improvement and the link between resident performance and improvement suggestions, which achieved balanced accuracies of 85% and 82%, respectively. CONCLUSIONS: This model could save considerable time for programs that want to rate the quality of supervisor comments, with the potential to automatically score a large volume of comments. The model could be used to provide faculty with real-time feedback or to quantify and track the quality of assessment comments at the faculty, rotation, program, or institution level.
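
The abstract above does not specify the model architecture, so the following is only a rough, hypothetical sketch of the general approach: a bag-of-words text classifier that maps narrative comments to QuAL-style score levels. The comments, labels, and model choice below are illustrative and are not drawn from the study.

```python
# Hypothetical sketch (not the authors' model): score narrative assessment
# comments with a TF-IDF + logistic-regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy comments with made-up QuAL-style labels on a 0-5 scale.
comments = [
    "Good shift, keep reading.",
    "Solid workup; next time commit to a disposition earlier and explain your reasoning.",
    "Nice job.",
    "Thorough airway assessment; to improve, verbalize a backup plan before induction.",
]
qual_labels = [1, 5, 0, 4]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("clf", LogisticRegression(max_iter=1000)),      # multiclass classifier over score levels
])
model.fit(comments, qual_labels)

# Predicted QuAL-style score for an unseen comment.
print(model.predict(["Great with patients; consider summarizing the plan back to the nurse."]))
```

In practice, balanced accuracies like those reported above would be estimated on a held-out set of human-rated comments rather than on the training data.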


Subject(s)
Competency-Based Education , Internship and Residency , Natural Language Processing , Humans , Competency-Based Education/methods , Internship and Residency/standards , Clinical Competence/standards , Narration , Educational Measurement/methods , Educational Measurement/standards , Emergency Medicine/education , Faculty, Medical/standards
4.
CJEM ; 25(7): 558-567, 2023 07.
Article in English | MEDLINE | ID: mdl-37389772

ABSTRACT

BACKGROUND: The transition from residency to unsupervised practice represents a critical stage in learning and professional identity formation, yet there is a paucity of literature to inform residency curricula and emergency department transition programming for new faculty. OBJECTIVE: The objective of this study was to develop consensus-based recommendations to optimize the transition-to-practice phase of emergency medicine training. METHODS: A literature review and the results of a survey of emergency medicine (EM) residency program directors informed focus groups of recent (within 5 years) EM graduates. Focus group transcripts were analyzed using conventional content analysis. Preliminary recommendations, based on the identified themes, were drafted and presented at the 2022 Canadian Association of Emergency Physicians (CAEP) Academic Symposium on Education. Through a live presentation, symposium attendees representing the Canadian national EM community participated in a facilitated discussion of the recommendations. The authors incorporated this feedback to construct a final set of 14 recommendations: 8 targeted toward residency training programs and 6 specific to department leadership. CONCLUSION: The Canadian EM community used a structured process to develop 14 best-practice recommendations to enhance the transition-to-practice phase of residency training as well as the transition period in the careers of junior attending physicians.


Subject(s)
Emergency Medicine , Internship and Residency , Humans , Canada , Curriculum , Emergency Service, Hospital , Surveys and Questionnaires , Emergency Medicine/education
5.
AEM Educ Train ; 7(2): e10849, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36994315

ABSTRACT

Background: Without a clear understanding of the factors contributing to the effective acquisition of high-quality entrustable professional activity (EPA) assessments, trainees, supervising faculty, and training programs may lack appropriate strategies for successful EPA implementation and utilization. The purpose of this study was to identify barriers and facilitators to acquiring high-quality EPA assessments in Canadian emergency medicine (EM) training programs. Methods: We conducted a qualitative framework analysis study using the Theoretical Domains Framework (TDF). Semistructured interviews of EM resident and faculty participants were audio recorded, deidentified, and coded line by line by two authors to extract themes and subthemes across the domains of the TDF. Results: From 14 interviews (eight faculty and six residents), we identified major themes and subthemes within the 14 TDF domains for barriers and facilitators to EPA acquisition for both faculty and residents. The two most frequently cited domains (and their frequencies) among residents and faculty were environmental context and resources (56) and behavioral regulation (48). Example strategies for improving EPA acquisition include orienting residents to the competency-based medical education (CBME) paradigm, recalibrating expectations relating to "low ratings" on EPAs, engaging in continuous faculty development to ensure familiarity and fluency with EPAs, and implementing longitudinal coaching programs between residents and faculty to encourage repeated longitudinal interactions and high-quality, specific feedback. Conclusions: We identified key strategies to support residents, faculty, programs, and institutions in overcoming barriers and improving EPA assessment processes. This is an important step toward ensuring the successful implementation of CBME and the effective operationalization of EPAs within EM training programs.

6.
Can Med Educ J ; 14(6): 78-85, 2023 12.
Article in English | MEDLINE | ID: mdl-38226296

ABSTRACT

Background: Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable for assessing narrative feedback in other postgraduate programs. Methods: Fifty sets of EPA narratives from a single academic year of our competency-based medical education postgraduate anesthesia program were selected by stratified sampling within defined parameters [e.g., resident gender and stage of training, assessor gender, Competence by Design training level, and word count (≥17 or <17 words)]. Two competency committee members and two medical students rated the quality of the narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed by the QuAL score. The authors used generalizability and decision studies to estimate the reliability and generalizability coefficients. Results: The faculty's utility and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88). Conclusions: The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback, and both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
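
As a small illustration of the correlation step described above, the sketch below computes Kendall's tau-b between utility and QuAL ratings; scipy's kendalltau uses the tau-b variant by default. The score values are hypothetical, not study data.

```python
# Hypothetical example: Kendall's tau-b between utility and QuAL ratings
# of the same set of narrative comments.
from scipy.stats import kendalltau

utility_scores = [1, 2, 2, 3, 4, 4, 5, 5, 3, 1]  # perceived usefulness of each narrative
qual_scores    = [0, 1, 2, 3, 4, 3, 5, 5, 2, 1]  # QuAL rating of the same narrative

tau, p_value = kendalltau(utility_scores, qual_scores)  # tau-b accounts for tied ranks
print(f"Kendall's tau-b = {tau:.3f}, p = {p_value:.4f}")
```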


Subject(s)
Anesthesiology , Education, Medical , Humans , Feedback , Reproducibility of Results , Clinical Competence
7.
Can Med Educ J ; 13(6): 19-35, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36440075

ABSTRACT

Background: Competency-based medical education (CBME) relies on the supervisor narrative comments contained within entrustable professional activity (EPA) assessments for programmatic assessment, but the quality of these supervisor comments goes unassessed. There is validity evidence supporting the QuAL (Quality of Assessment for Learning) score for rating the usefulness of short narrative comments in direct observation. Objective: We sought to establish validity evidence for the QuAL score in rating the quality of supervisor narrative comments contained within an EPA by surveying the key end-users of EPA narrative comments: residents, academic advisors, and competence committee members. Methods: In 2020, the authors randomly selected 52 deidentified narrative comments from two emergency medicine EPA databases using purposeful sampling. Six collaborators (two residents, two academic advisors, and two competence committee members) were recruited from each of four EM residency programs (Saskatchewan, McMaster, Ottawa, and Calgary) to rate these comments with a utility score and the QuAL score. The correlation between the utility and QuAL scores was calculated using Pearson's correlation coefficient. Sources of variance and reliability were estimated using a generalizability study. Results: All collaborators (n = 24) completed the full study. The QuAL score had a high positive correlation with the utility score among residents (r = 0.80) and academic advisors (r = 0.75) and a moderately high correlation among competence committee members (r = 0.68). The generalizability study found that the major source of variance was the comment itself, indicating that the tool performs well across raters. Conclusion: The QuAL score may serve as an outcome measure for program evaluation of supervisors and as a resource for faculty development.
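
The generalizability analysis above decomposes score variance into comment, rater, and residual components. The sketch below shows one common way to estimate these components for a fully crossed comments-by-raters design and to compute relative (Epsilon) and absolute (Phi) coefficients; the formulas follow the standard two-way random-effects ANOVA approach, and the ratings are simulated, so this is an illustration rather than the study's actual analysis.

```python
# Hypothetical sketch of a generalizability (G) study for a crossed
# comments x raters design, with D-study coefficients for n' raters.
import numpy as np

def g_study(scores: np.ndarray, n_raters_decision: int) -> dict:
    """scores: matrix of shape (n_comments, n_raters)."""
    n_c, n_r = scores.shape
    grand = scores.mean()
    comment_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares from the two-way crossed random-effects ANOVA.
    ms_c = n_r * np.sum((comment_means - grand) ** 2) / (n_c - 1)
    ms_r = n_c * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    resid = scores - comment_means[:, None] - rater_means[None, :] + grand
    ms_cr = np.sum(resid ** 2) / ((n_c - 1) * (n_r - 1))

    # Estimated variance components.
    var_cr = ms_cr                           # comment-by-rater interaction / error
    var_c = max((ms_c - ms_cr) / n_r, 0.0)   # comments (object of measurement)
    var_r = max((ms_r - ms_cr) / n_c, 0.0)   # raters

    n = n_raters_decision
    epsilon = var_c / (var_c + var_cr / n)            # relative (generalizability) coefficient
    phi = var_c / (var_c + (var_r + var_cr) / n)      # absolute (dependability) coefficient
    return {"var_comment": var_c, "var_rater": var_r, "var_residual": var_cr,
            "Epsilon": epsilon, "Phi": phi}

# Simulated QuAL-style ratings: 6 comments scored by 4 raters.
rng = np.random.default_rng(0)
true_quality = rng.integers(0, 6, size=6).astype(float)
ratings = true_quality[:, None] + rng.normal(0, 0.5, size=(6, 4))

print(g_study(ratings, n_raters_decision=2))
```

A comment variance component that dominates the rater and residual components is what supports the conclusion above that the tool performs consistently across raters.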


8.
CJEM ; 24(6): 561-562, 2022 09.
Article in English | MEDLINE | ID: mdl-36071323
9.
Ultrasound J ; 14(1): 1, 2022 Jan 03.
Article in English | MEDLINE | ID: mdl-34978635

ABSTRACT

BACKGROUND: While intra-arrest echocardiography can be used to guide and monitor chest compression quality, it is not currently feasible at the scene of out-of-hospital cardiac arrests. Rapid, automated sonographic localization of the heart may give first responders guidance toward an optimal area of compression without requiring them to interpret ultrasound images. In this proof-of-concept porcine study, we sought to describe the performance of an automated ultrasound device in correctly identifying and tracing the borders of the heart in three distinct states: pre-arrest, arrest, and late arrest. METHODS: An automated ultrasound device (bladder scanner) was placed on the chests of 7 swine along the left sternal border (4th-8th intercostal spaces). Scanner-generated images were recorded for each space during pre-arrest, arrest, and finally late arrest. A total of 828 images of the left ventricle (LV) and LV outflow tract were randomized, and 150 (50 per state) were selected for analysis. Scanner tracings of the heart were then digitally obscured to facilitate tracing by expert reviewers who were blinded to the physiologic state. Reviewer tracings were compared with bladder scanner tracings, with concordance between the images determined via the Sørensen-Dice index (SDI). RESULTS: Compared with human reviewers, the bladder scanner was able to identify and trace the borders of the heart during cardiac arrest. The bladder scanner performed best at the time of arrest (SDI 0.900 ± 0.059). As resuscitation efforts continued and time from initial arrest increased, the scanner's performance decreased dramatically (SDI 0.597 ± 0.241 in late arrest). CONCLUSION: An automated ultrasound device (bladder scanner) reliably traced porcine hearts during cardiac arrest. It may be possible to develop a device that indicates where compressions should be performed without requiring the operator to interpret ultrasound images. Further investigation into rapid, automated sonographic localization of the heart to identify the area of compression in out-of-hospital cardiac arrest is warranted.
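
Concordance above is quantified with the Sørensen-Dice index, SDI = 2|A ∩ B| / (|A| + |B|) for two tracings A and B. The sketch below computes it for two small, hypothetical binary masks standing in for a scanner tracing and a reviewer tracing; the masks are made up for illustration.

```python
# Hypothetical example: Sørensen-Dice index between two binary tracings.
import numpy as np

def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())  # assumes at least one nonzero mask

# Toy 5x5 tracings of a cardiac border by the scanner and by a reviewer.
scanner_tracing  = np.array([[0, 1, 1, 1, 0],
                             [0, 1, 1, 1, 0],
                             [0, 1, 1, 1, 0],
                             [0, 0, 1, 1, 0],
                             [0, 0, 0, 0, 0]])
reviewer_tracing = np.array([[0, 1, 1, 0, 0],
                             [0, 1, 1, 1, 0],
                             [0, 1, 1, 1, 0],
                             [0, 1, 1, 1, 0],
                             [0, 0, 0, 0, 0]])

print(f"SDI = {dice_index(scanner_tracing, reviewer_tracing):.3f}")
```

An SDI of 1.0 indicates perfectly overlapping tracings, while values near 0 indicate little shared area, which is why the index drops as scanner performance degrades in late arrest.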

12.
J Child Adolesc Trauma ; 14(2): 271-276, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33986912

ABSTRACT

Acute medical management of traumatic brain injury (TBI) can be challenging outside of the resuscitation bay, specifically while obtaining a computed tomography (CT) scan of the brain. We sought to determine the management practices of Canadian traumatologists for pediatric patients with severe TBI requiring CT in the emergency department (ED). In 2019, surveys were sent to trauma directors in hospitals across Canada to ascertain their clinical practices. Team members present during the CT scan included physicians (89%), registered nurses (100%), and respiratory therapists (38%). The average time to and from the CT scanner was one hour. Over half of respondents (56%) had experienced an adverse event during CT, with variable access (11-56%) to necessary resuscitation equipment and medications. Significant hypotension (44%) was the most common adverse event. With the exception of end-tidal CO2 monitoring (56%), heart rate, rhythm, respiratory rate, oxygen saturation, and blood pressure were always monitored during a CT scan. Head-of-bed position was approximately equally distributed between flat (44%) and elevated (56%). The practice variability of Canadian traumatologists may reflect a lack of evidence to guide patient management. Future research and knowledge translation efforts are needed to optimize patient care during neuroimaging.

15.
Can Med Educ J ; 11(6): e31-e45, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33349752

ABSTRACT

BACKGROUND: Canadian specialty programs are implementing Competence by Design, a competency-based medical education (CBME) program that requires frequent assessments of entrustable professional activities. To be used for learning, the large amount of assessment data needs to be interpreted by residents, but little work has been done to determine how to support residents in visualizing and interacting with these data. Within the University of Saskatchewan emergency medicine residency program, we sought to determine how our residents' CBME assessment data should be presented to support their learning and to develop a dashboard that meets our residents' needs. METHODS: We utilized a design-based research process to identify and address resident needs surrounding the presentation of their assessment data. Data were collected within the emergency medicine residency program at the University of Saskatchewan via four resident focus groups held over 10 months. Focus group discussions were analyzed using a grounded theory approach to identify resident needs, which guided the development of a dashboard containing elements (data, analytics, and visualizations) that support residents' interpretation of the data. The identified needs are described using quotes from the focus groups as well as visualizations of the dashboard elements. RESULTS: Resident needs were classified under three themes: (1) provide guidance through the assessment program, (2) present workplace-based assessment data, and (3) present other assessment data. Seventeen dashboard elements were designed to address these needs. CONCLUSIONS: Our design-based research process identified resident needs and produced dashboard elements to meet them. This work will inform the creation and evolution of CBME assessment dashboards designed to support resident learning.


17.
J Emerg Med ; 59(3): 384-391, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32593578

ABSTRACT

BACKGROUND: In the prehospital setting, pit-crew models of cardiopulmonary resuscitation (CPR) have shown improvements in survival after out-of-hospital cardiac arrest (OHCA). Certain districts in North America have adopted this model, including Saskatoon, Saskatchewan, Canada. OBJECTIVES: Our objective was to determine whether pit-crew CPR has an impact on survival to discharge after OHCA in Saskatoon, Canada. METHODS: This was a retrospective pre- and postintervention study. All adult patients who had an OHCA of presumed cardiac origin between January 1, 2011 and December 31, 2017, in which the resuscitation attempt included CPR by trained prehospital responders, were considered for analysis. Our primary outcome was survival to discharge; survival to admission and return of spontaneous circulation were secondary outcomes. RESULTS: There were 860 OHCAs considered for our study. After 46 exclusions, there were 442 patients in the non-pit-crew group (average age 63.7 years; 64.5% male) and 372 in the pit-crew group (average age 63.5 years; 67.5% male). Survival to discharge after an OHCA was 10.4% (95% confidence interval [CI] 7.7-13.6%) in the non-pit-crew group and 12.4% (95% CI 9.2-16.2%) in the pit-crew group, a difference that did not reach statistical significance. Return of spontaneous circulation and survival to admission were 48.4% and 31.3%, respectively, in the non-pit-crew group and 46.7% and 32.3%, respectively, in the pit-crew group. CONCLUSIONS: In our study, implementation of a pit-crew CPR model was not associated with an improvement in survival to discharge after OHCA.
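
As a quick illustration of the interval estimates reported above, the sketch below computes 95% confidence intervals for survival to discharge as a binomial proportion. The survivor counts are back-calculated from the reported percentages and group sizes, and the study's exact CI method is not stated, so the Clopper-Pearson choice here is an assumption.

```python
# Hypothetical reconstruction of the reported 95% CIs for survival to discharge.
# Counts are back-calculated: 10.4% of 442 ≈ 46 survivors; 12.4% of 372 ≈ 46 survivors.
from statsmodels.stats.proportion import proportion_confint

for label, survivors, n in [("non-pit-crew", 46, 442), ("pit-crew", 46, 372)]:
    low, high = proportion_confint(survivors, n, alpha=0.05, method="beta")  # Clopper-Pearson
    print(f"{label}: {survivors / n:.1%} (95% CI {low:.1%}-{high:.1%})")
```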


Subject(s)
Cardiopulmonary Resuscitation , Emergency Medical Services , Out-of-Hospital Cardiac Arrest , Adult , Canada , Female , Humans , Male , Middle Aged , Out-of-Hospital Cardiac Arrest/therapy , Retrospective Studies
18.
CJEM ; 22(3): 291-294, 2020 05.
Article in English | MEDLINE | ID: mdl-32340634

ABSTRACT

A 12-year-old male injured his ankle while playing hockey (Figure 1). His dad reports that he was checked into the boards. His ankle is swollen, but does not appear deformed. His distal neurovascular exam is normal. There is bony tenderness over the lateral malleolus in accordance with the Ottawa Ankle Rules.


Subject(s)
Ankle Injuries , Fractures, Bone , Salter-Harris Fractures , Ankle Joint , Child , Emergency Service, Hospital , Humans , Male
19.
Can Med Educ J ; 11(1): e16-e34, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32215140

ABSTRACT

BACKGROUND: Competency-based programs are being adopted in medical education around the world. Competence Committees must visualize learner assessment data effectively to support their decision-making. Dashboards play an integral role in decision support systems in other fields. Design-based research allows the simultaneous development and study of educational environments. METHODS: We utilized a design-based research process within the emergency medicine residency program at the University of Saskatchewan to identify the data, analytics, and visualizations needed by its Competence Committee, and developed a dashboard incorporating these elements. Narrative data were collected from two focus groups, five interviews, and the observation of two Competence Committee meetings. Data were qualitatively analyzed to develop a thematic framework outlining the needs of the Competence Committee and to inform the development of the dashboard. RESULTS: The qualitative analysis identified four Competence Committee needs (Explore Workplace-Based Assessment Data, Explore Other Assessment Data, Understand the Data in Context, and Ensure the Security of the Data). These needs were described with narratives and represented through visualizations of the dashboard elements. CONCLUSIONS: This work addresses the practical challenges of supporting data-driven decision making by Competence Committees and will inform the development of dashboards for programs, institutions, and learner management systems.


20.
CJEM ; 22(2): 194-203, 2020 03.
Article in English | MEDLINE | ID: mdl-32209155

ABSTRACT

OBJECTIVES: To address the increasing demand for the use of simulation for assessment, our objective was to review the literature pertaining to simulation-based assessment and to develop a set of consensus-based, expert-informed recommendations on its use, presented at the 2019 Canadian Association of Emergency Physicians (CAEP) Academic Symposium on Education. METHODS: A panel of Emergency Medicine (EM) physicians from across Canada, with leadership roles in simulation and/or assessment, was formed to develop the recommendations. An initial scoping literature review was conducted to extract principles of simulation-based assessment. These principles were refined via thematic analysis and then used to derive a set of recommendations for the use of simulation-based assessment, organized by the Consensus Framework for Good Assessment. The draft recommendations were reviewed and revised via a national stakeholder survey and then presented and further revised at the consensus conference to generate a final set of recommendations on the use of simulation-based assessment in EM. CONCLUSION: We developed a set of recommendations for simulation-based assessment, using consensus-based, expert-informed methods, across the domains of validity, reproducibility, feasibility, educational and catalytic effects, acceptability, and programmatic assessment. While the precise role of simulation-based assessment will be a subject of continued debate, we propose that these recommendations be used to assist educators and program leaders as they incorporate simulation-based assessment into their programs of assessment.


Subject(s)
Emergency Medicine , Societies, Medical , Canada , Consensus , Humans , Reproducibility of Results