1.
Can J Anaesth ; 66(12): 1440-1449, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31559541

ABSTRACT

PURPOSE: Simulated clinical events provide a means to evaluate a practitioner's performance in a manner standardized across all candidates tested. We sought to provide evidence for the validity of simulation-based assessment tools in simulated pediatric anesthesia emergencies.

METHODS: Nine centres in two countries recruited subjects to participate in simulated operating room events. Participants ranged in anesthesia experience from junior residents to staff anesthesiologists. Performances were video recorded for review and scored by specially trained, blinded, expert raters. The rating tools consisted of scenario-specific checklists and a global rating scale that allowed the rater to judge the subject's performance and, by extension, preparedness for independent practice. The reliability of the tools was classified as "substantial" (intraclass correlation coefficients ranged from 0.84 to 0.96 for the checklists and from 0.85 to 0.94 for the global rating scale).

RESULTS: Three hundred and ninety-one simulation encounters were analysed. Senior trainees and staff significantly outperformed junior trainees (P = 0.04 and P < 0.001, respectively). The effect size of grade (junior vs senior trainee vs staff) on performance was classified as "medium" (partial η² = 0.06). Performance deficits were observed across all grades of anesthesiologist, particularly in two of the scenarios.

CONCLUSIONS: This study supports the validity of our simulation-based anesthesiologist assessment tools in several domains of validity. We also describe residual challenges regarding the validity of our tools, note some cautions about the intended consequences of their use, and identify opportunities for further research.


Subject(s)
Anesthesia/standards , Anesthesiology/education , Emergency Medical Services/standards , Pediatrics/standards , Simulation Training/standards , Adolescent , Anesthesiologists , Checklist , Child , Child, Preschool , Clinical Competence , Humans , Infant , Infant, Newborn , Internship and Residency , Judgment , Operating Rooms/organization & administration , Reproducibility of Results
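The "medium" effect size reported above (partial η² = 0.06) follows the standard definition, the ratio of the effect's sum of squares to the effect plus error sums of squares. A minimal sketch with illustrative sums of squares (not the study's actual ANOVA values):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta-squared: the proportion of variance attributable to an
    effect, ignoring other effects: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Illustrative values only: an effect SS of 6 against an error SS of 94
# yields partial eta-squared = 0.06, conventionally a "medium" effect.
print(partial_eta_squared(6.0, 94.0))  # 0.06
```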
2.
Paediatr Anaesth ; 27(10): 984-990, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28815823

ABSTRACT

2016 marked the 10-year anniversary of the Managing Emergencies in Paediatric Anaesthesia (MEPA) course. This simulation-based program was originally created to allow trainees in pediatric anesthesia to experience operating room emergencies which, although infrequent, would be considered key competencies for any practicing anesthetist responsible for providing care to children. Since its launch, the course has evolved in content, scope, and worldwide availability, and is now offered at over 60 locations on five continents. The content has been modified for different learner groups and translated into several languages. This article describes the history, evolution, and dissemination of the MEPA course, sharing lessons learnt with educators considering the launch of similar initiatives in their field.


Subject(s)
Anesthesiology/education , Computer Simulation , Curriculum , Emergency Service, Hospital , Manikins , Pediatrics/education , Child , Emergencies , Humans , Internationality , United Kingdom
3.
Paediatr Anaesth ; 23(12): 1117-23, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23800112

ABSTRACT

INTRODUCTION: The use of simulation-based assessments for high-stakes physician examinations remains controversial. The Managing Emergencies in Paediatric Anaesthesia course uses simulation to teach evidence-based management of anesthesia crises to trainee anesthetists in the United Kingdom (UK) and Canada. In this study, we investigated the feasibility and reliability of custom-designed scenario-specific performance checklists and a global rating scale (GRS) assessing readiness for independent practice.

METHODS: After research ethics board approval, subjects were video recorded managing simulated pediatric anesthesia crises in a single Canadian teaching hospital. Each subject was randomized to two of six different scenarios. All 60 scenarios were subsequently rated by four blinded raters (two in the UK, two in Canada) using the checklists and the GRS. The actual and predicted reliability of the tools was calculated for different numbers of raters using the intraclass correlation coefficient (ICC) and the Spearman-Brown prophecy formula.

RESULTS: Average-measures ICCs ranged from "substantial" to "near perfect" (P ≤ 0.001). The reliability of the checklists and the GRS was similar. Single-measures ICCs showed more variability than average-measures ICCs. At least two raters would be required to achieve acceptable reliability.

CONCLUSIONS: We have established the reliability of a GRS for assessing the management of simulated crisis scenarios in pediatric anesthesia, and this tool is feasible within the setting of a research study. The GRS allows raters to make a judgement about a participant's readiness for independent practice. These tools may be used in future research examining simulation-based assessment.


Subject(s)
Anesthesia/methods , Anesthesiology/standards , Computer Simulation , Emergency Medical Services/methods , Pediatrics/standards , Anesthesiology/education , Canada , Checklist , Child , Data Interpretation, Statistical , England , Feasibility Studies , Female , Humans , Male , Observer Variation , Reproducibility of Results
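The Spearman-Brown prophecy formula named in the methods above predicts how averaging over k raters improves reliability relative to a single rater. A minimal sketch (the ICC values here are illustrative, not the study's reported figures):

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Spearman-Brown prophecy formula: predicted reliability when the
    scores of k raters are averaged, given single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

# Illustrative single-rater ICC of 0.6: averaging two raters is
# predicted to raise reliability to 2*0.6 / (1 + 0.6) = 0.75.
print(spearman_brown(0.6, 2))  # 0.75
```

Running the formula across k = 1, 2, 3, ... shows diminishing returns from each additional rater, which is the kind of calculation behind the study's conclusion that at least two raters are needed for acceptable reliability.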