Results 1 - 3 of 3
1.
Anesth Analg; 108(1): 255-62, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19095860

ABSTRACT

BACKGROUND: Anesthesiologists and certified registered nurse anesthetists (CRNAs) must acquire the skills to recognize and manage a variety of acute intraoperative emergencies. A simulation-based assessment provides a useful and efficient means to evaluate these skills. In this study, we evaluated and compared the performance of board-certified anesthesiologists and CRNAs managing a set of simulated intraoperative emergencies.

METHODS: We enrolled 26 CRNAs and 35 board-certified anesthesiologists in a prospective, randomized, single-blinded study. These 61 specialists each managed 8 of 12 randomly selected, scripted, intraoperative simulation exercises. Participants were expected to recognize and initiate appropriate therapy for intraoperative events during a 5-min period. Two primary raters scored 488 simulation exercises (61 participants × 8 encounters).

RESULTS: Anesthesiologists achieved a modestly higher mean overall score than CRNAs (66.6% ± 11.7 [range = 41.7%-86.7%] vs 59.9% ± 10.2 [range = 38.3%-80.4%]; P < 0.01). There were no significant differences in performance between groups on individual encounters. The raters were consistent in their identification of key actions. The reliability of the eight-scenario assessment, with two raters for each scenario, was 0.80.

CONCLUSION: Although anesthesiologists, on average, achieved a modestly higher overall score, there was marked and similar variability in both groups. This wide range suggests that certification in either discipline may not yield uniform acumen in the management of simulated intraoperative emergencies. In both groups, there were practitioners who failed to diagnose and treat simulated emergencies. If this is reflective of clinical practice, it represents a patient safety concern. Simulation-based assessment provides a tool to determine the ability of practitioners to respond appropriately to clinical emergencies. If all practitioners could effectively manage these critical events, the standard of patient care and ultimately patient safety could be improved.


Subject(s)
Anesthesiology , Clinical Competence , Computer Simulation , Intraoperative Complications , Nurse Anesthetists , Patient Simulation , Task Performance and Analysis , Anesthesiology/standards , Certification , Clinical Competence/standards , Critical Care , Female , Humans , Intraoperative Care , Intraoperative Complications/diagnosis , Intraoperative Complications/therapy , Male , Nurse Anesthetists/standards , Prospective Studies , Quality of Health Care , Reproducibility of Results , Single-Blind Method , Workforce
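
Note: the reliability of 0.80 reported above is for a composite of eight scenarios, each scored by two raters. As a rough illustration only (the abstract does not say how reliability was estimated, and the classical Spearman-Brown model is an assumption here), the sketch below backs out the single-scenario reliability that figure would imply and projects how the composite would change with more or fewer scenarios:

```python
# Illustrative sketch only: Spearman-Brown projections around the reported
# eight-scenario reliability of 0.80. This is an assumption, not the
# analysis used in the study.

def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of a composite of k parallel parts, each with reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

def implied_single(r_composite: float, k: int) -> float:
    """Invert Spearman-Brown: single-part reliability implied by a k-part composite."""
    return r_composite / (k - (k - 1) * r_composite)

r8 = 0.80                      # reported reliability of the 8-scenario assessment
r1 = implied_single(r8, 8)     # roughly 0.33 per scenario under this model
print(f"implied single-scenario reliability: {r1:.2f}")

for k in (4, 8, 12, 16):       # projected composite reliability at other test lengths
    print(f"{k:2d} scenarios -> {spearman_brown(r1, k):.2f}")
```

Under these assumptions, doubling the assessment from 8 to 16 scenarios would raise the projected reliability only from 0.80 to about 0.89, consistent with the general point (also made in the second citation below) that score reliability depends heavily on how many encounters are sampled.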
2.
Anesthesiology; 99(6): 1270-80, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14639138

ABSTRACT

BACKGROUND: Medical students and residents are expected to be able to manage a variety of critical events after training, but many of these individuals have limited clinical experience in the diagnosis and treatment of these conditions. Life-sized mannequins that model critical events can be used to evaluate the skills required to manage and treat acute medical conditions. The purpose of this study was to develop and test simulation exercises and associated scoring methods that could be used to evaluate the acute care skills of final-year medical students and first-year residents.

METHODS: The authors developed and tested 10 simulated acute care situations that clinical faculty at a major medical school expect graduating physicians to be able to recognize and treat at the conclusion of training. Forty medical students and residents participated in the evaluation of the exercises. Four faculty members scored the students/residents.

RESULTS: The reliability of the simulation scores was moderate and was most strongly influenced by the choice and number of simulated encounters. The validity of the simulation scores was supported through comparisons of students'/residents' performances in relation to their clinical backgrounds and experience.

CONCLUSION: Acute care skills can be validly and reliably measured using simulation technology. However, multiple simulated encounters, covering a broad domain, are needed to effectively and accurately estimate student/resident abilities in acute care settings.


Subject(s)
Clinical Competence , Critical Care , Educational Measurement , Internship and Residency , Patient Simulation , Students, Medical , Humans
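
Note: the abstract above does not name the reliability coefficient used, only that it was moderate and driven mainly by the choice and number of encounters. One common coefficient for a set of scored encounters is Cronbach's alpha; the sketch below computes it from a small, entirely made-up examinee-by-encounter score matrix purely to show the mechanics. Neither the data nor the choice of coefficient comes from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_examinees, n_encounters) score matrix."""
    k = scores.shape[1]
    encounter_vars = scores.var(axis=0, ddof=1)      # variance of each encounter's scores
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the examinees' total scores
    return (k / (k - 1)) * (1 - encounter_vars.sum() / total_var)

# Hypothetical percentage scores for 6 examinees on 4 simulated encounters.
scores = np.array([
    [70, 65, 80, 75],
    [55, 60, 50, 58],
    [90, 85, 88, 92],
    [60, 70, 65, 62],
    [45, 50, 40, 48],
    [80, 75, 85, 78],
], dtype=float)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```

With positively correlated encounters, adding more encounters (columns) generally raises alpha for the same examinees, which mirrors the conclusion that multiple encounters covering a broad domain are needed for an accurate estimate of ability.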
3.
Med Educ; 36(9): 833-41, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12354246

ABSTRACT

PURPOSE: This investigation aimed to explore the measurement properties of scores from a patient simulator exercise.

METHODS: Analytic and holistic scores were obtained for groups of medical students and residents. Item analysis techniques were used to explore the nature of specific examinee actions. Interrater reliability was calculated. Scores were contrasted for third-year medical students, fourth-year medical students and emergency department residents.

RESULTS: Interrater reliabilities for analytic and holistic scores were 0.92 and 0.81, respectively. Based on item analysis, proper timing and sequencing of actions discriminated between low- and high-ability examinees. In general, examinees with more advanced training obtained higher scores on the simulation exercise.

CONCLUSION: Reliable and valid measures of clinical performance can be obtained from a trauma simulation, provided that care is taken in the development and scoring of the scenario.


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/standards , Patient Simulation , Analysis of Variance , Clinical Competence , Critical Care/methods , Curriculum , Humans , Pilot Projects , Reproducibility of Results
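
Note: citation 3 reports interrater reliabilities of 0.92 (analytic) and 0.81 (holistic) without naming the statistic. A Pearson correlation between two raters' scores is one minimal way such a figure is often computed; the scores below are invented solely to show the calculation and are not from the study.

```python
import numpy as np

# Hypothetical scores assigned by two raters to the same ten examinees (0-100 scale).
rater_a = np.array([72, 55, 88, 64, 47, 79, 91, 60, 68, 83], dtype=float)
rater_b = np.array([70, 58, 85, 61, 50, 82, 89, 57, 71, 80], dtype=float)

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is the
# interrater correlation.
interrater_r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"interrater correlation: {interrater_r:.2f}")
```

An intraclass correlation would additionally penalize systematic differences between raters; which coefficient the authors actually used is not stated in the abstract.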