1.
Paediatr Anaesth; 30(7): 823-832, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32335993

ABSTRACT

BACKGROUND: Simulation-based education is a mainstay in the education of pediatric anesthesiology trainees. Despite its known benefits, its use and availability vary among pediatric anesthesiology fellowship programs. AIM: The primary aim was to characterize the current state of simulation-based education among pediatric anesthesiology fellowship programs and to define the barriers that impede the development of an effective simulation program. METHODS: This survey-based, observational study of simulation activities within United States-based pediatric anesthesiology fellowship programs was approved by the Institutional Review Boards (IRBs) of the authors' institutions. A 35-question survey was developed iteratively by simulation educators (AA, WW, DY) and a statistician familiar with survey-based research (AN), using Research Electronic Data Capture (REDCap) for tool development and data collation. Descriptive and thematic analyses were performed on the quantitative and qualitative responses, respectively, and were stratified by small, medium, and large fellowship programs. RESULTS: Forty-five of 60 (75%) fellowship programs responded to the survey. The presence of a dedicated simulation program director and the number of simulation instructors were positively associated with program size and years in operation. Dedicated simulation support varied across programs and was usually present in the larger ones. Educational activities were also positively associated with program size and years in operation. Protected time was the most commonly cited barrier to a comprehensive and sustainable simulation program. There was general agreement on establishing a standardized, shared curriculum among fellowship programs. Approximately 70% of simulation programs had no formal simulation instructor training requirement. CONCLUSIONS: Simulation-based curricula are offered by many fellowship programs. Improved collaboration locally, regionally, and nationally may expand educational opportunities, particularly for smaller programs. These efforts may begin with the development of a standardized curriculum and formal instructor training programs.


Subject(s)
Anesthesiology; Simulation Training; Anesthesiology/education; Child; Curriculum; Education, Medical, Graduate; Fellowships and Scholarships; Humans; Surveys and Questionnaires; United States
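The analysis described in the abstract above (descriptive statistics stratified by program size) can be illustrated with a short, hypothetical sketch. Nothing below comes from the study itself: the column names, values, and grouping are invented solely to show what a size-stratified descriptive summary of survey responses might look like.

```python
# Illustrative sketch only: the abstract describes descriptive analyses of
# REDCap survey responses stratified by program size (small/medium/large).
# The column names and values below are hypothetical, not data from the study.
import pandas as pd

responses = pd.DataFrame({
    "program_size":     ["small", "small", "medium", "large", "large", "large"],
    "has_sim_director": [False,   True,    True,     True,    True,    True],
    "n_instructors":    [1,       2,       3,        5,       6,       4],
})

# Response rate reported in the abstract: 45 of 60 programs (75%).
print(f"Response rate: {45 / 60:.0%}")

# Descriptive statistics stratified by program size category.
summary = (responses
           .groupby("program_size")[["has_sim_director", "n_instructors"]]
           .agg(["mean", "count"]))
print(summary)
```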
3.
Anesthesiology; 115(6): 1308-15, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22037637

ABSTRACT

BACKGROUND: Assessment of pediatric anesthesia trainees is complicated by the random nature of adverse patient events and the vagaries of clinical exposure. However, assessment is critical to improving patient safety. In previous studies, a multiple-scenario assessment provided reliable and valid measures of the abilities of anesthesia residents. The purpose of this study was to develop a set of relevant simulated pediatric perioperative scenarios and to determine their effectiveness in the assessment of anesthesia residents and pediatric anesthesia fellows. METHODS: Ten simulation scenarios were designed to reflect situations encountered in perioperative pediatric anesthesia care. Anesthesiology residents and fellows consented to participate and were debriefed after each scenario. Two pediatric anesthesiologists scored each scenario using a key-action checklist. The psychometric properties (reliability, validity) of the scores were studied. RESULTS: Thirty-five anesthesiology residents and pediatric anesthesia fellows participated. Participants with greater experience administering pediatric anesthetics generally outperformed those with less experience. Score variance attributable to raters was low, yielding high interrater reliability. CONCLUSIONS: A multiple-scenario, simulation-based assessment of pediatric perioperative care was designed and administered to residents and fellows. The scores indicated that the content was relevant and that raters could score the scenarios reliably. Participants with more training achieved higher scores, but there was a wide range of ability among subjects. This method has the potential to contribute to pediatric anesthesia performance assessment, but additional validity evidence, including correlations with more direct measures of clinical performance, is needed to establish the utility of this approach.


Subject(s)
Anesthesiology/education; Clinical Competence; Computer Simulation; Pediatrics/education; Adult; Analysis of Variance; Anesthesiology/standards; Female; Humans; Internship and Residency; Male; Pediatrics/standards; Psychometrics; Reproducibility of Results
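As a rough illustration of the interrater reliability reported in the abstract above, the following sketch computes a two-way intraclass correlation, ICC(2,1), for two raters scoring the same participants. The scores are invented and the study's actual psychometric model is not detailed here; this only shows one standard way such agreement is quantified.

```python
# Illustrative sketch only: the abstract reports that two raters scored each
# scenario with a key-action checklist and that rater variance was low, giving
# high interrater reliability. This computes ICC(2,1) on made-up example scores.
import numpy as np

# rows = participants, columns = the two raters (hypothetical checklist scores)
scores = np.array([
    [14, 15],
    [10, 11],
    [18, 17],
    [12, 12],
    [16, 15],
], dtype=float)

n, k = scores.shape
grand_mean = scores.mean()
row_means = scores.mean(axis=1)   # per-participant means
col_means = scores.mean(axis=0)   # per-rater means

ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)   # participants
ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)   # raters
ss_total = np.sum((scores - grand_mean) ** 2)
ms_error = (ss_total - (n - 1) * ms_rows - (k - 1) * ms_cols) / ((n - 1) * (k - 1))

# Shrout & Fleiss two-way random-effects, single-rater ICC(2,1)
icc_2_1 = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
)
print(f"ICC(2,1) = {icc_2_1:.2f}")  # higher values indicate better rater agreement
```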
4.
Pediatrics; 128(2): 335-43, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21746717

ABSTRACT

OBJECTIVE: The goal of this study was to develop an inventory of simulated scenarios that mimic pediatric crises and to determine whether residents' scores could be used to establish the reliability and validity of a multiple-scenario assessment. The long-term objective is to provide pediatric residents with experience in the recognition, diagnosis, and management of a range of simulated acute conditions. METHODS: Twenty scenarios were constructed. Each resident participated in 10 scenarios, which were scored by 2 independent raters using an action-item checklist and a global score. Debriefing occurred after each scenario. Several analyses were performed to investigate the psychometric adequacy of the scores. RESULTS: Twenty-nine residents participated. The residents' scores in both sets of 10 scenarios were reliable with either the checklist or the global scoring method (reliability > 0.79). Generalizability analyses indicated that the major sources of variance in scores were the scenario and the scenario-resident interaction. The variance attributable to raters was low, yielding high interrater reliability values. The more-experienced residents who participated in the study outperformed the less-experienced interns. CONCLUSIONS: An inventory of critical events was designed to assess pediatric residents' diagnostic and management skills. A reliable measure of ability could be obtained, provided the residents managed multiple scenarios. The residents outscored the interns, providing evidence to support the construct validity of the scores. Additional validity evidence is needed, including studies to determine whether this type of training improves physicians' management of real-life critical events.


Subject(s)
Clinical Competence/standards; Internship and Residency/standards; Pediatrics/education; Pediatrics/standards; Problem-Based Learning/standards; Cohort Studies; Humans; Reproducibility of Results
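The generalizability analysis mentioned in the abstract above decomposes score variance into components. The sketch below estimates variance components for a simplified residents x scenarios crossed design (raters omitted) from invented global scores; it is meant only to illustrate the kind of decomposition the abstract refers to, not the study's actual model.

```python
# Illustrative sketch only: the abstract reports a generalizability analysis in
# which the largest variance components were scenario and scenario-resident
# interaction. This estimates variance components for a persons x scenarios
# crossed design with one observation per cell, using made-up global scores.
import numpy as np

# rows = residents, columns = scenarios (hypothetical 1-5 global scores)
scores = np.array([
    [2, 4, 3, 5, 2],
    [3, 5, 3, 5, 3],
    [2, 4, 2, 4, 2],
    [3, 5, 3, 5, 4],
], dtype=float)

p, s = scores.shape
grand = scores.mean()
ms_person = s * np.sum((scores.mean(axis=1) - grand) ** 2) / (p - 1)
ms_scenario = p * np.sum((scores.mean(axis=0) - grand) ** 2) / (s - 1)
ss_total = np.sum((scores - grand) ** 2)
ms_resid = (ss_total - (p - 1) * ms_person - (s - 1) * ms_scenario) / ((p - 1) * (s - 1))

# ANOVA-based variance component estimates (negative estimates clamped to zero)
var_person = max((ms_person - ms_resid) / s, 0.0)
var_scenario = max((ms_scenario - ms_resid) / p, 0.0)
var_interaction = ms_resid  # scenario-resident interaction confounded with error

total = var_person + var_scenario + var_interaction
for name, v in [("resident", var_person), ("scenario", var_scenario),
                ("interaction/error", var_interaction)]:
    print(f"{name}: {v / total:.1%} of total variance")
```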
6.
Anesth Analg; 109(2): 426-33, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19608813

ABSTRACT

BACKGROUND: Intraoperative anesthesia equipment failures are a cause of anesthetic morbidity. Our purpose in this study was 1) to design a set of simulated scenarios that measure skill in managing intraoperative equipment-related errors and 2) to evaluate the reliability and validity of the measures from this multiple-scenario assessment. METHODS: Eight intraoperative scenarios were created to test anesthesia residents' skills in managing a range of equipment-related failures. Fifty-six resident physicians, divided into four groups based on their training year (Resident 1 through Resident 4), participated in the individual simulation-based assessment of equipment-related failures. The score for each scenario was generated from a checklist of key actions relevant to that scenario and the time to complete these actions. RESULTS: The residents' scores, on average, improved with increasing level of training. The more senior residents (Resident 3 and 4) performed better than the more junior residents (Resident 1 and 2). Despite similar training backgrounds, there was a wide range of skill among the residents within each training year. The summary score on the eight-scenario assessment, whether measured by key actions or by the time required to manage the events, yielded a reliable estimate of a resident's skill in managing these simulated equipment failures. DISCUSSION: Anesthesia residents' performance could be reliably evaluated using a set of simulated intraoperative equipment problems. This multiple-scenario assessment was an effective method to evaluate individual performance. The summary results, by training year, could be used to gauge how well current instructional methods support skill acquisition.


Subject(s)
Anesthesiology/education; Anesthesiology/instrumentation; Clinical Competence/standards; Equipment Failure; Internship and Residency; Analysis of Variance; Computer Simulation; Humans; Intraoperative Period; Principal Component Analysis; Reproducibility of Results
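The scoring approach described in the abstract above (a checklist of key actions plus the time to complete them, summarized across eight scenarios) could be expressed along the following lines. The weighting, normalization, and numbers are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: the abstract describes scoring each scenario by a
# checklist of key actions plus time to complete them, then summarizing across
# eight scenarios. The weights and normalization below are hypothetical.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    actions_completed: int   # key actions performed
    actions_total: int       # key actions on the checklist
    time_taken_s: float      # time to manage the event
    time_limit_s: float      # maximum time allotted for the scenario

def scenario_score(r: ScenarioResult) -> float:
    """Return a 0-1 score: fraction of key actions, discounted by elapsed time."""
    action_frac = r.actions_completed / r.actions_total
    time_frac = max(0.0, 1.0 - r.time_taken_s / r.time_limit_s)
    return 0.8 * action_frac + 0.2 * time_frac  # hypothetical weights

# Hypothetical results for three of a resident's scenarios
results = [
    ScenarioResult(7, 8, 240, 300),
    ScenarioResult(5, 6, 180, 300),
    ScenarioResult(8, 10, 290, 300),
]
summary = sum(scenario_score(r) for r in results) / len(results)
print(f"Summary score across scenarios: {summary:.2f}")
```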