1.
Adv Simul (Lond); 8(1): 9, 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36918946

ABSTRACT

BACKGROUND: Debriefing is crucial for enhancing learning following healthcare simulation. Various validated tools have been shown to have contextual value for assessing debriefers. The Debriefing Assessment in Real Time (DART) tool may offer an alternative or additional assessment of conversational dynamics during debriefings.

METHODS: This multi-method international study investigated reliability and validity. Enrolled raters (n = 12) were active simulation educators. Following tool training, the raters scored a mixed sample of debriefings. Descriptive statistics were recorded, with coefficient of variation (CV%) and Cronbach's α used to estimate reliability. Raters returned a detailed reflective survey following their contribution. Kane's framework was used to construct validity arguments.

RESULTS: The 8 debriefings (µ = 15.4 min (SD 2.7)) included 45 interdisciplinary learners at various levels of training. Reliability (mean CV%) for key components was as follows: instructor questions µ = 14.7%, instructor statements µ = 34.1%, and trainee responses µ = 29.0%. Cronbach's α ranged from 0.852 to 0.978 across the debriefings. Post-experience responses suggested that the DART can highlight suboptimal practices, including unqualified lecturing by debriefers.

CONCLUSION: The DART demonstrated acceptable reliability and may have a limited role in the assessment of healthcare simulation debriefing. The inherent complexity and emergent properties of debriefing practice should be accounted for when using this tool.
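As a quick illustration of the two reliability measures used in this abstract, here is a minimal Python sketch (the counts and function names are hypothetical illustrations, not the study's analysis code):

```python
import numpy as np

def cv_percent(scores):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    scores = np.asarray(scores, dtype=float)
    return scores.std(ddof=1) / scores.mean() * 100.0

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (observations x items) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed scores
    k = ratings.shape[1]
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical counts of instructor questions from 12 raters for one debriefing
iq_counts = [14, 12, 16, 15, 11, 13, 14, 17, 12, 15, 13, 14]
print(f"CV% = {cv_percent(iq_counts):.1f}%")
```

A lower CV% indicates that raters' counts cluster tightly around the mean, which is how the abstract's per-component reliability figures should be read.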

2.
BMC Med Educ; 22(1): 636, 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35989331

ABSTRACT

BACKGROUND: Various rating tools aim to assess simulation debriefing quality, but their use may be limited by complexity and subjectivity. The Debriefing Assessment in Real Time (DART) tool represents an alternative debriefing aid that uses quantitative measures to estimate quality and requires minimal training to use. The DART uses a cumulative tally of instructor questions (IQ), instructor statements (IS) and trainee responses (TR). Ratios for IQ:IS and TR:[IQ + IS] may estimate the level of debriefer inclusivity and participant engagement.

METHODS: Experienced faculty from four geographically disparate university-affiliated simulation centers rated video-based debriefings and a transcript using the DART. The primary endpoint was an assessment of the estimated reliability of the tool. The small sample size confined analysis to descriptive statistics and coefficients of variation (CV%) as an estimate of reliability.

RESULTS: Ratings for Video A (n = 7), Video B (n = 6), and Transcript A (n = 6) demonstrated mean CV% for IQ (27.8%), IS (39.5%), TR (34.8%), IQ:IS (40.8%), and TR:[IQ + IS] (28.0%). The higher CV% observed in IS and TR may be attributable to raters characterizing longer contributions as either lumped or split. The lower variances in IQ and TR:[IQ + IS] suggest overall consistency regardless of whether scores were lumped or split.

CONCLUSION: The DART tool appears to be reliable for recording data that may be useful for informing feedback to debriefers. Future studies should assess reliability in a wider pool of debriefings and examine potential uses in faculty development.


Subject(s)
Clinical Competence; Simulation Training; Computer Simulation; Delivery of Health Care; Humans; Pilot Projects; Reproducibility of Results
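The DART arithmetic summarised in the abstract above reduces to simple tallies and ratios; the sketch below (the class name `DartTally` and the counts are hypothetical, not part of the published tool) shows the computation:

```python
from dataclasses import dataclass

@dataclass
class DartTally:
    """Cumulative DART counts for one debriefing (illustrative only)."""
    iq: int = 0   # instructor questions
    is_: int = 0  # instructor statements ('is' is a Python keyword)
    tr: int = 0   # trainee responses

    def iq_is_ratio(self) -> float:
        """IQ:IS -- higher values suggest a more question-led, inclusive debrief."""
        return self.iq / self.is_

    def engagement_ratio(self) -> float:
        """TR:[IQ + IS] -- trainee responses per instructor contribution."""
        return self.tr / (self.iq + self.is_)

# Hypothetical tallies from one 15-minute debriefing
tally = DartTally(iq=18, is_=12, tr=25)
print(f"IQ:IS = {tally.iq_is_ratio():.2f}, TR:[IQ+IS] = {tally.engagement_ratio():.2f}")
```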
3.
Australas Psychiatry; 28(3): 354-358, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32093504

ABSTRACT

OBJECTIVE: We explored the feasibility of developing, running and evaluating a simulation-based medical education (SBME) workshop to improve the knowledge, skills and attitudes of emergency department (ED) doctors when called on to assess patients in psychiatric crisis.

METHOD: We designed a four-hour workshop incorporating SBME and a blend of pre-reading, short didactic elements and multiple-choice questions (MCQs). Emergency department nurses (operating as SBME faculty) used prepared scripts to portray patients presenting in psychiatric crisis. They were interviewed in front of, and by, ED doctors. We collected structured course evaluations, Debriefing Assessment for Simulation in Healthcare (DASH) scores, and pre- and post-course MCQs.

RESULTS: The pilot workshop was delivered to 12 ED registrars using only the existing resources of the Psychiatry and Emergency Departments. Participants highly valued both the 'level of appropriateness' (Likert rating µ = 4.8/5.0) and the 'overall usefulness' (µ = 4.7/5.0) of the programme. They reported an improved understanding of the mental state and of relevant legal issues, and rated the debriefings highly (participant DASH ratings: n = 193; score µ = 6.3/7.0). Median MCQ scores improved non-significantly from pre- to post-course (7.5/12 vs 10/12; p = 0.261).

CONCLUSION: An SBME workshop with these aims could be delivered and evaluated using the existing resources of the Psychiatry and Emergency Departments.


Subject(s)
Clinical Competence; Computer Simulation; Education, Medical/methods; Education/methods; Emergency Services, Psychiatric/methods; Physicians; Emergency Service, Hospital; Female; Humans; Male; Pilot Projects; Program Evaluation
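The abstract above reports the pre-/post-course MCQ comparison without naming the statistical test; a paired nonparametric test such as the Wilcoxon signed-rank test would be one plausible choice for small-sample ordinal scores. A minimal sketch with invented scores (these will not reproduce the published p = 0.261):

```python
from scipy.stats import wilcoxon

# Hypothetical paired pre-/post-course MCQ scores (out of 12) for 12 registrars;
# the study itself reported medians of 7.5 vs 10 with p = 0.261.
pre  = [7, 9, 6, 8, 10, 7, 8, 9, 7, 6, 8, 9]
post = [8, 8, 9, 10, 9, 10, 7, 11, 9, 8, 7, 10]

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
```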
4.
Australas Emerg Care; 21(3): 81-86, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30998882

ABSTRACT

INTRODUCTION: International guidelines recommend that interruptions to chest compressions are minimised during defibrillation. As a result, some resuscitation educators have adopted a more structured approach to defibrillation. One such approach is the 'C.O.A.C.H.E.D.' cognitive aid (Continue compressions, Oxygen away, All others away, Charging, Hands off, Evaluate, Defibrillate or Disarm). To date, there are no studies assessing the use of this cognitive aid.

METHODS: This study used an in situ simulated model of cardiac arrest in an Emergency Department. The defibrillator was a proprietary R-Series (Zoll, PA, USA) connected to a CS1201 rhythm generator (Symbio, Beaverton, OR, USA). The study cohorts were interdisciplinary advanced life support (ALS) providers. Paired providers were enrolled in a 6-month mechanical CPR (M-CPR) training programme with no feedback related to defibrillation performance, during which serial defibrillation performance was assessed. The outcome measures were the length of the 'peri-shock' pause and the 'safety' of defibrillation practice. Comparative statistical analysis using the Mann-Whitney U-test was performed between groups of providers with 'correct or near-correct' and 'entirely incorrect or absent' use of the cognitive aid.

RESULTS: The C.O.A.C.H.E.D. cognitive aid was applied correctly in 92 of 109 defibrillations. Providers with correct cognitive aid use had a median peri-shock pause of 6.0 s (IQR 5.0-7.0). Providers with 'entirely incorrect or absent' cognitive aid use had a median peri-shock pause of 8.0 s (IQR 6.6-10.0) (p ≤ 0.001). No unsafe defibrillation practices were observed.

CONCLUSION: In this observational study of defibrillation performance, use of the C.O.A.C.H.E.D. cognitive aid was associated with a significant decrease in the length of the peri-shock pause. We therefore conclude that the use of a cognitive aid is appropriate for teaching and performing defibrillation.


Subject(s)
Decision Support Techniques; Electric Countershock/methods; Electric Countershock/standards; Cardiopulmonary Resuscitation/methods; Cardiopulmonary Resuscitation/standards; Emergency Service, Hospital/organization & administration; Guidelines as Topic; Humans; Prospective Studies; Teaching/standards; Teaching/trends; Western Australia
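The abstract above names the Mann-Whitney U-test for comparing peri-shock pause times between the two groups; the sketch below runs that comparison in outline on invented pause values (not the study's data), which merely echo the reported medians of 6.0 s and 8.0 s:

```python
from scipy.stats import mannwhitneyu

# Hypothetical peri-shock pause times in seconds for each group
correct_use   = [5.0, 6.0, 6.5, 5.5, 7.0, 6.0, 5.0, 6.5, 7.0, 6.0]  # correct/near-correct aid use
incorrect_use = [7.0, 8.0, 9.5, 8.5, 10.0, 6.5, 8.0, 9.0]           # incorrect/absent aid use

u, p = mannwhitneyu(correct_use, incorrect_use, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
```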