Results 1 - 8 of 8
1.
Diagnosis (Berl); 9(4): 476-484, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36073963

ABSTRACT

OBJECTIVES: Idiosyncratic approaches to reasoning among teachers and a shortage of reliable workplace-based assessment and feedback methods make teaching diagnostic reasoning challenging. The Assessment of Reasoning Tool (ART) was developed to fill this gap, but its utility and feasibility in providing feedback to residents have not been studied. We evaluated how the ART was used to assess, teach, and guide feedback on diagnostic reasoning for pediatric interns.

METHODS: We used an integrated mixed-methods approach to evaluate how the ART facilitates the feedback process between clinical teachers and learners. We collected data from surveys of pediatric interns and interviews of hospital medicine faculty at Baylor College of Medicine from 2019 to 2020. Interns completed the survey each time they received ART-guided feedback from their attending. Preliminary intern survey results informed the faculty interview questions. We integrated descriptive statistics from the survey with a thematic analysis of the transcribed interviews.

RESULTS: Survey data (52 responses from 38 interns) and transcribed interviews (10 faculty) were analyzed. The ART framework provided a shared mental model that facilitated the feedback conversation. ART-guided feedback was highly rated for structure, content, and clarity in goal-setting while enabling new learning opportunities. Barriers to using the ART included limited time and inter-faculty variability in its use.

CONCLUSIONS: The ART facilitated effective and feasible faculty feedback to interns on their diagnostic reasoning skills.


Subject(s)
Internship and Residency; Humans; Child; Feedback; Clinical Competence; Communication; Learning
2.
Med Teach; 43(2): 168-173, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33073665

ABSTRACT

BACKGROUND: Assessing learners' competence in diagnostic reasoning is challenging and unstandardized in medical education. We developed a theory-informed, behaviorally anchored rubric, the Assessment of Reasoning Tool (ART), with content and response-process validity. This study gathered evidence to support the internal structure of the tool and the interpretation of measurements derived from it.

METHODS: We derived a reconstructed version of the ART (ART-R) as a 15-item, 5-point Likert scale using the ART domains and descriptors, and performed a psychometric evaluation. We created 18 video variations of learner oral presentations portraying different performance levels on the ART-R.

RESULTS: In total, 152 faculty viewed two videos each and rated the learner globally and then with the ART-R. Confirmatory factor analysis showed favorable fit: comparative fit index = 0.99, root mean square error of approximation = 0.097, and standardized root mean square residual = 0.026. The five domains (hypothesis-directed information gathering, problem representation, prioritized differential diagnosis, diagnostic evaluation, and awareness of cognitive tendencies/emotional factors) had high internal consistency. The total score for each domain was positively associated with the global assessment of diagnostic reasoning.

CONCLUSIONS: Our findings provide validity evidence for the ART-R as an assessment tool with five theoretical domains, internal consistency, and association with global assessment.
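
Internal consistency of the kind reported for each ART-R domain is typically quantified with a statistic such as Cronbach's alpha. The abstract does not name the exact coefficient used, so the following is a hedged sketch of one plausible computation; the raters-by-items score matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_raters, n_items) matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical data: 6 faculty raters x 3 items for one ART-R domain (1-5 Likert).
domain_scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
    [3, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(domain_scores):.2f}")  # ~0.91 here
```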


Subject(s)
Education, Medical; Problem Solving; Diagnosis, Differential; Factor Analysis, Statistical; Humans; Psychometrics; Reproducibility of Results
3.
MedEdPORTAL; 16: 10938, 2020 Aug 21.
Article in English | MEDLINE | ID: mdl-32875089

ABSTRACT

Introduction: There is a need for a standardized approach to understanding and assessing clinical reasoning in medical learners. The Assessment of Reasoning Tool was developed from prevalent theories and frameworks by a multidisciplinary expert panel. Because the tool provides a standardized rubric for assessing clinical reasoning, we designed an interactive train-the-trainer workshop for clinical educators and education leaders interested in improving their teaching skills and/or introducing curricula on diagnostic reasoning.

Methods: In this workshop, participants were introduced to the major domains of diagnostic reasoning and to applying the tool in assessing a learner's skills. Kolb's experiential learning was the underlying model, which we enacted through multiple interactive techniques, including small-group discussion, peer sharing, and case practice. We presented the workshop at a national conference of pediatric educators and as a faculty development workshop at a single institution. Participants were asked to complete a post-workshop survey to gauge their reactions and identify areas for improvement.

Results: A total of 34 participants attended the two workshops. Participants rated the workshop favorably, with most planning to make a change to their practice. Comments were largely positive, emphasizing the benefits of the interactive approach.

Discussion: The workshop and teaching materials represent an important early step in the workplace-based assessment of diagnostic reasoning in medical learners. Grounded in the clinical reasoning literature, the workshop offers one approach to assessing these skills in learners with or without direct observation of clinical skills.


Subject(s)
Curriculum; Learning; Child; Clinical Competence; Faculty; Humans; Problem-Based Learning
4.
Pediatr Crit Care Med; 21(8): 760-766, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32168295

ABSTRACT

OBJECTIVES: Fluid overload is common in the PICU and has been associated with increased morbidity and mortality. It remains unclear whether fluid overload is a surrogate marker for severity of illness and need for increased support, an iatrogenic modifiable risk factor, or a sign of oliguria. The proportions of the various fluid intakes contributing to fluid overload, and clinicians' recognition of it, have not been adequately examined. We aimed to: 1) describe the types and amounts of fluid exposure in the PICU and 2) assess clinicians' recognition of fluid overload.

SETTING: Noncardiac PICU in a quaternary care hospital.

PATIENTS: Pediatric patients admitted for more than 24 hours.

DESIGN: Prospective observational study over 28 days.

INTERVENTIONS: Data were collected on the amount and type of fluid exposure (resuscitative boluses, blood products, enteral intake, total parenteral nutrition [TPN], or modifiable fluids [IV fluids and medications]) indexed to the patient's admission body surface area on days 1 and 3. Charts of patients admitted for 3 days who developed more than 15% fluid overload were reviewed to assess clinicians' recognition of fluid overload.

MEASUREMENTS AND MAIN RESULTS: One hundred two patients were included. Day 1 median fluid exposure was 2,318 mL/m² (1,831-3,037 mL/m²), of which 1,646 mL/m² (1,296-2,086 mL/m²) was modifiable fluids. Forty-seven patients (46%) received fluid boluses, and 16 (16%) received blood products. Day 3 median fluid exposure was 2,233 mL/m² (1,904-2,556 mL/m²), of which 750 mL/m² (375-1,816 mL/m²) was modifiable fluids. Of the 54 patients still admitted on day 3, one (1.9%) received a fluid bolus and two (3.7%) received blood products; 47 of 54 (87%) had fluid exposure greater than 1,600 mL/m² on day 3. Fluid overload was not recognized by clinicians in 30% of the patients who developed more than 15% fluid overload.

CONCLUSIONS: Although resuscitation fluids contributed more to fluid exposure on day 1 than on day 3, fluid exposure frequently exceeded maintenance requirements on day 3. Fluid overload was not always recognized by PICU practitioners. Further studies are needed to correlate modifiable fluid exposure with fluid overload and to explore practice improvement opportunities.
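
Two quantities drive this study: fluid exposure indexed to body surface area (mL/m²) and percent fluid overload. The abstract does not spell out its calculations, so the sketch below is an assumption based on the widely used weight-based fluid-overload definition ((intake - output in liters) / admission weight in kg x 100); the example patient values are invented.

```python
def indexed_exposure(volume_ml: float, bsa_m2: float) -> float:
    """Fluid exposure indexed to body surface area (mL/m^2)."""
    return volume_ml / bsa_m2

def percent_fluid_overload(total_in_ml: float, total_out_ml: float,
                           admission_weight_kg: float) -> float:
    """Percent fluid overload, common pediatric definition (assumption here):
    (fluid in - fluid out, converted to liters) / admission weight (kg) * 100."""
    return (total_in_ml - total_out_ml) / 1000.0 / admission_weight_kg * 100.0

# Hypothetical patient: 10 kg infant with a body surface area of 0.47 m^2.
print(indexed_exposure(1100.0, 0.47))                 # ~2340 mL/m^2, near the day-1 median
print(percent_fluid_overload(3200.0, 1600.0, 10.0))   # 16% -> exceeds the 15% threshold
```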


Subject(s)
Critical Illness; Water-Electrolyte Imbalance; Child; Fluid Therapy; Humans; Infant; Intensive Care Units, Pediatric; Prospective Studies; Water-Electrolyte Imbalance/diagnosis; Water-Electrolyte Imbalance/etiology
5.
Med Educ Online; 24(1): 1679945, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31640483

ABSTRACT

Background: Ensuring that learners acquire diagnostic competence in a timely fashion is critical to providing high-quality, safe patient care. Resident trainees typically gain experience through repeated clinical encounters and feedback from supervising faculty. By critically engaging with the diagnostic process, learners encapsulate medical knowledge into discrete memories that can be recalled and refined in subsequent clinical encounters. With increasing medical complexity and current duty-hour limitations, opportunities for successful practice in the clinical arena have become limited. Novel educational methods are needed to bridge the gap from novice to expert diagnostician more efficiently.

Objective: Using a conceptual framework that incorporates deliberate practice, script theory, and learning curves, we developed an educational module prototype to coach novice learners to formulate organized knowledge (i.e., a repertoire of illness scripts) in an accelerated fashion, thereby simulating ideal experiential learning in a clinical rotation.

Design: We developed the Diagnostic Expertise Acceleration Module (DEAM), a web-based module for learning illness scripts of diseases causing pediatric respiratory distress. For each case, the learner selects a diagnosis, receives structured feedback, and then creates an illness script, with a subsequent expert script for comparison.

Results: We validated the DEAM with seven experts, seven experienced learners, and five novice learners. The module data generated meaningful learning curves of diagnostic accuracy. Case performance analysis and self-reported feedback demonstrated that the module improved learners' ability to diagnose respiratory distress and create high-quality illness scripts.

Conclusions: The DEAM allowed novice learners to engage in deliberate practice diagnosing clinical problems without a clinical encounter. The module generated learning curves for visually assessing progress toward expertise. Learners acquired organized knowledge by formulating a comprehensive set of illness scripts.
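
A learning curve of diagnostic accuracy can be derived by smoothing a learner's case-by-case correct/incorrect outcomes over a sliding window. The DEAM's actual computation is not described in the abstract, so this is a hypothetical sketch with an invented outcome sequence.

```python
import numpy as np

def learning_curve(correct: list[int], window: int = 5) -> np.ndarray:
    """Moving-average diagnostic accuracy over sequential cases,
    the kind of curve a module like the DEAM could plot."""
    outcomes = np.asarray(correct, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(outcomes, kernel, mode="valid")

# Hypothetical case-by-case outcomes (1 = correct diagnosis) for one novice learner.
outcomes = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
print(learning_curve(outcomes))  # accuracy trend rising toward expertise
```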


Subject(s)
Computer-Assisted Instruction/methods; Internship and Residency/methods; Knowledge; Learning; Models, Educational; Clinical Competence; Humans
6.
Int J Qual Health Care; 31(8): G97-G102, 2019 Oct 31.
Article in English | MEDLINE | ID: mdl-31665303

ABSTRACT

OBJECTIVE: To investigate the effects of a cognitive intervention based on isolation of red flags (I-RED) on diagnostic accuracy for 'do-not-miss diagnoses.'

DESIGN: A 2 × 2 randomized case vignette-based experiment manipulating the I-RED strategy between subjects and case complexity within subjects.

SETTING: Two university-based residency programs.

PARTICIPANTS: One hundred nine pediatric residents from all levels of training.

INTERVENTIONS: Participants were randomly assigned to the I-RED vs. control group, and within each group they were further randomized to the order in which they saw simple and complex cases. The I-RED strategy involved an instruction to look for a constellation of symptoms, signs, clinical data, or circumstances that should heighten suspicion for a serious condition.

MAIN OUTCOME MEASURES: The primary outcome was diagnostic accuracy, scored as 1 if any of the three differentials given by participants included the correct diagnosis and 0 if not. We analyzed the effects of the I-RED strategy on diagnostic accuracy using logistic regression.

RESULTS: The I-RED strategy did not yield statistically higher diagnostic accuracy than controls (62% vs. 48%, respectively; odds ratio = 2.07 [95% confidence interval, 0.78-5.5], P = 0.14), although participants reported higher decision confidence than controls (7.00 vs. 5.77 on a scale of 1 to 10, P < 0.02) in simple but not complex cases. The I-RED strategy significantly shortened time to decision (460 vs. 657 s, P < 0.001) and increased the number of red flags generated (3.04 vs. 2.09, P < 0.001).

CONCLUSIONS: A cognitive strategy of prompting red-flag isolation prior to differential diagnosis did not improve diagnostic accuracy for 'do-not-miss diagnoses.' Given the paucity of evidence-based solutions to reduce diagnostic error and the intervention's potential effect on confidence, the findings warrant additional exploration.
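
The effect size here is reported as an odds ratio with a 95% confidence interval. The study's OR of 2.07 came from a logistic regression, not a raw 2x2 table, so the sketch below is only a simplified, hedged illustration of the crude calculation; the counts are invented to roughly match the 62% vs. 48% accuracy figures and do not reproduce the adjusted estimate.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = intervention correct, b = intervention incorrect,
    c = control correct,      d = control incorrect."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: 55 I-RED residents (34 correct), 54 controls (26 correct).
print(odds_ratio_ci(34, 21, 26, 28))
```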


Subject(s)
Decision Making; Diagnostic Errors/prevention & control; Internship and Residency; Clinical Competence; Cognition; Diagnosis, Differential; Guidelines as Topic; Humans; Pediatrics/education; Pediatrics/methods; Random Allocation
7.
Diagnosis (Berl); 5(4): 197-203, 2018 Nov 27.
Article in English | MEDLINE | ID: mdl-30407911

ABSTRACT

Background: Excellence in clinical reasoning is one of the most important outcomes of medical education programs, but assessing learners' reasoning to inform corrective feedback is challenging and unstandardized.

Methods: The Society to Improve Diagnosis in Medicine formed a multi-specialty team of medical educators to develop the Assessment of Reasoning Tool (ART). This paper describes the tool development process. The tool was designed to facilitate clinical teachers' assessment of learners' oral presentations for competence in clinical reasoning and to facilitate formative feedback. Reasoning frameworks (e.g. script theory), contemporary practice goals (e.g. high-value care [HVC]), and proposed error-reduction strategies (e.g. metacognition) guided the development of the tool.

Results: The ART is a behaviorally anchored, three-point scale assessing five domains of reasoning: (1) hypothesis-directed data gathering, (2) articulation of a problem representation, (3) formulation of a prioritized differential diagnosis, (4) diagnostic testing aligned with HVC principles, and (5) metacognition. Instructional videos were created for faculty development in each domain, guided by principles of multimedia learning.

Conclusions: The ART is a theory-informed assessment tool that allows teachers to assess clinical reasoning and structure feedback conversations.
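
As a data structure, the ART reduces to five domains, each rated on a behaviorally anchored three-point scale. The following is a minimal sketch of such a record, not the published tool: the class, method names, and the 1-3 score semantics are hypothetical, and the ART defines its own behavioral anchors.

```python
from dataclasses import dataclass, field

# The five ART reasoning domains, paraphrased from the abstract.
ART_DOMAINS = (
    "hypothesis-directed data gathering",
    "problem representation",
    "prioritized differential diagnosis",
    "high-value diagnostic testing",
    "metacognition",
)

@dataclass
class ARTAssessment:
    """One learner assessment: each domain scored 1-3 on a behaviorally
    anchored scale (the numeric anchors here are placeholders)."""
    learner: str
    scores: dict[str, int] = field(default_factory=dict)

    def rate(self, domain: str, score: int) -> None:
        if domain not in ART_DOMAINS or score not in (1, 2, 3):
            raise ValueError("unknown domain or out-of-range score")
        self.scores[domain] = score

    def unrated_domains(self) -> list[str]:
        """Domains still awaiting a rating, e.g. to prompt the assessor."""
        return [d for d in ART_DOMAINS if d not in self.scores]

# Usage: rate two domains during an oral presentation, see what remains.
assessment = ARTAssessment("intern-01")
assessment.rate("problem representation", 2)
assessment.rate("prioritized differential diagnosis", 3)
print(assessment.unrated_domains())
```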


Subject(s)
Clinical Decision-Making; Decision Making; Diagnostic Errors/prevention & control; Education, Medical/methods; Educational Measurement/methods; Faculty, Medical; Students, Medical; Clinical Competence; Cognition; Diagnosis, Differential; Feedback; Humans; Learning; Quality of Health Care; Societies; Staff Development; Teaching
8.
Pediatr Crit Care Med; 18(3): 265-271, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28125548

ABSTRACT

OBJECTIVES: To determine whether the Safer Dx Instrument, a structured tool for finding diagnostic errors in primary care, can be used to reliably detect diagnostic errors in patients admitted to a PICU.

DESIGN AND SETTING: The Safer Dx Instrument consists of 11 questions evaluating the diagnostic process and a final question determining whether a diagnostic error occurred. We used the instrument to analyze four "high-risk" patient cohorts admitted to the PICU between June 2013 and December 2013.

PATIENTS: High-risk cohorts were defined as follows. Cohort 1: patients who were autopsied; cohort 2: patients seen as outpatients within 2 weeks prior to PICU admission; cohort 3: patients transferred to the PICU unexpectedly from an acute care floor after a rapid response, requiring vasoactive medications and/or endotracheal intubation due to decompensation within 24 hours; and cohort 4: patients transferred to the PICU unexpectedly from an acute care floor after a rapid response without subsequent decompensation within 24 hours.

INTERVENTIONS: Two clinicians used the instrument to independently review records in each cohort for diagnostic errors, defined as missed opportunities to make a correct or timely diagnosis. Errors were confirmed by senior expert clinicians.

MEASUREMENTS AND MAIN RESULTS: Diagnostic errors were present in 26 of 214 high-risk patient records (12.1%; 95% CI, 8.2-17.5%), distributed as follows: cohort 1: two of 16 (12.5%); cohort 2: one of 41 (2.4%); cohort 3: 13 of 44 (29.5%); and cohort 4: 10 of 113 (8.8%). Overall initial reviewer agreement was 93.6% (κ = 0.72). Infections and neurologic conditions were the most commonly missed diagnoses across all high-risk cohorts (16/26).

CONCLUSIONS: The Safer Dx Instrument showed high reliability and validity for diagnostic error detection when used in high-risk pediatric care settings. With further validation in additional clinical settings, it could be useful for enhancing learning and feedback about diagnostic safety in children.
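
The inter-rater statistics here pair raw percent agreement (93.6%) with Cohen's kappa (κ = 0.72), which corrects agreement for chance. A minimal sketch of that computation for two reviewers' binary error/no-error judgments follows; the 20-record example data are invented and happen to land near the reported kappa.

```python
def cohens_kappa(rater1: list[int], rater2: list[int]) -> float:
    """Cohen's kappa for two reviewers' binary judgments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1 = sum(rater1) / n  # reviewer 1's rate of calling "error"
    p2 = sum(rater2) / n  # reviewer 2's rate of calling "error"
    chance = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - chance) / (1 - chance)

# Hypothetical judgments on 20 records (1 = diagnostic error present).
rev1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
rev2 = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(rev1, rev2):.2f}")  # 0.73 with 90% raw agreement
```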


Subject(s)
Critical Care/standards; Diagnostic Errors/statistics & numerical data; Intensive Care Units, Pediatric/standards; Quality Assurance, Health Care/methods; Adolescent; Child; Child, Preschool; Critical Care/statistics & numerical data; Diagnostic Errors/prevention & control; Electronic Health Records; Female; Humans; Infant; Infant, Newborn; Intensive Care Units, Pediatric/statistics & numerical data; Male; Outcome and Process Assessment, Health Care/methods; Quality Improvement; Reproducibility of Results; Risk Assessment; Sensitivity and Specificity