Results 1 - 20 of 34
1.
Cureus ; 16(4): e57439, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699123

ABSTRACT

BACKGROUND: As of 2014, the Accreditation Council for Graduate Medical Education (ACGME) mandates initiating a Program Evaluation Committee (PEC) to guide ongoing program improvement. However, little guidance and few published reports exist about how individual PECs have undertaken this mandate. OBJECTIVE: To explore how four primary care residency PECs configure their committees, review program goals, and undertake program evaluation and improvement. METHODS: We conducted a multiple case study between December 2022 and April 2023 of four purposively selected primary care residencies (e.g., family medicine, pediatrics, internal medicine). Data sources included semi-structured interviews with four PEC members per program and diverse program artifacts. Using a constructivist approach, we utilized qualitative coding to analyze participant interviews and content analysis for program artifacts. We then used coded transcripts and artifacts to construct logic models for each program guided by a systems theory lens. RESULTS: Programs adapt their PEC structure, execution, and outcomes to meet short- and long-term needs based on organizational and program-unique factors such as size and local practices. They relied on multiple data sources and sought diverse stakeholder participation to complete program evaluation and improvement. Identified deficiencies were often categorized as internal versus external to delineate PEC responsibility, boundaries, and feasibility of interventions. CONCLUSION: The broad guidance provided by the ACGME for PEC configuration allows programs to adapt the committee based on individual needs. However, further instruction on program evaluation and organizational change principles would augment existing PEC efforts.

2.
J Phys Ther Educ ; 38(2): 125-132, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38625694

ABSTRACT

BACKGROUND AND PURPOSE: With the growing interest among physical therapists in incorporating musculoskeletal (MSK) ultrasound comes a need to understand how to organize training to promote the transfer of training to clinical practice. A common training strategy blends asynchronous learning through online modules and virtual simulations with synchronous practice on live simulated participants. However, few physical therapists who attend MSK ultrasound continuing education courses integrate ultrasound into clinical practice. Self-efficacy is a significant predictor of training transfer effectiveness. This study describes to what degree and how a blended learning strategy influenced participants' self-efficacy for MSK ultrasound and transfer of training to clinical practice. SUBJECTS: Twenty-one outpatient physical therapists with no previous MSK ultrasound training. METHODS: Twenty-one participants assessed their self-efficacy using a 26-item self-efficacy questionnaire at 3 intervals: before asynchronous training, before synchronous training, and before returning to clinical practice. Participants were interviewed within 1 week of training using a semi-structured interview guide. Quantitative analysis included descriptive statistics and repeated-measures ANOVA. Thematic analysis was used to examine participants' experiences, and "following the thread" was used to integrate findings. RESULTS: Self-efficacy questionnaire mean scores increased significantly across the 3 time points (F[2, 40] = 172.7, P < .001, η² = 0.896). Thematic analysis indicated that asynchronous activities scaffolded participants' knowledge, enhanced their self-efficacy, and prepared them for synchronous learning; however, it did not replicate the challenges of MSK ultrasound. Synchronous activities further improved self-efficacy and helped participants better calibrate their self-judgments of their abilities and readiness to integrate MSK ultrasound training into clinical practice.
Despite individual-level improvements in self-efficacy, interviewees recognized their limitations and a need for longitudinal training in a clinical environment. DISCUSSION AND CONCLUSION: A blended learning approach positively affects participants' self-efficacy for MSK ultrasound; however, future training designs should provide learners with additional support during the transition phase.


Subject(s)
Physical Therapists , Self Efficacy , Ultrasonography , Humans , Male , Female , Ultrasonography/methods , Physical Therapists/education , Adult , Surveys and Questionnaires , Clinical Competence , Middle Aged
3.
Clin Teach ; 20(6): e13611, 2023 12.
Article in English | MEDLINE | ID: mdl-37646343

ABSTRACT

BACKGROUND: Accessible and efficient opportunities for health professional faculty to hone feedback skills are limited. In addition, feedback models to apply to the objective structured clinical examination (OSCE) setting are lacking. APPROACH: Annually, paediatric interns from Children's National Hospital and Walter Reed National Military Medical Center participate in an OSCE, which includes faculty observation and immediate feedback to trainees. In 2018, we incorporated the subjective, objective, assessment, plan (SOAP) Feedback Training Program during 20 min of the pre-OSCE faculty orientation. The SOAP Feedback Training Program introduced the SOAP feedback model (subjective, objective, assessment, plan), facilitated practice in pairs and distributed a cognitive aid referencing the model. We evaluated the quality of faculty feedback exchanges during the 2018 OSCE via retrospective video review using the Direct Observation of Clinical Skills Feedback Scale (DOCS-FBS). We compared the results to the 2015 initial evaluation and used focus groups to understand how and why faculty feedback changed. EVALUATION: Comparison of the initial evaluation to the post-SOAP Feedback Training Program intervention data using a Wilcoxon signed rank test showed statistically significant improvement in six of eight feedback items on the DOCS-FBS. Causal coding of focus group transcripts revealed that the SOAP Feedback Training Program evoked affective responses, reinforced prior practice in feedback delivery, improved feedback organisation and increased feedback delivery preparation. IMPLICATIONS: The SOAP Feedback Training Program is an effective intervention to teach the SOAP feedback model and improve faculty feedback quality in an OSCE setting. It is efficient and low resource, facilitating its potential use in settings beyond the OSCE.


Subject(s)
Clinical Competence , Educational Measurement , Humans , Child , Feedback , Retrospective Studies , Program Development , Faculty, Nursing
4.
J Perinat Neonatal Nurs ; 37(2): 92-95, 2023.
Article in English | MEDLINE | ID: mdl-37102550

ABSTRACT

This commentary examines evidence demonstrating how simulations have been used in the clinical setting to improve perinatal and neonatal clinical care, including simulations implemented to address select patient presentations, novel patient presentations, and those employed to test new clinical environments or renovated patient units. The underlying reasons these interventions support interprofessional collaboration, organizational learning, and problem solving are also discussed alongside common challenges associated with implementation.


Subject(s)
Delivery of Health Care , Problem Solving , Infant, Newborn , Humans , Interprofessional Relations , Cooperative Behavior , Patient Care Team
5.
Med Teach ; 45(6): 585-587, 2023 06.
Article in English | MEDLINE | ID: mdl-37098156
6.
AEM Educ Train ; 7(2): e10848, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36936085

ABSTRACT

Background: Over the past decade, the use of technology-enhanced simulation in emergency medicine (EM) education has grown, yet we still lack a clear understanding of its effectiveness. This systematic review aims to identify and synthesize studies evaluating the comparative effectiveness of technology-enhanced simulation in EM. Methods: We searched MEDLINE, EMBASE, PsycINFO, CINAHL, ERIC, Web of Science, and Scopus to identify EM simulation research that compares technology-enhanced simulation with other instructional modalities. Two reviewers screened articles for inclusion and abstracted information on learners, clinical topics, instructional design features, outcomes, cost, and study quality. Standardized mean difference (SMD) effect sizes were pooled using random effects. Results: We identified 60 studies, enrolling at least 5279 learners. Of these, 23 compared technology-enhanced simulation with another instructional modality (e.g., living humans, lecture, small group), and 37 compared two forms of technology-enhanced simulation. Compared to lecture or small groups, we found simulation to have nonsignificant differences for time skills (SMD 0.33, 95% confidence interval [CI] -0.23 to 0.89, n = 3), but a large, significant effect for non-time skills (SMD 0.82, 95% CI 0.18 to 1.46, n = 8). Comparison of alternative types of technology-enhanced simulation found favorable associations with skills acquisition, of moderate magnitude, for computer-assisted guidance (compared to no computer-assisted guidance), for time skills (SMD 0.50, 95% CI -1.66 to 2.65, n = 2) and non-time skills (SMD 0.57, 95% CI 0.33 to 0.80, n = 6), and for more task repetitions (time skills SMD 1.01, 95% CI 0.16 to 1.86, n = 2) and active participation (compared to observation) for time skills (SMD 0.85, 95% CI 0.25 to 1.45, n = 2) and non-time skills (SMD 0.33, 95% CI 0.08 to 0.58, n = 3). Conclusions: Technology-enhanced simulation is effective for EM learners for skills acquisition.
Features such as computer-assisted guidance, repetition, and active learning are associated with greater effectiveness.

7.
Mil Med ; 188(3-4): 817-823, 2023 03 20.
Article in English | MEDLINE | ID: mdl-35043957

ABSTRACT

BACKGROUND: Military general surgeons commonly perform urologic procedures, yet there are no required urologic procedural minimums during general surgery residency training. Additionally, urologists are not included in the composition of forward operating surgical units. Urologic Care Army/Air Force/Navy Provider Education was created to provide military general surgeons with training to diagnose and treat frequently encountered urologic emergencies when practicing in environments without a urologist present. STUDY DESIGN: A literature review and needs assessment were conducted to identify diagnoses and procedures to feature in the course. The course included a 1-hour didactic session followed by a 2-hour hands-on simulated skills session using small, lightweight, cost-effective simulators. Using a pretest-posttest design, participants completed confidence and knowledge assessments before and after the course. The program was granted educational exemption by the institutional review board. RESULTS: Twenty-seven learners participated. They demonstrated statistically significant improvement on the knowledge assessment (45.4% [SD 0.15] to 83.6% [SD 0.10], P < .01). On the confidence assessment, there were statistically significant (P ≤ .001) improvements for identifying phimosis, paraphimosis, and testicular torsion, as well as identifying indications for suprapubic catheterization, retrograde urethrogram, and cystogram. There were also statistically significant (P < .001) improvements for performing: suprapubic catheterization, dorsal penile block, dorsal slit, scrotal exploration, orchiopexy, orchiectomy, retrograde urethrogram, and cystogram. CONCLUSION: We created the first-ever urologic emergencies simulation curriculum for military general surgeons that has demonstrated efficacy in improving the diagnostic confidence, procedural confidence, and topic knowledge for the urologic emergencies commonly encountered by military general surgeons.


Subject(s)
Internship and Residency , Military Personnel , Simulation Training , Male , Humans , Education, Medical, Graduate/methods , Emergencies , Curriculum , Clinical Competence
8.
Mil Med ; 188(9-10): e2874-e2879, 2023 08 29.
Article in English | MEDLINE | ID: mdl-36537656

ABSTRACT

INTRODUCTION: Trainees (e.g., residents) are an obvious and common source of feedback for faculty; however, gaps exist in our understanding of their experiences and practices of providing such feedback. To gain a deeper understanding, this study examined residents' beliefs about what feedback is important to provide, the kinds of feedback they report giving, and the feedback they actually gave. MATERIALS AND METHODS: Descriptive statistics were used to analyze residents' perceptions and feedback behaviors (n = 42/96). Thematic analysis was used to analyze end-of-rotation faculty assessments from 2018 to 2019 (n = 559) to explore the actual written feedback residents provided to the faculty. RESULTS: The findings suggest that residents experience workload constraints (e.g., too many feedback requests), feel that their feedback is not valuable or relevant, and place conditions on when and what feedback is given (e.g., faculty agreeableness, prefer giving positively oriented feedback, and uncomfortable giving negative feedback). When comparing what feedback residents rated as important with the kinds of feedback they reported giving and actually gave, the findings also suggest that there were consistencies (e.g., clinical instruction and professionalism) and inconsistencies (e.g., evidence-based practice and medical knowledge) that may limit constructive feedback for faculty. CONCLUSIONS: Taken together, the findings suggest that trainee assessments of faculty may be insufficient as a primary source of feedback to support the improvement of faculty performance. Potential solutions are discussed.


Subject(s)
Internship and Residency , Military Personnel , Humans , Feedback , Clinical Competence , Faculty , Faculty, Medical
9.
MedEdPublish (2016) ; 13: 64, 2023.
Article in English | MEDLINE | ID: mdl-38440148

ABSTRACT

Chatbots powered by artificial intelligence have revolutionized many industries and fields of study, including medical education. Medical educators are increasingly asked to perform more administrative, written, and assessment functions with less time and resources. Safe use of chatbots, like ChatGPT, can help medical educators efficiently perform these functions. In this article, we provide medical educators with tips for the implementation of ChatGPT in medical education. Through creativity and careful construction of prompts, medical educators can use these and other implementations of chatbots, like ChatGPT, in their practice.

10.
Diagnosis (Berl) ; 9(4): 437-445, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35924305

ABSTRACT

OBJECTIVES: Management reasoning has not been widely explored but likely requires broader abilities than diagnostic reasoning. An enhanced understanding of management reasoning could improve medical education and patient care. We conducted a novel exploratory study to gain further insights into procedure-based management reasoning. METHODS: Participant physicians managed a simulated patient who acutely decompensated in a team-based, time-pressured, live scenario. Immediately following the scenario, physicians performed a think-aloud protocol, watching video recordings of their performance and narrating their reflections in real time. Verbatim transcripts of the think-aloud protocol were inductively coded using a constant comparative method and evaluated for themes. RESULTS: We recruited 19 physicians (15 internal medicine, one family medicine, and three general surgery) for this study. Recognizing that diagnostic and management reasoning intertwine, this paper focuses on the characteristics of management reasoning. We developed three categories of management reasoning factors with eight subthemes. These are Patient factors: Acuity and Preferences; Physician factors: Recognized Errors, Anxiety, Metacognition, Monitoring, and Threshold to Treat; and one Environment factor: Resources. CONCLUSIONS: Our findings on procedure-based management reasoning are consistent with Situation Awareness and Situated Cognition models and the extant work on management reasoning, demonstrating that management is inherently complex and contextually bound. Unique to this study, all physicians focused on prognosis, indicating that attaining competency in procedural management may require planning and prediction abilities. Physicians also expressed concerns about making mistakes, potentially resulting from the scenario's emphasis on a procedure and our physicians' having less expertise in the treatment of tension pneumothorax.


Subject(s)
Education, Medical , Pneumothorax , Humans , Clinical Competence , Pneumothorax/diagnosis , Pneumothorax/therapy , Problem Solving , Internal Medicine/education , Education, Medical/methods
11.
J Grad Med Educ ; 14(2): 201-209, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35463179

ABSTRACT

Background: Since the Accreditation Council for Graduate Medical Education (ACGME) introduced the Milestones in 2013, the body of validity evidence supporting their use has grown, but there is a gap with regard to response process. Objective: The purpose of this study is to qualitatively explore validity evidence pertaining to the response process of individual Clinical Competency Committee (CCC) members when assigning Milestone ratings to a resident. Methods: Using a constructivist paradigm, we conducted a thematic analysis of semi-structured interviews with 8 Transitional Year (TY) CCC members from 4 programs immediately following a CCC meeting between November and December 2020. Participants were queried about their response process in their application of Milestone assessment. Analysis was iterative, including coding, constant comparison, and theming. Results: Participant interviews identified an absence of formal training and a perception that Milestones are a tool for resident assessment without recognizing their role in program evaluation. In describing their thought process, participants reported comparing averaged assessment data to peers and time in training to generate Milestone ratings. Meaningful narrative comments, when available, differentiated resident performance from peers. When assessment data were absent, participants assumed an average performance. Conclusions: Our study found that the response process used by TY CCC members was not always consistent with the dual purpose of the Milestones to improve educational outcomes at the levels of residents and the program.


Subject(s)
Internship and Residency , Accreditation , Clinical Competence , Education, Medical, Graduate , Educational Measurement , Humans
12.
FASEB Bioadv ; 3(7): 490-496, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34258518

ABSTRACT

Clinical reasoning, a complex process that involves gathering and synthesizing information to make diagnostic and treatment decisions, is a topic researchers frequently study to mitigate errors. Scientific reasoning has several similarities with clinical reasoning, including the need to generate hypotheses; observe, gather, and interpret evidence; engage in the process of elimination; draw conclusions; and refine and test new hypotheses. However, researchers have only recently begun to take into consideration the role that situational factors (also known as contextual factors), such as language barriers or the lack of diagnostic test results, can play in diagnostic error. Additionally, questions remain about the best ways to teach these complex processes.

13.
Acad Psychiatry ; 45(2): 150-158, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33169304

ABSTRACT

OBJECTIVE: This retrospective study compares differences in clinical performance on the psychiatry clerkship Objective Structured Clinical Examination (OSCE) between students receiving traditional repeated clinical simulation with those receiving repeated clinical simulation using the Kolb Cycle. METHODS: Psychiatry clerkship OSCE scores from 321 students who completed their psychiatry clerkship in 2016 and 2017 were compared. Specific performance measures included communication skills as determined by the Essential Elements of Communication, gathering a history, documenting a history and mental status exam, defending a differential diagnosis, and proposing a treatment plan. Results were calculated using repeated two-way analysis of variance between students receiving no simulation and traditional repeated simulation training (TRS) as compared to students receiving no simulation and repeated simulation utilizing the Kolb cycle (KRS). RESULTS: Students who received KRS performed significantly better in three of the five components of the clerkship OSCE as compared to students who received TRS. Specifically, students who received KRS performed better on gathering a history (+ 14.1%, p < 0.001), documenting a history (+ 13.4%, p < 0.001), and developing a treatment plan (+ 16.7%, p < 0.001). There were no significant differences in communication skills or in developing and defending a differential diagnosis. CONCLUSIONS: Psychiatry clerkship students engaged in repeated simulations explicitly integrated with the Kolb cycle demonstrate improved clinical skills as measured by OSCE performance. Integration of the Kolb cycle in designing simulation experiences should be carefully considered and may serve as a model for individualized coaching in programs of assessment.


Subject(s)
Clinical Clerkship , Psychiatry , Students, Medical , Clinical Competence , Educational Measurement , Humans , Problem-Based Learning , Retrospective Studies
14.
Adv Simul (Lond) ; 5: 17, 2020.
Article in English | MEDLINE | ID: mdl-32760598

ABSTRACT

INTRODUCTION: In recent years, researchers have recognized the need to examine the relative effectiveness of different simulation approaches and the experiences of physicians operating within such environments. The current study experimentally examined the reflective judgments, cognitive processing, and clinical reasoning performance of physicians across live and video simulation environments. METHODS: Thirty-eight physicians were randomly assigned to a live scenario or video case condition. Both conditions encompassed two components: (a) patient encounter and (b) video reflection activity. Following the condition-specific patient encounter (i.e., live scenario or video), the participants completed a Post Encounter Form (PEF), microanalytic questions, and a mental effort question. Participants were then instructed to re-watch the video (i.e., video condition) or a video recording of their live patient encounter (i.e., live scenario) while thinking aloud about how they came to the diagnosis and management plan. RESULTS: Although significant differences did not emerge across all measures, physicians in the live scenario condition exhibited superior performance in clinical reasoning (i.e., PEF) and a distinct profile of reflective judgments and cognitive processing. Generally, the live condition participants focused more attention on aspects of the clinical reasoning process and demonstrated higher level cognitive processing than the video group. CONCLUSIONS: The current study sheds light on the differential effects of live scenario and video simulation approaches. Physicians who engaged in live scenario simulations outperformed and showed a distinct pattern of cognitive reactions and judgments compared to physicians who practiced their clinical reasoning via video simulation. 
Additionally, the current study points to the potential advantages of video self-reflection following live scenarios while also shedding some light on the debate regarding whether video-guided reflection, specifically, is advantageous. The utility of context-specific, micro-level assessments that incorporate multiple methods as physicians complete different parts of clinical tasks is also discussed.

15.
Diagnosis (Berl) ; 7(3): 291-297, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32651977

ABSTRACT

Objectives Diagnostic error is a growing concern in U.S. healthcare. There is mounting evidence that errors may not always be due to knowledge gaps, but also to context specificity: a physician seeing two identical patient presentations from a content perspective (e.g., history, labs) yet arriving at two distinct diagnoses. This study used the lens of situated cognition theory - which views clinical reasoning as interconnected with surrounding contextual factors - to design and test an instructional module to mitigate the negative effects of context specificity. We hypothesized that experimental participants would perform better on the outcome measure than those in the control group. Methods This study divided 39 resident and attending physicians into an experimental group receiving an interactive computer training and "think-aloud" exercise and a control group, comparing their clinical reasoning. Clinical reasoning performance in a simulated unstable angina case with contextual factors (i.e., diagnostic suggestion) was determined using performance on a post-encounter form (PEF) as the outcome measure. The participants who received the training and did the reflection were compared to those who did not using descriptive statistics and a multivariate analysis of covariance (MANCOVA). Results Descriptive statistics suggested slightly better performance for the experimental group, but MANCOVA results revealed no statistically significant differences (Pillai's Trace=0.20, F=1.9, df=[4, 29], p=0.15). Conclusions While differences were not statistically significant, this study suggests the potential utility of strategies that provide education and awareness of contextual factors and space for reflective practice.


Subject(s)
Clinical Competence , Clinical Reasoning , Cognition , Diagnostic Errors , Humans , Physician-Patient Relations
16.
Cureus ; 12(5): e8111, 2020 May 14.
Article in English | MEDLINE | ID: mdl-32542164

ABSTRACT

The construct of reliability in health professions education serves as a measure of the congruence of interpretations across assessment tools. When used as an assessment strategy, healthcare simulation serves to elicit specific participant behaviors sought by medical educators. In healthcare simulation, reliability often refers to the ability to consistently reproduce a simulation, on the premise that reproducing a simulation setting exposes participants to the same conditions, thus achieving simulation reliability. However, some articles have noted that simulations are vulnerable to errors stemming from design conceptualization through implementation, as well as to the impact of social factors as participants interact and engage with others during participation. The purpose of this definitional review is to examine how reliability has been conceptualized and defined in healthcare simulation, and how the attributes of simulations may present challenges for the traditional concept of reliability in health professions education. Data collection and analysis were approached through a constructivist perspective and grounded theory strategies. Articles published between 2009 and 2019 were filtered using keywords related to simulation development and design. Data winnowing was structured around a framework viewing simulation as a social practice in which participants interact with simulation setting attributes. Healthcare simulation setting reliability is not directly defined but is instead described in terms of errors introduced by the interactions between simulation design attributes and tasks performed by simulated participants. Based on the ontology of simulation design attributes believed to introduce setting errors, lexical terms related to reliability suggest how simulated participants are trained to refine or maintain performance of the tasks that aim to mitigate those errors.
To achieve reliability in health professions education (HPE) and healthcare simulation, both domains seek to assess the consistency of the construct being measured. In HPE, reliability refers to the consistency of quality measures across a range of psychometric tests used to assess a participant's medical aptitude. In a healthcare simulation setting, reliability refers to the consistency with which a simulated participant (SP) performs a task tailored to mitigate errors introduced by simulation design attributes. Consequently, inconsistencies in SP performance subject participants to setting errors, exposing them to unequal conditions that influence competency achievement. What is already known on this subject: performance competency assessment using healthcare simulation is increasingly common; various design attributes are incorporated into a simulation setting; simulated participants are commonly incorporated into a simulation setting; and simulated participants require training prior to simulation setting implementation. What this paper adds: it identifies the attributes of a simulation setting most commonly thought to interfere with setting reliability, identifies the relationships among setting attributes and simulated participant performances that influence setting reliability, identifies terms tied to the achievement of simulation setting reliability, and examines simulated participant training processes aimed at mitigating errors introduced by simulation design attributes.

17.
Diagnosis (Berl) ; 7(3): 299-305, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32589596

ABSTRACT

Objectives Uncertainty is common in clinical reasoning given the dynamic processes required to come to a diagnosis. Though some uncertainty is expected during clinical encounters, it can have detrimental effects on clinical reasoning. Likewise, evidence has established the potentially detrimental effects of the presence of distracting contextual factors (i.e., factors other than case content needed to establish a diagnosis) in a clinical encounter on clinical reasoning. The purpose of this study was to examine how linguistic markers of uncertainty overlap with different clinical reasoning tasks and how distracting contextual factors might affect physicians' clinical reasoning process. Methods In this descriptive exploratory study, physicians participated in a live or video recorded simulated clinical encounter depicting a patient with unstable angina with and without contextual factors. Transcribed think-aloud reflections were coded using Goldszmidt's clinical reasoning task typology (26 tasks encompassing the domains of framing, diagnosis, management, and reflection) and then those coded categories were examined using linguistic markers of uncertainty (e.g., probably, possibly, etc.). Results Thirty physicians with varying levels of experience participated. Consistent with expectations, descriptive analysis revealed that physicians expressed more uncertainty in cases with distracting contextual factors compared to those without. Across the four domains of reasoning tasks, physicians expressed the most uncertainty in diagnosis and least in reflection. Conclusions These results highlight how linguistic markers of uncertainty can shed light on the role contextual factors might play in uncertainty which can lead to error and why it is essential to find ways of managing it.


Subject(s)
Clinical Reasoning , Physicians , Clinical Competence , Humans , Internal Medicine/education , Uncertainty
18.
Mil Med ; 185(7-8): e1277-e1283, 2020 08 14.
Article in English | MEDLINE | ID: mdl-32372081

ABSTRACT

INTRODUCTION: Gender disparity in medicine has drawn increased attention in the form of root cause analysis and programmatic solutions with the goal of equity. Research indicates that mentoring, guidance, and support, which include the provision of social and academic guidance and support from more experienced practitioners, can mitigate challenges associated with gender disparity. The purpose of this study was to explore women medical students' self-reports of mentorship during their time at Uniformed Services University (USU), if women report similar levels of mentorship as compared to men, and if levels of characteristics associated with mentoring (eg, social support, academic guidance) changed over time. MATERIALS AND METHOD: Using data from the American Association of Medical College's Graduate Questionnaire, a survey sent to all medical students prior to graduation, items were coded as related to mentorship, guidance, and support and analyzed to compare responses of female and male students from graduating USU classes of 2010-2017. RESULTS: No significant difference was found between experiences of female and male survey respondents. Equitable experiences were consistent across time for the 8 years of the study. CONCLUSIONS: Although mentorship is cited as a key factor in mediating gender disparity in medicine, other STEM fields, and the military, the findings suggest that there is equity at the USU undergraduate medical education level. Further studies are needed to understand if disparities in mentorship experiences occur at other stages of a military physician's career, such as graduate medical education, faculty and academic promotion levels.


Subject(s)
Students, Medical , Education, Medical, Graduate , Education, Medical, Undergraduate , Faculty, Medical , Female , Humans , Male , Mentors , United States , Universities
19.
Diagnosis (Berl) ; 7(3): 257-264, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32364516

ABSTRACT

BACKGROUND: Situated cognition theory argues that thinking is inextricably situated in a context. In clinical reasoning, this can lead to context specificity: a physician arriving at two different diagnoses for two patients with the same symptoms and findings but different contextual factors (something beyond case content potentially influencing reasoning). This paper experimentally investigates the presence of, and mechanisms behind, context specificity by measuring differences in clinical reasoning performance in cases with and without contextual factors. METHODS: An experimental study was conducted in 2018-2019 with 39 resident and attending physicians in internal medicine. Participants viewed two outpatient clinic video cases (unstable angina and diabetes mellitus), one with distracting contextual factors and one without. After viewing each case, participants responded to six open-ended diagnostic items (e.g., problem list, leading diagnosis) and rated their cognitive load. RESULTS: Multivariate analysis of covariance (MANCOVA) revealed significant differences in angina case performance with and without contextual factors [Pillai's trace = 0.72, F = 12.4, df = (6, 29), p < 0.001, partial η² = 0.72], with follow-up univariate analyses indicating that participants performed statistically significantly worse on five of six items in the case with contextual factors. There were no significant differences between conditions in the diabetes case, and no statistically significant difference in cognitive load between conditions. CONCLUSIONS: Using typical presentations of common diagnoses, and contextual factors typical of clinical practice, we provide ecologically valid evidence for the theoretically predicted negative effects of context specificity (i.e., for the angina case), with large effect sizes, offering insight into the persistence of diagnostic error.


Subject(s)
Clinical Reasoning , Clinical Competence , Cognition , Humans , Internal Medicine/education , Problem Solving
20.
Simul Healthc ; 15(6): 432-437, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32371751

ABSTRACT

STATEMENT: This article presents reflections on the career pathways of simulation researchers, along with a discussion of the themes found in the stories presented. The authors' intent is to foster a discussion around the ways in which we as a simulation community wish to promote recognition of scholarship among simulation researchers and to help newcomers find success as simulation researchers in academia. We also present recommendations for those considering entering the field, based on tactics that were and were not successful among the scholars who shared their stories.


Subject(s)
Career Mobility , Research Personnel , Simulation Training , Fellowships and Scholarships , Humans