Results 1 - 12 of 12
1.
West J Emerg Med ; 19(3): 585-592, 2018 May.
Article in English | MEDLINE | ID: mdl-29760860

ABSTRACT

INTRODUCTION: Effective communication between clinicians and patients has been shown to improve patient outcomes and reduce malpractice liability, and is now being tied to reimbursement. Use of a communication strategy known as "scripting" has been suggested to improve patient satisfaction in multiple hospital settings, but the frequency with which medical students use this strategy, and whether it affects patient perception of medical student care, is unknown. Our objective was to measure the use of targeted communication skills after an educational intervention and to further clarify the relationship between communication element usage and patient satisfaction. METHODS: Medical students were block randomized into the control or intervention group. Those in the intervention group received refresher training in scripted communication. Those in the control group received no instruction or other intervention related to communication. Use of six explicit communication behaviors was recorded by trained study observers: 1) acknowledging the patient by name, 2) introducing themselves as medical students, 3) explaining their role in the patient's care, 4) explaining the care plan, 5) providing an estimated duration of time to be spent in the emergency department (ED), and 6) notifying the patient that another provider would also be seeing them. Patients then completed a survey regarding their satisfaction with the medical student encounter. RESULTS: We observed 474 medical student-patient encounters in the ED (231 in the control group and 243 in the intervention group). We were unable to detect a statistically significant difference in communication element use between the intervention and control groups. One of the communication elements, explaining steps in the care plan, was positively associated with patient perception of the medical student's overall communication skills; otherwise, there was no statistically significant association between element use and patient satisfaction. CONCLUSION: We were unable to demonstrate any improvement in student use of communication elements or in patient satisfaction after refresher training in scripted communication. Furthermore, there was little variation in patient satisfaction based on the use of scripted communication elements. Effective communication with patients in the ED is complicated, and further investigation is needed into how best to teach this skill set.
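
The group comparison above is essentially a two-proportion problem. A minimal sketch of one such test, assuming invented per-element counts and a 2x2 chi-square (the abstract does not name the test used):

```python
# Hypothetical sketch: compare use of one communication element between
# control (n = 231) and intervention (n = 243) encounters. The splits below
# are invented for illustration; they are not the study's data.
from scipy.stats import chi2_contingency

# rows: control, intervention; columns: element used, element not used
table = [[140, 91],    # control: 140 of 231 encounters (invented split)
         [155, 88]]    # intervention: 155 of 243 encounters (invented split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > .05: no detectable difference
```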


Subject(s)
Communication , Emergency Service, Hospital , Patient Satisfaction , Students, Medical/psychology , Female , Humans , Male , Patient Care Planning/statistics & numerical data , Physician-Patient Relations , Surveys and Questionnaires
3.
MedEdPORTAL ; 14: 10717, 2018 05 14.
Article in English | MEDLINE | ID: mdl-30800917

ABSTRACT

Introduction: Preparing residents to supervise medical students in the clinical setting is important for providing high-quality education to the next generation of physicians and is mandated by the Liaison Committee on Medical Education as well as the Accreditation Council for Graduate Medical Education. This requirement is met in variable ways depending on the specialty, school, and setting where teaching takes place. This educational intervention was designed to let residents practice techniques for supervising medical students in simulated encounters in the emergency department and to increase their comfort with providing feedback to students. Methods: The four role-playing scenarios described here were developed for second-year residents in emergency medicine at the Indiana University School of Medicine. Residents participated in the scenarios prior to serving as supervisors for fourth-year medical students rotating on the emergency medicine clerkship. For each scenario, a faculty member observed the interaction between the resident and the simulated student. The residents were surveyed before and after participating in the scenarios to determine the effectiveness of the instruction. Results: Residents reported that they were more comfortable supervising students, evaluating their performance, and giving feedback after participating in the scenarios. Discussion: Participation in these clinical teaching scenarios was effective at making residents more comfortable with their role as supervisors of fourth-year students taking an emergency medicine clerkship. These scenarios may be useful as part of a resident-as-teacher curriculum for emergency medicine residents.
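
The pre/post comfort surveys imply paired ordinal ratings. A hedged sketch of one way to compare them; the abstract does not specify an analysis, so the Wilcoxon signed-rank test and the ratings below are assumptions:

```python
# Hypothetical sketch: paired pre/post comfort ratings (1-5 Likert) for the
# same residents. Both the data and the choice of test are illustrative
# assumptions, not the authors' method.
from scipy.stats import wilcoxon

pre  = [2, 3, 2, 3, 2, 4, 3, 2, 3, 2]  # invented pre-scenario comfort ratings
post = [4, 4, 3, 4, 4, 5, 4, 3, 4, 4]  # invented post-scenario comfort ratings

stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")  # small p suggests comfort increased
```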


Subject(s)
Emergency Medicine/education , Faculty, Medical/education , Teaching/education , Curriculum/trends , Education, Medical/methods , Emergency Medicine/methods , Feedback , Humans , Indiana , Internship and Residency/methods , Role Playing
4.
BMC Med Educ ; 16: 150, 2016 May 21.
Article in English | MEDLINE | ID: mdl-27209065

ABSTRACT

BACKGROUND: Effective communication with patients impacts clinical outcomes and patient satisfaction. We measured the rate at which medical students used six targeted communication elements with patients and the association of element use with patient satisfaction. METHODS: Participants included fourth-year medical students enrolled in an emergency medicine clerkship. A trained observer measured use of six communication elements: acknowledging the patient by name, introducing themselves by name, identifying their role, explaining the care plan, explaining that multiple providers would see the patient, and providing an estimated duration of time in the emergency department. The observer then conducted a survey of patient satisfaction with the medical student encounter. RESULTS: A total of 246 encounters were documented among forty medical student participants. Of the six communication elements evaluated, medical students acknowledged the patient in 61% of encounters, introduced themselves in 91%, identified their role as a student in 58%, explained the care plan in 64%, explained that another provider would see the patient in 80%, and provided an estimated duration of care in only 6%. Only 1 encounter (0.4%) contained all six elements. Patients' likelihood to refer a loved one to that ED increased when students acknowledged the patient and described that other providers would be involved in their care (P = 0.016 and 0.015, respectively, chi-square). Likewise, patients' likelihood to return to the ED increased when students described their role in patient care (P = 0.035, chi-square). CONCLUSIONS: This pilot study demonstrates that medical students infrequently use all targeted communication elements; when they did use certain elements, patient satisfaction increased. These data suggest a potential benefit of additional training for students in patient communication.
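
The per-element usage rates above come from tallying observed encounters. A small sketch of that tally, with an invented record format standing in for the study's observation forms:

```python
# Hypothetical sketch: tally how often each of the six communication elements
# appears across observed encounters. Field names are invented for illustration.
from collections import Counter

ELEMENTS = ["acknowledged_patient", "introduced_self", "identified_role",
            "explained_plan", "mentioned_other_providers", "gave_time_estimate"]

# One dict per observed encounter (246 in the study); this single record is a stand-in.
encounters = [
    {"acknowledged_patient": True, "introduced_self": True, "identified_role": False,
     "explained_plan": True, "mentioned_other_providers": True, "gave_time_estimate": False},
]

counts = Counter()
for enc in encounters:
    counts.update(e for e in ELEMENTS if enc.get(e))

for e in ELEMENTS:
    print(f"{e}: {counts[e] / len(encounters):.0%}")
```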


Subject(s)
Clinical Clerkship , Communication , Education, Medical, Undergraduate , Emergency Medicine/education , Patient Satisfaction , Female , Humans , Male , Physician-Patient Relations , Pilot Projects , Prospective Studies
5.
Med Educ Online ; 21: 29279, 2016.
Article in English | MEDLINE | ID: mdl-26925540

ABSTRACT

BACKGROUND: When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. Because previous generalizability (G) studies have not estimated the independent influence of the occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. METHOD: During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times each. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion: rater: person) was used to analyze sampled rating data. RESULTS: The variance component (VC) related to occasion was small relative to the VC associated with rater. The decision (D) study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. CONCLUSIONS: Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have each preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that the mean of that preceptor's ratings be used to calculate the student's overall mean performance score.
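
In a completely nested occasion:rater:person design, a decision (D) study projects reliability from the variance components. A sketch with illustrative values (not the paper's estimates), showing why extra occasions per rater add little when rater variance dominates:

```python
# Hedged D-study sketch for a completely nested occasion:rater:person design.
# Variance components are invented; rater variance is made dominant, as the
# abstract reports.
def g_coefficient(var_person, var_rater, var_occasion, n_raters, n_occasions):
    """Generalizability of a mean over n_raters, each rating on n_occasions."""
    error = var_rater / n_raters + var_occasion / (n_raters * n_occasions)
    return var_person / (var_person + error)

vp, vr, vo = 0.30, 0.50, 0.05  # person, rater:person, occasion:rater:person

# Adding occasions per rater barely helps; adding raters helps a lot:
for n_r, n_o in [(1, 1), (1, 5), (5, 1), (10, 1)]:
    print(f"{n_r} rater(s) x {n_o} occasion(s): "
          f"{g_coefficient(vp, vr, vo, n_r, n_o):.2f}")
```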


Subject(s)
Clinical Clerkship/standards , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence , Emergency Medicine/education , Humans , Observer Variation , Reproducibility of Results
6.
Acad Med ; 89(7): 1046-50, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24979174

ABSTRACT

PURPOSE: Medical students develop clinical reasoning skills throughout their training. The Script Concordance Test (SCT) is a standardized instrument that assesses clinical reasoning; test takers with more clinical experience consistently outperform those with less experience. SCT studies to date have been cross-sectional, with no studies examining same-student longitudinal performance gains. METHOD: This four-year observational study took place between 2008 and 2011 at the Indiana University School of Medicine. Students in two different cohorts took the same SCT as second-year medical students and then again as fourth-year medical students. The authors matched and analyzed same-student data from the two SCT administrations for the classes of 2011 and 2012. They used descriptive statistics, correlation coefficients, and paired t tests. RESULTS: Matched data were available for 260 students in the class of 2011 (of 303, 86%) and 264 students in the class of 2012 (of 289, 91%). The mean same-student gain for the class of 2011 was 8.6 (t[259] = 15.9; P < .0001) and for the class of 2012 was 11.3 (t[263] = 21.4; P < .0001). Each cohort gained more than one standard deviation. CONCLUSIONS: Medical students made statistically significant gains in their performance on an SCT over a two-year period. These findings demonstrate same-student gains in clinical reasoning over time as measured by the SCT and suggest that the SCT as a standardized instrument can help to evaluate growth in clinical reasoning skills.
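
The matched design pairs each student's MS2 and MS4 scores, which is the setting for a paired t test. A sketch with simulated scores shaped to resemble the reported gains:

```python
# Hypothetical sketch of the matched-cohort analysis: each student has an MS2
# and an MS4 score on the same SCT. Scores are simulated, not study data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
ms2 = rng.normal(60, 8, size=260)        # simulated second-year SCT scores
ms4 = ms2 + rng.normal(9, 6, size=260)   # simulated gains of roughly one SD

t, p = ttest_rel(ms4, ms2)
print(f"mean gain = {(ms4 - ms2).mean():.1f}, t(259) = {t:.1f}, p = {p:.1e}")
```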


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Students, Medical , Cohort Studies , Educational Measurement , Female , Humans , Longitudinal Studies , Male
7.
Teach Learn Med ; 26(2): 135-45, 2014.
Article in English | MEDLINE | ID: mdl-24702549

ABSTRACT

BACKGROUND: A variety of psychometric analyses have been conducted on script concordance tests (SCTs), which are purported to measure data interpretation, an essential component of clinical reasoning. Although the body of published SCT research is broad, best-practice controversies and evidentiary gaps remain. PURPOSES: In this study, SCT data were used to test the psychometric properties of six scoring methods. In addition, this study explored whether SCT items clustered by difficulty and type were able to discriminate between medical training levels. METHODS: SCT scores from a problem-solving SCT (SCT-PS; n = 522) and an emergency medicine SCT (SCT-EM; n = 1,040) were collected at a large medical school. Item analyses were performed to optimize each dataset. Items were categorized into difficulty levels and organized into types. Correlational analyses, one-way multivariate analysis of variance (MANOVA), repeated-measures analysis of variance (ANOVA), and one-way ANOVA were conducted to explore the study aims. RESULTS: All six scoring methods differentiated between training levels. Longitudinal analysis of SCT-PS data showed that MS4s significantly (p < .001) outperformed their own scores as MS2s in all difficulty categories. Cross-sectional analysis of SCT-EM data revealed significant differences (p < .001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. Items categorized by type were also able to detect training-level disparities. CONCLUSIONS: Of the six scoring methods, 5-point scoring solutions generated more reliable measures of data interpretation than 3-point scoring methods. Data interpretation abilities were a function of experience at every level of item difficulty. Items categorized by type exhibited discriminatory power, providing modest evidence for the construct validity of SCTs.
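
SCT scoring is typically aggregate: an examinee earns credit in proportion to how many panelists chose the same response, with the modal response earning full credit. A minimal sketch of that scheme on a 5-point (-2 to +2) scale with an invented panel; the paper's six specific scoring variants are not reproduced here:

```python
# Minimal sketch of aggregate SCT scoring: credit for a response equals the
# number of panel votes it received divided by the votes for the modal
# response. Panel data are invented.
def score_item(panel_choices, examinee_choice):
    """Credit = panel votes for the chosen response / votes for the modal response."""
    counts = {c: panel_choices.count(c) for c in set(panel_choices)}
    modal = max(counts.values())
    return counts.get(examinee_choice, 0) / modal

panel = [-1, 0, 0, 0, 1, 1, 1, 1, 1, 2]  # 10 invented experts on a -2..+2 scale
print(score_item(panel, 1))   # 1.0: the modal panel answer
print(score_item(panel, 0))   # 0.6: 3 votes against 5 modal votes
print(score_item(panel, -2))  # 0.0: no panelist chose it
```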


Subject(s)
Curriculum , Data Interpretation, Statistical , Education, Medical, Undergraduate , Educational Measurement/methods , Clinical Clerkship , Clinical Competence , Cross-Sectional Studies , Humans , Indiana , Psychometrics
8.
Acad Emerg Med ; 19(12): 1454-61, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23279251

ABSTRACT

Assessment of the diagnostic reasoning skills of emergency physicians (EPs) is essential for effective training and patient safety. This article summarizes the findings of the diagnostic reasoning assessment track of the 2012 Academic Emergency Medicine consensus conference "Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success." Existing theories of diagnostic reasoning, as they relate to emergency medicine (EM), are outlined, and existing strategies for the assessment of diagnostic reasoning are described. Based on a review of the literature, expert thematic analysis, and iterative consensus agreement during the conference, this article summarizes current assessment gaps and prioritizes future research questions concerning the assessment of diagnostic reasoning in EM.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Emergency Medicine/education , Consensus Development Conferences as Topic , Humans
9.
J Grad Med Educ ; 4(4): 486-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-24294426

ABSTRACT

BACKGROUND: Simulation can enhance undergraduate medical education. However, the number of faculty facilitators needed for observation and debriefing can limit its use with medical students. The goal of this study was to compare the effectiveness of emergency medicine (EM) residents with that of EM faculty in facilitating postcase debriefings. METHODS: The EM clerkship at Indiana University School of Medicine requires medical students to complete one 2-hour mannequin-based simulation session. Groups of 5 to 6 students participated in 3 different simulation cases immediately followed by debriefings. Debriefings were led by either an EM faculty volunteer or an EM resident volunteer. The Debriefing Assessment for Simulation in Healthcare (DASH) participant form was completed by students to evaluate each individual providing the debriefing. RESULTS: In total, 273 DASH forms were completed (132 EM faculty evaluations and 141 EM resident evaluations) for 7 faculty members and 9 residents providing the debriefing sessions. The mean total DASH score, out of a possible 35, was 32.42 for faculty and 32.09 for residents. There were no statistically significant differences between faculty and resident scores overall (P = .36) or by case type (P = .11 for trauma, P = .19 for medical, and P = .48 for pediatric cases). CONCLUSIONS: EM residents were perceived to be as effective as EM faculty in debriefing medical students in a mannequin-based simulation experience. The use of residents to observe and debrief students may allow additional simulations to be incorporated into undergraduate curricula and provide valuable teaching opportunities for residents.
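
Comparing mean DASH totals between faculty- and resident-led debriefings is a two-sample problem. A sketch using simulated ratings around the reported means; the choice of Welch's t test is an assumption, not necessarily the authors' analysis:

```python
# Hypothetical sketch: compare student DASH ratings of faculty vs resident
# debriefers. Ratings are simulated around the reported means (32.42 vs 32.09
# of a possible 35); Welch's t-test is an illustrative assumption.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
faculty  = np.clip(rng.normal(32.4, 2.5, 132), 5, 35)  # 132 simulated ratings
resident = np.clip(rng.normal(32.1, 2.5, 141), 5, 35)  # 141 simulated ratings

t, p = ttest_ind(faculty, resident, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.2f}")  # large p: no detectable difference
```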

10.
Acad Emerg Med ; 18(6): 627-34, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21676061

ABSTRACT

OBJECTIVES: The Script Concordance Test (SCT) is a relatively new method of assessing clinical reasoning in the face of uncertainty. An SCT item consists of a short clinical vignette followed by an additional piece of information and asks how this new information affects the learner's decision regarding a possible diagnosis, investigational study, or therapy. Scoring is based on the item responses of a panel of experts in the field. This study attempts to provide additional validity evidence in the realm of emergency medicine (EM). METHODS: This observational study examined the performance of medical students, EM residents, and expert emergency physicians (EPs) on an SCT in the area of general EM (SCT-EM) at one of the largest medical schools in the United States. The 59-item SCT-EM was developed for a required fourth-year clerkship in EM. Results on the SCT-EM were compared across levels of clinical experience and against performance on other measures to evaluate convergent validity. RESULTS: The SCT-EM was given to 314 fourth-year medical students (MS4s), 40 EM residents, and 13 EPs during the study period. Mean differences between the three groups of test takers were statistically significant (p < 0.0001). Scores for the MS4s ranged from 42% to 77% and followed a normal distribution. Among the residents, performance on the SCT-EM and the EM in-training examination were significantly correlated (r = 0.69, p < 0.001); among the MS4s who later matched into EM residency programs, performance on the SCT-EM and the United States Medical Licensing Examination (USMLE) Step 2-Clinical Knowledge (Step 2-CK) exam was also significantly correlated (r = 0.56, p < 0.001). CONCLUSIONS: The SCT-EM shows promise as an assessment of clinical reasoning skills in the face of uncertainty. Future research will compare performance on the SCT with other measures of clinical reasoning ability.
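
The convergent-validity checks above are Pearson correlations. A sketch with simulated data shaped to land near the reported r = 0.69:

```python
# Hypothetical sketch of a convergent-validity check: Pearson correlation
# between SCT-EM scores and in-training exam scores. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
sct = rng.normal(65, 8, 40)                      # simulated SCT-EM percent scores
ite = 0.7 * (sct - 65) + rng.normal(70, 6, 40)   # simulated in-training scores

r, p = pearsonr(sct, ite)
print(f"r = {r:.2f}, p = {p:.4f}")
```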


Subject(s)
Clinical Clerkship , Clinical Competence , Decision Making , Emergency Medicine/education , Internship and Residency , Adult , Educational Measurement , Humans
11.
Med Teach ; 33(6): 472-7, 2011.
Article in English | MEDLINE | ID: mdl-21609176

ABSTRACT

BACKGROUND: The Script Concordance Test (SCT) measures clinical reasoning in the context of uncertainty by comparing the responses of examinees and expert clinicians. It uses the level of agreement with a panel of experts to assign credit for the examinee's answers. AIM: This study describes the development and validation of an SCT for pre-clinical medical students. METHODS: Faculty from two US medical schools developed SCT items in the domains of anatomy, biochemistry, physiology, and histology. Scoring procedures utilized data from a panel of 30 expert physicians. Validation focused on internal reliability and the ability of the SCT to distinguish between different cohorts. RESULTS: The SCT was administered to a total of 411 second-year and 70 fourth-year students from both schools. Internal consistency for the 75 test items was satisfactory (Cronbach's alpha = 0.73). The SCT successfully differentiated second- from fourth-year students and both student groups from the expert panel in a one-way analysis of variance (F(2,508) = 120.4; p < 0.0001). Mean scores for students from the two schools were not significantly different (p = 0.20). CONCLUSION: This SCT successfully differentiated pre-clinical medical students from fourth-year medical students, and both cohorts of medical students from expert clinicians, across different institutions and geographic areas. The SCT shows promise as an easy-to-administer measure of "problem-solving" performance in competency evaluation, even in the beginning years of medical education.
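
Cronbach's alpha can be computed directly from an examinee-by-item score matrix. A self-contained sketch with simulated items; the item loading is chosen so alpha lands near the reported 0.73:

```python
# Minimal sketch of Cronbach's alpha for an examinees-by-items score matrix.
# The simulated matrix is illustrative; real input would be 75 item scores
# for each examinee.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = examinees, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of examinee totals
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(0, 1, (481, 1))                  # latent examinee ability
items = 0.19 * ability + rng.normal(0, 1, (481, 75))  # 75 weakly correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```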


Subject(s)
Clinical Competence , Educational Measurement/methods , Educational Measurement/standards , Problem Solving , Students, Medical/psychology , Cognition , Cooperative Behavior , Decision Making , Education, Medical, Undergraduate/standards , Humans , Indiana , New York , Schools, Medical , Surveys and Questionnaires , United States
12.
Acad Emerg Med ; 14(3): 283-6, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17242385

ABSTRACT

OBJECTIVES: The medical education literature contains few publications about the phenomenon of grade inflation. The authors' clinical clerkship grading scale suffered from apparent inflation relative to the recommended university distribution. The investigators hypothesized that a simple change to the shift grading cards, using explicit criteria, would decrease this grade inflation and help redistribute the shift evaluations. METHODS: This was a before-and-after study examining medical student shift evaluation grades. Evaluators and students were blinded to the purpose of the card change and were unaware that a study was being conducted. Beginning June 1, 2005, the authors altered the shift evaluation cards from the previous four choices of honors, high pass, pass, or fail to five choices of upper 5%, upper 25%, expected, below expected, or far below expected, and explicit grading criteria were provided. No other interventions to alter the grade distribution occurred. Data were collected on all evaluations from June 1, 2004, to March 31, 2005 (before the change), and compared with data on all evaluations from June 1, 2005, to March 31, 2006 (after the change). RESULTS: A total of 3,349 evaluations were analyzed: 1,612 before the card change and 1,737 after the change. The grade distribution before the card change was as follows: honors, 22.6%; high pass, 49.0%; pass, 28.4%; and fail, 0%. After the card change, the ratings were as follows: upper 5%, 9.8%; upper 25%, 41.2%; expected, 46.2%; below expected, 2.8%; and far below expected, 0% (p < 0.001). CONCLUSIONS: A simple change in shift evaluation cards to include more explicit grading criteria resulted in a significant change in grade distribution and greatly decreased grade inflation.
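
The before/after distributions can be compared with a chi-square test on reconstructed counts. A hedged sketch: the counts are derived from the reported percentages, and the alignment of old to new grade categories is an assumption:

```python
# Hedged sketch: chi-square comparison of the before/after grade
# distributions. Counts are reconstructed from the reported percentages
# (n = 1,612 and n = 1,737); mapping old to new categories is an assumption:
# honors -> upper 5%, high pass -> upper 25%, pass -> expected,
# fail -> below/far below expected.
from scipy.stats import chi2_contingency

before = [364, 790, 458, 0]   # honors, high pass, pass, fail
after  = [170, 716, 802, 49]  # upper 5%, upper 25%, expected, below + far below

chi2, p, dof, _ = chi2_contingency([before, after])
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.1e}")
```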


Subject(s)
Clinical Clerkship/methods , Educational Measurement/methods , Emergency Medicine/education , Humans , Indiana , Task Performance and Analysis