1.
Acad Med ; 98(11S): S90-S97, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37983401

ABSTRACT

PURPOSE: Scoring postencounter patient notes (PNs) yields significant insights into student performance, but the resource intensity of scoring limits its use. Recent advances in natural language processing (NLP) and machine learning allow automated short answer grading (ASAG) to be applied to this task. This retrospective study evaluated the psychometric characteristics and reliability of an ASAG system for PNs and the factors contributing to implementation, including feasibility and the case-specific phrase annotation required to tune the system for a new case.
METHOD: PNs from standardized patient (SP) cases within a graduation competency exam were used to train the ASAG system, applying a feed-forward neural network algorithm for scoring. Using faculty phrase-level annotation, 10 PNs per case were required to tune the ASAG system. After tuning, ASAG item-level ratings for 20 notes were compared across ASAG-faculty (4 cases, 80 pairings) and ASAG-nonfaculty (2 cases, 40 pairings) pairs. Psychometric characteristics were examined using item analysis and Cronbach's alpha. Inter-rater reliability (IRR) was examined using kappa.
RESULTS: ASAG scores demonstrated sufficient variability to differentiate learner PN performance and high IRR between machine and human ratings. Across all items, the mean ASAG-faculty kappa was .83 (SE ± .02); the ASAG-nonfaculty kappa was also .83 (SE ± .02). ASAG scoring demonstrated high item discrimination. Internal consistency reliability at the case level ranged from a Cronbach's alpha of .65 to .77. The faculty time cost to train and supervise nonfaculty raters for 4 cases was approximately $1,856; the faculty cost to tune the ASAG system was approximately $928.
CONCLUSIONS: NLP-based automated scoring of PNs demonstrated a high degree of reliability and psychometric confidence for use as learner feedback. The small number of phrase-level annotations required to tune the system to a new case enhances feasibility. ASAG-enabled PN scoring has broad implications for improving feedback in case-based learning contexts in medical education.
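Since the headline statistic here is machine-human agreement, a minimal sketch of that check may be useful. This is not the authors' pipeline; the item-level ratings are invented, and scikit-learn's cohen_kappa_score stands in for whatever kappa implementation the study used.

```python
# A minimal sketch (not the study's code) of the machine-human agreement
# check: Cohen's kappa over item-level ratings. Ratings here are invented;
# the study reported a mean kappa of .83 for ASAG-faculty pairings.
from sklearn.metrics import cohen_kappa_score

# 1 = checklist phrase credited in the note, 0 = not credited
asag_ratings    = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
faculty_ratings = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(asag_ratings, faculty_ratings)
print(f"Cohen's kappa: {kappa:.2f}")
```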


Subject(s)
Clinical Competence; Education, Medical, Undergraduate; Humans; Reproducibility of Results; Retrospective Studies; Feasibility Studies
2.
Am J Pharm Educ ; 87(5): 100066, 2023 05.
Article in English | MEDLINE | ID: mdl-37288696

ABSTRACT

OBJECTIVES: To conduct a pilot investigation of how well the didactic multimedia materials used by pharmacy faculty align with Mayer's principles of multimedia learning, and to identify faculty characteristics associated with greater alignment.
METHODS: A systematic investigatory process, using a modified Learning Object Review Instrument (LORI), was applied to evaluate faculty video-recorded lectures for alignment with Mayer's principles of multimedia learning, capturing the number and type of misalignments. Correlations were performed to evaluate the associations between faculty characteristics and both LORI ratings and proportions of misalignments.
RESULTS: Five hundred fifty-five PowerPoint slides from 13 lectures by 13 faculty members were reviewed. The average (SD) LORI score per slide was 4.44 (0.84) out of 5, with average scores per lecture ranging from 3.83 (0.96) to 4.95 (0.53). Across all lecture slides, misalignments with multimedia principles were identified in 20.2% of slides. The average percentage of misalignments per lecture was 27.6%, ranging from 0% to 49%. Principal misalignments involved violations of the principles of coherence (66.1%), signaling (15.2%), and segmenting (8%). No faculty characteristics were significantly associated with LORI ratings or the proportion of misalignments within lectures.
CONCLUSIONS: Faculty had high LORI ratings for their multimedia materials, but ratings varied significantly between lectures. The misalignments identified related primarily to extraneous processing and, when addressed, have the potential to improve learning, suggesting an opportunity for faculty development in optimizing multimedia educational delivery. Future investigation is needed to clarify how clinical pharmacy faculty can best develop multimedia material and to assess the impact of faculty development on the application of multimedia principles and on learning outcomes.
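A hedged sketch of the quantitative side of such a review follows: per-lecture misalignment rates and a rank correlation with one faculty characteristic. The counts and the "years of teaching" variable are hypothetical, not the study's data.

```python
# Hypothetical sketch of the lecture-review analysis: misalignment rate per
# lecture and a Spearman correlation with one faculty characteristic.
import numpy as np
from scipy.stats import spearmanr

misaligned_slides = np.array([12, 3, 20, 8, 15])   # per lecture (invented)
total_slides      = np.array([45, 38, 52, 40, 47])
years_teaching    = np.array([4, 15, 2, 9, 6])     # assumed characteristic

misalignment_rate = misaligned_slides / total_slides
rho, p = spearmanr(years_teaching, misalignment_rate)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```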


Subject(s)
Education, Pharmacy; Multimedia; Humans; Faculty, Pharmacy; Learning; Educational Measurement
3.
J Educ Perioper Med ; 25(1): E699, 2023.
Article in English | MEDLINE | ID: mdl-36960032

ABSTRACT

Background: The move toward telemedicine has markedly accelerated with the COVID-19 pandemic. Anesthesia residents must learn to provide preoperative assessments on a virtual platform. We created a pilot telemedicine curriculum for postgraduate year-2 (PGY2) anesthesiology residents.
Methods: The curriculum included a virtual didactic session and a simulated virtual preoperative assessment with a standardized patient (SP). A faculty member and the SP provided feedback using a checklist based on the American Medical Association Telehealth Visit Etiquette Checklist and the American Board of Anesthesiology Applied Examination Objective Structured Clinical Examination content outline. Residents completed surveys assessing their perceptions of the effectiveness and helpfulness of the didactic session and simulated encounter, as well as the cognitive workload of the encounter.
Results: A total of 12 PGY2 anesthesiology residents in their first month of clinical anesthesia residency training participated in this study. Whereas most (11/12) residents felt confident, very confident, or extremely confident in being able to conduct a telemedicine preoperative assessment after the didactic session, only 42% ensured adequate lighting and only 33% ensured patient privacy before conducting the visit. Postencounter survey comments indicated that the SP encounter was of greater value (more effective and helpful) than the didactic session. Residents perceived the encounter as demanding, but they felt successful in accomplishing it and did not feel rushed. Faculty and the SP indicated that the checklist guided them in providing clear and useful formative feedback.
Conclusions: A virtual SP encounter can augment didactics to help residents learn and practice essential telemedicine skills for virtual preoperative assessments.

4.
Med Educ ; 57(4): 349-358, 2023 04.
Article in English | MEDLINE | ID: mdl-36454138

ABSTRACT

INTRODUCTION: Engaging learners in continuing medical education (CME) is challenging. Recently, CME courses have transitioned to livestreamed CME, with learners viewing live, in-person courses online. The authors aimed to (1) compare learner engagement and teaching effectiveness in livestreamed versus in-person CME and (2) determine how livestream engagement and teaching effectiveness are associated with (A) interactivity metrics, (B) presentation characteristics and (C) medical knowledge.
METHODS: A 3-year, non-randomised study of in-person and livestream CME was performed. The course was in-person in 2018 but transitioned to livestream in 2020 and 2021. Learners completed the Learner Engagement Inventory and Teaching Effectiveness Instrument after each presentation. Both instruments were supported by validity evidence from content, internal structure and relations to other variables. Interactivity metrics included learner use of audience response, questions asked by learners and presentation views. Presentation characteristics included use of audience response, use of a pre/post-test format, time of day and words per slide. Medical knowledge was assessed by audience response. A repeated measures analysis of variance (ANOVA) was used for comparisons and a mixed model approach for correlations.
RESULTS: A total of 159 learners (response rate 27%) completed questionnaires. Engagement did not differ significantly between in-person and livestream CME (4.56 versus 4.53, p = 0.64; maximum 5 = highly engaged). However, teaching effectiveness scores were higher for in-person compared with livestream CME (4.77 versus 4.71, p = 0.01; maximum 5 = highly effective). For livestreamed courses, learner engagement was associated with presentation characteristics, including use of audience response (yes = 4.57, no = 4.45, p < .0001), use of a pre/post-test (yes = 4.62, no = 4.54, p < .0001) and time of presentation (morning = 4.58, afternoon = 4.53, p = .0002). Significant associations were not seen for interactivity metrics or medical knowledge.
DISCUSSION: Livestreaming may be as engaging as in-person CME. Although teaching effectiveness in livestreaming was lower, the difference was small. CME course planners should consider offering livestream CME while exploring strategies to enhance teaching effectiveness in livestreamed settings.
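The format comparison rests on a repeated-measures ANOVA. A minimal sketch using statsmodels' AnovaRM is below; the learners, ratings, and within-subject design (each learner rating both formats) are assumptions for illustration only.

```python
# Sketch of a repeated-measures comparison of engagement across delivery
# formats. Learners and ratings are invented; the design assumes each
# learner rated both formats exactly once.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "learner":    [1, 1, 2, 2, 3, 3, 4, 4],
    "format":     ["in_person", "livestream"] * 4,
    "engagement": [4.6, 4.5, 4.4, 4.5, 4.7, 4.6, 4.5, 4.5],
})

result = AnovaRM(data, depvar="engagement", subject="learner",
                 within=["format"]).fit()
print(result)
```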


Subject(s)
Education, Medical, Continuing; Teaching; Humans; Surveys and Questionnaires
5.
Simul Healthc ; 18(6): 351-358, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-36111989

ABSTRACT

INTRODUCTION: Simulation-based education is a recognized way of developing medical competencies, and there is overwhelming scientific evidence to support its efficacy. However, it is still underused, which can often be traced to a poor implementation process. In addition, best practices for implementing simulation-based courses, grounded in implementation science, are not widely known or applied. The purpose of this study was to develop a rubric, the Implementation Quality Rubric for Simulation (IQR-SIM), to evaluate the implementation quality of simulation-based courses.
METHODS: A 3-round, modified Delphi process involving international simulation and implementation experts was initiated to gather and converge opinions regarding criteria for evaluating the implementation quality of simulation-based courses. Candidate items for Round 1 were developed based on the Adapted Implementation Model for Simulation. Items were revised and expanded to include descriptive anchors for evaluation in Round 2. The criterion for inclusion was 70% of respondents selecting an importance rating of 4 or 5 out of 5. Round 3 provided refinement and final approval of items and anchors.
RESULTS: Thirty-three experts from 9 countries participated. The initial rubric of 32 items was reduced to 18 items after 3 Delphi rounds, resulting in the IQR-SIM: a 3-point rating scale, with nonscored options "Don't know/can't assess" and "Not applicable," and a comments section.
CONCLUSIONS: The IQR-SIM is an operational tool that can be used to evaluate the implementation quality of simulation-based courses and to aid the implementation process: identifying gaps, monitoring progress, and promoting the achievement of desired implementation and learning outcomes.
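The Round 2 retention rule is simple enough to state in code. A toy version, with invented expert ratings and hypothetical item names:

```python
# Toy version (invented ratings) of the Delphi retention rule: keep an item
# if >= 70% of experts rate its importance 4 or 5 on a 5-point scale.
ratings_by_item = {
    "prebriefing plan documented": [5, 4, 5, 3, 4, 5, 4, 5, 4, 4],
    "catering arrangements":       [2, 3, 1, 4, 2, 3, 2, 5, 3, 2],
}

for item, ratings in ratings_by_item.items():
    support = sum(r >= 4 for r in ratings) / len(ratings)
    verdict = "retain" if support >= 0.70 else "drop"
    print(f"{item}: {support:.0%} rated 4-5 -> {verdict}")
```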


Subject(s)
Learning; Humans; Delphi Technique; Consensus
6.
Acad Med ; 97(11S): S15-S21, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35947475

ABSTRACT

PURPOSE: Post-standardized patient (SP) encounter patient notes used to assess students' clinical reasoning represent a significant time burden for the faculty who traditionally score them. To reduce this burden, the authors previously reported a complex faculty-developed scoring method for patient notes rated by nonclinicians. The current study explored whether a simplified scoring procedure for nonclinician raters could further optimize patient note assessment by reducing time and cost and creating additional opportunities for formative feedback.
METHOD: Ten nonclinician raters scored the patient notes of 141 students across 5 SP cases by identifying case-specific patient note checklist items. The authors identified the bottom quintile of students using the proportion of correct items identified in the note (percent-scores) and case-specific faculty-generated scoring formulas (formula-scores). Five faculty raters scored a subset of notes from low-performing, borderline, and high-performing students (n = 30 students) using a global rating scale. The authors performed analyses to gather validity evidence for percent-scores (i.e., relationship to other variables), investigate their reliability (i.e., generalizability study), and evaluate their costs (i.e., faculty time).
RESULTS: Nonclinician percent- and formula-scores were highly correlated (r = .88) and identified similar lists of low-performing students. Both methods demonstrated good agreement for pass-fail determinations with each other (kappa = .68) and with faculty global ratings (kappa percent = .61; kappa formula = .66). The G-coefficient of percent-scores was .52, with 38% of variability attributed to checklist items nested in cases. Using percent-scores saved an estimated $746 per SP case (including 6 hours of faculty time) in development costs over formula-scores.
CONCLUSIONS: Nonclinician percent-scores reliably identified low-performing students without the need for complex faculty-developed scoring formulas. Combining nonclinician analytic ratings with faculty holistic ratings can reduce the time and cost of patient note scoring and afford faculty more time to coach at-risk students and provide targeted assessment input for high-stakes summative exams.
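A minimal sketch of the simplified percent-score procedure: score each note as the share of checklist items identified, then flag the bottom quintile for faculty review. The checklist matrix and its dimensions are invented stand-ins.

```python
# Sketch (invented data) of the simplified percent-score procedure: each
# note's score is the share of case checklist items identified; the bottom
# quintile is flagged for faculty rescoring.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 141, 25                       # shapes assumed
items_found = rng.integers(0, 2, size=(n_students, n_items))

percent_scores = items_found.mean(axis=1)
cutoff = np.quantile(percent_scores, 0.20)          # bottom-quintile cut
flagged = np.flatnonzero(percent_scores <= cutoff)
print(f"{flagged.size} students flagged for faculty review")
```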


Subject(s)
Clinical Reasoning; Educational Measurement; Humans; Educational Measurement/methods; Clinical Competence; Reproducibility of Results; Problem Solving
7.
J Surg Educ ; 79(5): 1270-1281, 2022.
Article in English | MEDLINE | ID: mdl-35688704

ABSTRACT

OBJECTIVES: Well-developed mental representations of a task are fundamental to proficient performance. "Video Commentary" (VC) is a novel assessment intended to measure mental representations of surgical tasks that would reflect an important aspect of task proficiency. Whether examinees' actual response processes align with this intent remains unknown. As part of the ongoing validation of the assessment, we sought to understand examinees' response processes in VC.
DESIGN: Grounded theory qualitative study. In 2019, residents were interviewed about their understanding of and approach to VC. Using grounded theory, we created a theoretical model explaining relationships among the factors that influence residents' response processes and performance. Residents' perceived purpose of VC was also explored using Likert-type questions.
SETTING: Academic surgical residency program.
PARTICIPANTS: Forty-eight surgical residents (PGY-1 to PGY-5).
RESULTS: Analysis of narrative comments indicated that residents' perceived purposes of VC generally align with the educator's intent. Resident response processes are influenced by test characteristics, residents' perception and understanding of VC, and residents' personal characteristics. Four strategies seem to guide how residents respond: a focus on speed, points, logic, and relevance. Quantitative results indicated residents believe VC scores reflect their ability to speak quickly, ability to think quickly, and knowledge of anatomy (mean = 5.0, 4.5, and 4.4, respectively [1 = strongly disagree, 6 = strongly agree]). PGY-1 and PGY-2 residents tend to focus on naming facts, whereas PGY-4 and PGY-5 residents focus on providing comprehensive descriptions.
CONCLUSIONS: Residents generally have an accurate understanding of the purpose of VC. However, their use of different approaches could represent a threat to validity. The response strategies of speed, points, logic, and relevance may inform other clinical skills assessments.


Subject(s)
General Surgery; Internship and Residency; Clinical Competence; Educational Measurement/methods; General Surgery/education; Humans; Longitudinal Studies; Qualitative Research
8.
Acad Med ; 97(4): 477-478, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35353732
10.
Am J Surg ; 223(5): 905-911, 2022 05.
Article in English | MEDLINE | ID: mdl-34399979

ABSTRACT

BACKGROUND: A formative hepato-pancreato-biliary (HPB) ultrasound (US) skills practicum is offered annually to graduating HPB fellows, using entrustment assessments for open (IOUS) and laparoscopic (LAPUS) US. It was hypothesized that validity evidence would support the use of these assessments to determine whether graduating fellows are well prepared to perform HPB US independently.
METHODS: Expert faculty were surveyed to set Mastery Entrustment standards for fellow performance. The standards were applied to fellow performances during two annual US skills practicums.
RESULTS: Eleven faculty questionnaires were included. Mean Entrustment cut scores across all items were 4.9/5.0 and 4.8/5.0, and Global Entrustment cut scores were 5.0/5.0 and 4.8/5.0, for IOUS and LAPUS, respectively. Of the fellows, 78.5% (29/37) agreed to have their de-identified data evaluated. Mean fellow Entrustments (across all skills) were 4.1 (SD 0.6; range 2.6-4.9) and 3.9 (SD 0.7; range 2.7-5), and Global Entrustments were 3.6 (SD 0.8; range 2-5) and 3.5 (SD 1.0; range 2-5), for IOUS and LAPUS, respectively.
CONCLUSIONS: Two cohorts of graduating HPB fellows did not meet the Mastery Standards for HPB US performance set by a panel of expert faculty.


Subject(s)
Biliary Tract Surgical Procedures; Biliary Tract; Digestive System Surgical Procedures; Laparoscopy; Humans
11.
Acad Med ; 96(11S): S151-S157, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34348372

ABSTRACT

PURPOSE: With the growing importance of professionalism in medical education, it is imperative to develop professionalism assessments that demonstrate robust validity evidence. The Professionalism Mini-Evaluation Exercise (P-MEX) is an assessment that has demonstrated validity evidence in the authentic clinical setting. Identifying the factorial structure of a professionalism assessment reveals the professionalism constructs that can be used to provide diagnostic and actionable feedback. This study examines validity evidence for the P-MEX, a focused and standardized assessment of professionalism, in a simulated patient setting.
METHOD: The P-MEX was administered to 275 pediatric residency applicants as part of a 3-station standardized patient encounter, pooling data over an 8-year period (2012 to 2019 residency admission years). Reliability and construct validity of the P-MEX were evaluated using Cronbach's alpha, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).
RESULTS: Cronbach's alpha for the P-MEX was 0.91. The EFA yielded 4 factors: doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. The CFA demonstrated good model fit, with a root-mean-square error of approximation of .058 and a comparative fit index of .92, confirming the reproducibility of the 4-factor structure.
CONCLUSIONS: The P-MEX demonstrates construct validity as an assessment of professionalism, with 4 underlying subdomains: doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. These results support providing diagnostic and actionable subscores within the P-MEX assessment. Educators may wish to integrate the P-MEX into their professionalism curricula.
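Cronbach's alpha carries the reliability claim here, and the usual formula is compact enough to show in full. The score matrix below is random, so the printed alpha will be low; only the examinee count loosely mirrors the study, and the item count and scale are assumptions.

```python
# Minimal Cronbach's alpha (rows = examinees, columns = items). The score
# matrix is invented; the study reported alpha = 0.91 for the P-MEX.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
demo = rng.integers(1, 6, size=(275, 20)).astype(float)  # hypothetical 1-5 scale
print(f"alpha = {cronbach_alpha(demo):.2f}")             # low for random data
```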


Subject(s)
Education, Medical, Undergraduate/methods; Pediatrics/education; Professionalism; Adult; Education, Medical, Graduate; Educational Measurement; Female; Humans; Internship and Residency; Male; Patient Simulation; Reproducibility of Results
12.
J Am Coll Surg ; 233(4): 545-553, 2021 10.
Article in English | MEDLINE | ID: mdl-34384872

ABSTRACT

BACKGROUND: Professionalism is a core competency that is difficult to assess. We examined the incidence of publication inaccuracies in Electronic Residency Application Service applications to our training program as potential indicators of unprofessional behavior.
STUDY DESIGN: We reviewed all 2019-2020 National Resident Matching Program applicants being considered for interview. Applicant demographic characteristics recorded included standardized examination scores, gender, medical school, and medical school ranking (2019 US News & World Report). Publication verification by a medical librarian was performed for peer-reviewed journal articles/abstracts, peer-reviewed book chapters, and peer-reviewed online publications. Inaccuracies were classified as "nonserious" (eg, incorrect author order without author rank promotion) or "serious" (eg, miscategorization, non-peer-reviewed journal, incorrect author order with author rank promotion, nonauthorship of a cited existing publication, or an unverifiable publication). Multivariate logistic regression analysis was performed on demographic characteristics to identify predictors of overall inaccuracy and serious inaccuracy.
RESULTS: Of 319 applicants, 48 (15%) had a total of 98 inaccuracies; after removing nonserious inaccuracies, 37 (12%) with serious inaccuracies remained. Seven publications were reported in predatory open-access journals. In the regression model, none of the variables (US vs non-US medical school, gender, or medical school ranking) was significantly associated with overall inaccuracy or serious inaccuracy.
CONCLUSIONS: One in 8 applicants (12%) interviewing at a general surgery residency program was found to have a serious inaccuracy in publication reporting on their Electronic Residency Application Service application. These inaccuracies might represent inattention to detail or professionalism transgressions.
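A sketch of the predictor analysis: logistic regression of serious-inaccuracy status on applicant characteristics. All data, codings, and the roughly 12% event rate below are invented for illustration; statsmodels' Logit stands in for whatever software the authors used.

```python
# Sketch (invented data) of the predictor analysis: logistic regression of
# serious-inaccuracy status on applicant characteristics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 319
X = np.column_stack([
    rng.integers(0, 2, n),     # US (1) vs non-US (0) medical school
    rng.integers(0, 2, n),     # gender, coded 0/1
    rng.integers(1, 100, n),   # medical school ranking
]).astype(float)
y = (rng.random(n) < 0.12).astype(float)   # ~12% serious-inaccuracy rate

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(fit.params)                          # log-odds coefficients
```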


Subject(s)
Data Accuracy; General Surgery/education; Internship and Residency/statistics & numerical data; Job Application; Female; Humans; Male; Professionalism; Publications/statistics & numerical data
13.
Acad Med ; 96(9): 1250-1253, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34133347

ABSTRACT

The unexpected discontinuation of the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam in January 2021 carries both risks and opportunities for medical education in the United States. Step 2 CS had far-reaching effects on medical school curricula and school-based clinical skills assessments. Absent the need to prepare students for this high-stakes exam, will the rigor of foundational clinical skills instruction and assessment remain a priority at medical schools? In this article, the authors consider the potential losses and gains from the elimination of Step 2 CS and explore opportunities to expand local summative assessments beyond the narrow bounds of Step 2 CS. The responsibility for implementing a rigorous and credible summative assessment of clinical skills that are critical for patient safety as medical students transition to residency now lies squarely with medical schools. Robust human simulation (standardized patient) programs, including regional and virtual simulation consortia, can provide infrastructure and expertise for innovative and creative local assessments to meet this need. Novel applications of human simulation and traditional formative assessment methods, such as workplace-based assessments and virtual patients, can contribute to defensible summative decisions about medical students' clinical skills. The need to establish validity evidence for decisions based on these novel assessment methods comprises a timely and relevant focus for medical education research.


Subject(s)
Clinical Competence/standards; Curriculum/standards; Education, Medical/standards; Educational Measurement/standards; Schools, Medical/standards; Humans; Internship and Residency/standards; United States
14.
Virchows Arch ; 479(4): 803-813, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33966099

ABSTRACT

Competency-based medical education (CBME) is being implemented worldwide. In CBME, residency training is designed around the competencies required for unsupervised practice and uses entrustable professional activities (EPAs) as workplace "units of assessment". Well-designed workplace-based assessment (WBA) tools are required to document the competence of trainees in authentic clinical environments. In this study, we developed a WBA instrument to assess residents' performance of intra-operative pathology consultations and conducted a validity investigation. The entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC) was developed through a national iterative consultation and was used by clinical supervisors to assess residents' performance in an anatomical pathology program. Psychometric analyses and focus groups were conducted to explore the sources of validity evidence described by modern validity theory: content, response process, internal structure, relations to other variables, and consequences of assessment. The content was considered appropriate, the assessment was feasible and acceptable to residents and supervisors, and it had a positive educational impact by improving the performance of intra-operative consultations and feedback to learners. The results had low reliability, which seemed to be related to assessment biases, and supervisors were reluctant to fully entrust trainees due to cultural issues. With CBME implementation, new workplace-based assessment tools are needed in pathology. In this study, we showcased the development of the first instrument for assessing residents' performance of a prototypical entrustable professional activity in pathology using modern education principles and validity theory.


Subject(s)
Competency-Based Education/methods; Education, Medical/methods; Employee Performance Appraisal/methods; Clinical Competence; Education, Medical, Graduate/methods; Humans; Learning; Referral and Consultation; Reproducibility of Results; Workplace
15.
JAMA Surg ; 156(6): 535-540, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33759997

ABSTRACT

Importance: The sociopolitical and cultural context of graduate surgical education has changed considerably over the past 2 decades. Although new structures of graduate surgical training programs have been developed in response, and the comparative value of different formats is continually debated, it remains unclear how different time-based structural paradigms prepare trainees for independent practice after program completion.
Objective: To investigate the factors associated with trainees' and program directors' perceptions of trainee preparedness for independent surgical practice.
Design, Setting, and Participants: This qualitative study used an instrumental case study approach and obtained information through semistructured interviews, which were analyzed using open and focused coding. Participants were recent graduates and program directors of vascular surgery training programs in the United States. The 2 training paradigms analyzed were the integrated vascular surgery residency program (0 + 5, with 0 indicating that general surgery training experiences are fully integrated into the 5 years of overall training and 5 indicating the total number of years of training) and the traditional vascular surgery fellowship program (5 + 2, with 5 indicating the number of years of general surgery training and 2 indicating the number of years of vascular surgery training). All graduates completed their training in 2018. All interviews were conducted between July 1, 2018, and September 30, 2018.
Main Outcomes and Measures: A conceptual framework to inform current and ongoing efforts to optimize graduate surgical training programs across specialties.
Results: A total of 22 semistructured interviews were completed, involving 7 graduates of 5 + 2 programs, 9 graduates of 0 + 5 programs, and 6 vascular surgery program directors. Of the 22 participants, 15 were men (68%). Participants described 4 interconnected domains associated with trainees' perceived preparedness for practice: structural, individual, relational, and organizational. Structural factors included the overall and vascular surgery-specific time spent in training, whereas individual factors included innate technical skill, confidence, maturity, and motivation. Faculty-trainee relationships (relational factors) were deemed important for building trust and granting autonomy. Organizational factors included features of the local organization, such as patient population, case volume, and case mix.
Conclusions and Relevance: Findings suggest that restructuring training paradigms alone is insufficient to address trainees' perceived preparedness for practice. A framework was created from the results for evaluating and improving residency and fellowship programs and for developing graduate surgical training paradigms that incorporate all 4 domains associated with preparedness.


Subject(s)
Clinical Competence; Education, Medical, Graduate/organization & administration; Internship and Residency/organization & administration; Specialties, Surgical/education; Attitude of Health Personnel; Career Choice; Humans; Qualitative Research; Self Concept; United States
16.
Acad Med ; 96(8): 1079-1080, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-36047866
17.
West J Nurs Res ; 43(3): 250-260, 2021 03.
Article in English | MEDLINE | ID: mdl-33073733

ABSTRACT

Health care errors are a national concern. Although considerable attention has been placed on reducing errors since a 2000 Institute of Medicine report, adverse events persist. The purpose of this pilot study was to evaluate the effect of mindfulness training, using the standardized approach of an eight-week mindfulness-based stress reduction program, on the reduction of nurse errors in simulated clinical scenarios. An experimental, pre- and post-test control group design was employed with 20 staff nurses and senior nursing students. Although not statistically significant, there were numerical differences in clinical performance scores from baseline when comparing the mindfulness and control groups immediately following mindfulness training and after three months. A number of benefits of mindfulness training, such as improved listening skills, were also identified. This pilot study supports the benefits of mindfulness training in improving nurse clinical performance and illustrates a novel approach to employ in future research.


Subject(s)
Mental Disorders; Mindfulness; Students, Nursing; Humans; Pilot Projects; Stress, Psychological/prevention & control
18.
Acad Med ; 94(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 58th Annual Research in Medical Education Sessions): S21-S27, 2019 11.
Article in English | MEDLINE | ID: mdl-31663941

ABSTRACT

PURPOSE: Clinical reasoning is often assessed through patient notes (PNs) written after standardized patient (SP) encounters. While nonclinicians can score PNs using analytic tools such as checklists, these tools do not sufficiently encompass the holistic judgments of clinician faculty. To better model faculty judgments, the authors developed checklists with faculty-specified scoring formulas embedded in spreadsheets and studied the resulting interrater reliability (IRR) of nonclinician raters (SPs and medics) and student pass/fail status.
METHOD: In Study 1, nonclinician and faculty raters rescored the PNs of 55 third-year medical students across 5 cases of the 2017 Graduation Competency Examination (GCE) to determine IRR. In Study 2, nonclinician raters scored all notes of the 5-case 2018 GCE (178 students). Faculty rescored all notes of failing students and could modify formula-derived scores where they felt it appropriate. Faculty also rescored and corrected the scores of additional notes, for a total of 90 notes (3 cases, including failing notes).
RESULTS: Mean overall percent exact agreement between nonclinician and faculty ratings was 87% (weighted kappa, 0.86) and 83% (weighted kappa, 0.88) for Study 1 and Study 2, respectively. SP and medic IRRs did not differ significantly. Four students failed the note section in 2018; 3 passed after faculty corrections. Few corrections were made to nonfailing students' notes.
CONCLUSIONS: Nonclinician PN raters using checklists and scoring rules may provide a feasible alternative to faculty raters for low-stakes assessments and for the bulk of well-performing students. Faculty effort can be targeted strategically at rescoring the notes of low-performing students and providing more detailed feedback.
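A sketch of the agreement analysis between nonclinician and faculty scores: percent exact agreement plus a weighted kappa. The scores are invented, and quadratic weights are an assumption; the abstract does not state the weighting scheme used.

```python
# Sketch (invented scores) of the rater-agreement analysis: percent exact
# agreement and a weighted kappa between nonclinician and faculty note
# scores (the study reported ~87% agreement, weighted kappa 0.86).
import numpy as np
from sklearn.metrics import cohen_kappa_score

nonclinician = np.array([3, 4, 2, 5, 4, 3, 4, 2, 5, 3])
faculty      = np.array([3, 4, 2, 4, 4, 3, 4, 3, 5, 3])

exact = (nonclinician == faculty).mean()
kappa_w = cohen_kappa_score(nonclinician, faculty, weights="quadratic")
print(f"exact agreement = {exact:.0%}, weighted kappa = {kappa_w:.2f}")
```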


Subject(s)
Clinical Competence/standards; Clinical Decision-Making; Documentation/standards; Education, Medical, Undergraduate/methods; Medical History Taking/statistics & numerical data; Medical History Taking/standards; Students, Medical/statistics & numerical data; Adult; Checklist; Clinical Competence/statistics & numerical data; Educational Measurement; Female; Humans; Male; Middle Aged; Problem Solving; Reproducibility of Results
19.
Acad Med ; 94(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 58th Annual Research in Medical Education Sessions): S57-S63, 2019 11.
Article in English | MEDLINE | ID: mdl-31365408

ABSTRACT

PURPOSE: The residency admissions process is a high-stakes assessment system whose purpose is to identify the applicants who best meet the standards of the residency program and the medical specialty. Prior studies have found that professionalism issues contribute significantly to residents experiencing difficulty during training. This study examines the reliability (internal structure) and predictive (relations to other variables) validity evidence for a standardized patient (SP)-based Professionalism Mini-Evaluation Exercise (P-MEX) using longitudinal data on pediatrics candidates from admission to the end of the first year of postgraduate training.
METHOD: Data from 5 cohorts from 2012 to 2016 (195 invited applicants) were analyzed from the University of Geneva (Switzerland) Pediatrics Residency Program. Generalizability theory was used to examine the reliability and variance components of the P-MEX scores, gathered across 3 cases. Correlations and mixed-effects regression analyses were used to examine the predictive utility of SP-based P-MEX scores (gathered as part of the admissions process) for rotation evaluation scores (obtained during the first year of residency).
RESULTS: Generalizability was moderate (G coefficient = 0.52). Regression analyses relating P-MEX scores to first-year rotation evaluations indicated significant standardized effect sizes for attitude and personality (ß = 0.36, P = .02), global evaluation (ß = 0.27, P = .048), and total evaluation scores (ß = 0.34, P = .04).
CONCLUSIONS: Validity evidence supports the use of P-MEX scores as part of the admissions process to assess professionalism. P-MEX scores provide a snapshot of an applicant's level of professionalism and may predict performance during the first year of residency.
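For the generalizability estimate, a rough self-contained sketch of a fully crossed persons-by-cases analysis follows, using the standard two-way ANOVA (no replication) variance components. The score matrix is invented; only its shape mirrors the study.

```python
# Rough sketch of a generalizability analysis for a fully crossed
# persons-by-cases (p x c) design with one score per cell; the study
# reported G = 0.52 over 3 cases. Scores below are invented.
import numpy as np

def relative_g(scores: np.ndarray) -> float:
    """scores: rows = applicants (p), columns = cases (c)."""
    n_p, n_c = scores.shape
    grand = scores.mean()
    ms_p = n_c * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_c - 1))
    var_p = max((ms_p - ms_res) / n_c, 0.0)   # person variance component
    return var_p / (var_p + ms_res / n_c)     # G for the c-case average

rng = np.random.default_rng(3)
demo = rng.normal(3.5, 0.5, size=(195, 3))    # shape mirrors the study
print(f"G = {relative_g(demo):.2f}")          # near 0 for pure-noise data
```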


Subject(s)
Clinical Competence/standards; Educational Measurement/standards; Internship and Residency/standards; Pediatrics/standards; Professionalism/standards; School Admission Criteria; Adult; Cohort Studies; Female; Humans; Male; Reproducibility of Results; Switzerland; Young Adult
20.
AEM Educ Train ; 3(1): 39-49, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30680346

ABSTRACT

BACKGROUND: The Emergency Medicine (EM) Milestone Project provides guidance for the assessment of resident trainees' airway management proficiency (PC10). Although the milestones provide a general structure for assessment, they do not define performance standards. The objective of this project was to establish comprehensive airway management performance standards for EM trainees at both novice and mastery levels of proficiency.
METHODS: Comprehensive airway management standards were derived using standard-setting procedures. A panel of residency education and airway management experts was convened to determine how trainees would be expected to perform on 51 individual tasks in a standardized airway management simulation encompassing preparation, endotracheal intubation, backup airway use, and ventilation. Experts participated in facilitated exercises in which they were asked to 1) define which items were critical for patient safety, 2) predict the performance of a "novice" learner, and 3) predict the performance of a "mastery" learner nearing independent practice. Experts were given a worksheet to complete, and descriptive statistics were calculated using Stata 14.
RESULTS: Experts identified 39 of 51 (76%) airway management items as critical for patient safety. Experts also noted that novice trainees need not complete all the critical items before starting practice, since they will be supervised by a board-certified EM physician. In contrast, mastery-level trainees would be expected to successfully complete not only the critical tasks but also nearly all the items in the assessment (49/51, 96%), since they are nearing independent practice.
CONCLUSION: In this study, we established EM resident performance standards for comprehensive airway management during a simulation scenario. Future work will focus on validating these performance standards with current resident trainees as they move from simulation to actual patient care.
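A toy version of the panel's tally may make the two standards concrete: each task carries a critical-for-safety judgment and a mastery expectation. The items and judgments below are hypothetical, not the study's 51-item instrument.

```python
# Toy version (invented items) of the standard-setting tally: which tasks
# are critical for safety, and what fraction a mastery-level trainee is
# expected to complete.
item_judgments = {   # task -> (critical_for_safety, expected_at_mastery)
    "checks suction before induction": (True,  True),
    "verbalizes backup airway plan":   (True,  True),
    "optimizes bed height":            (False, True),
    "narrates every step aloud":       (False, False),
}

critical = sum(c for c, _ in item_judgments.values())
mastery = sum(m for _, m in item_judgments.values())
n = len(item_judgments)
print(f"critical for safety: {critical}/{n} ({critical / n:.0%})")
print(f"expected at mastery: {mastery}/{n} ({mastery / n:.0%})")
```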
