1.
Med Teach ; 46(3): 349-358, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37688773

ABSTRACT

PURPOSE: The purpose of this study was to enrich understanding of the perceived benefits and drawbacks of constructed response short-answer questions (CR-SAQs) in preclerkship assessment, using Norcini's criteria for good assessment as a framework. METHODS: This multi-institutional study surveyed students and faculty at three institutions. A survey using Likert-scale and open-ended questions was developed to evaluate faculty and student perceptions of CR-SAQs against the criteria for good assessment. Descriptive statistics and chi-square analyses were computed, and open responses were analyzed using directed content analysis to describe the benefits and drawbacks of CR-SAQs. RESULTS: A total of 260 students (19%) and 57 faculty (48%) completed the survey. Students and faculty reported that the benefits of CR-SAQs are authenticity, deeper learning (educational effect), and receiving feedback (catalytic effect). Drawbacks included feasibility, construct validity, and scoring reproducibility. Students and faculty found CR-SAQs to be both acceptable (students can show their reasoning and earn partial credit) and unacceptable (stressful, not USMLE format). CONCLUSIONS: CR-SAQs are a method of aligning innovative curricula with assessment and could enrich the assessment toolkit for medical educators.
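As a concrete illustration of the chi-square comparison this abstract describes, the sketch below tests whether student and faculty Likert responses differ; the counts and collapsed categories are synthetic stand-ins, not the study's data.

```python
# Chi-square test comparing student vs. faculty Likert responses
# (synthetic counts; illustrative only, not the study's data).
from scipy.stats import chi2_contingency

# Rows: respondent group; columns: Likert responses collapsed to
# agree / neutral / disagree
observed = [
    [140, 60, 60],  # students (n = 260)
    [35, 12, 10],   # faculty  (n = 57)
]

chi2, p, dof, _expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```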


Subject(s)
Education, Medical, Undergraduate , Students, Medical , Humans , Curriculum , Faculty , Learning , Reproducibility of Results
2.
Med Sci Educ ; 33(5): 1197-1204, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37886271

ABSTRACT

Purpose: Given the significance of the US Medical Licensing Exam (USMLE) Step 1 score moving from a 3-digit value to pass/fail, the authors investigated the impact of the change on students' anxiety, approach to learning, and curiosity. Method: Two cohorts of pre-clerkship medical students at three medical schools completed a composite of four instruments: the State-Trait Anxiety Inventory (STAI), the revised two-factor Study Process Questionnaire, the Interest/Deprivation Type Epistemic Curiosity Scale, and the Short Grit Scale, prior to taking either the last 3-digit scored Step 1 in 2021 or the first pass/fail scored Step 1 in 2022. Responses of 3-digit and pass/fail exam takers were compared (Mann-Whitney U), and multiple regression path analysis was performed to determine the factors that significantly affected learning strategies. Results: There was no difference between 3-digit (n = 86) and pass/fail exam takers (n = 154) in anxiety (STAI scores, 50 vs. 49, p = 0.85), shallow learning strategies (22 vs. 23, p = 0.84), or interest curiosity scores (median scores 15 vs. 15, p = 0.07). However, pass/fail exam takers had lower deprivation curiosity scores (median 12 vs. 11, p = 0.03) and showed a decline in deep learning strategies (30 vs. 27, p = 0.0012). Path analysis indicated that the decline in deep learning strategies was due to the change in exam scoring (β = -2.0428, p < 0.05). Conclusions: Counter to the stated hypothesis and intentions, the change to pass/fail grading for USMLE Step 1 did not reduce learner anxiety, and it was accompanied by reduced deprivation curiosity and less use of deep learning strategies. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-023-01878-w.
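The cohort comparison above rests on the Mann-Whitney U test; a minimal sketch on synthetic scores (the two arrays stand in for the cohorts' deep-learning-strategy totals, not the published data):

```python
# Mann-Whitney U comparison of two independent cohorts
# (synthetic deep-learning-strategy scores; illustrative only).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
scored_2021 = rng.normal(30, 4, 86)     # last 3-digit-scored cohort
passfail_2022 = rng.normal(27, 4, 154)  # first pass/fail cohort

u, p = mannwhitneyu(scored_2021, passfail_2022, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.4f}")
```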

3.
Teach Learn Med ; 35(5): 609-622, 2023.
Article in English | MEDLINE | ID: mdl-35989668

ABSTRACT

PROBLEM: Some medical schools have incorporated constructed response short answer questions (CR-SAQs) into their assessment toolkits. Although CR-SAQs carry benefits for medical students and educators, the faculty perception that the amount of time required to create and score CR-SAQs is not feasible, along with concerns about reliable scoring, may impede the use of this assessment type in medical education. INTERVENTION: Three US medical schools collaborated to write and score CR-SAQs based on a single vignette. Study participants included faculty question writers (N = 5) and three groups of scorers: faculty content experts (N = 7), faculty non-content experts (N = 6), and fourth-year medical students (N = 7). Structured interviews were performed with question writers, and an online survey was administered to scorers to gather information about their process for creating and scoring CR-SAQs. A content analysis was performed on the qualitative data using Bowen's model of feasibility as a framework. To examine inter-rater reliability between the content experts and other scorers, a random selection of fifty student responses from each site was scored by each site's faculty content experts, faculty non-content experts, and student scorers. A holistic rubric (6-point Likert scale) was used by two schools and an analytic rubric (3-4-point checklist) by one school. Cohen's weighted kappa (κw) was used to evaluate inter-rater reliability. CONTEXT: This research study was implemented at three US medical schools that are nationally dispersed and have been administering CR-SAQ summative exams as part of their programs of assessment for at least five years. The study exam question was included in an end-of-course summative exam during the first year of medical school. IMPACT: Five question writers (100%) participated in the interviews and twelve scorers (60% response rate) completed the survey. Qualitative comments revealed three aspects of feasibility: practicality (time, institutional culture, teamwork), implementation (steps in the question writing and scoring process), and adaptation (feedback, rubric adjustment, continuous quality improvement). The scorers described their experience in terms of the need for outside resources, concern about lack of expertise, and value gained through scoring. Inter-rater reliability between the faculty content expert and student scorers was fair/moderate (κw = .34-.53, holistic rubrics) or substantial (κw = .67-.76, analytic rubric), but much lower between faculty content and non-content experts (κw = .18-.29, holistic rubrics; κw = .59-.66, analytic rubric). LESSONS LEARNED: Our findings show that, from the faculty perspective, it is feasible to include CR-SAQs in summative exams, and we provide practical information for medical educators creating and scoring CR-SAQs. We also learned that CR-SAQs can be reliably scored by faculty without content expertise or by senior medical students using an analytic rubric, or by senior medical students using a holistic rubric, which provides options to alleviate the faculty burden associated with grading CR-SAQs.
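Cohen's weighted kappa, the reliability statistic reported above, is straightforward to compute; a minimal sketch assuming two raters and a 6-point holistic rubric (the ratings and the choice of quadratic weighting are illustrative assumptions, since the study does not publish its raw scores):

```python
# Inter-rater reliability via Cohen's weighted kappa
# (synthetic rubric scores; illustrative only).
from sklearn.metrics import cohen_kappa_score

# Two raters scoring the same ten responses on a 6-point holistic rubric
content_expert = [5, 4, 6, 3, 5, 2, 4, 5, 6, 3]
student_scorer = [5, 3, 6, 3, 4, 2, 4, 5, 5, 3]

# Quadratic weighting penalizes large disagreements more heavily;
# linear weighting is the common alternative.
kw = cohen_kappa_score(content_expert, student_scorer, weights="quadratic")
print(f"weighted kappa = {kw:.2f}")
```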


Subject(s)
Educational Measurement , Students, Medical , Humans , Reproducibility of Results , Feasibility Studies , Learning
4.
Med Educ Online ; 27(1): 2114864, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36062838

ABSTRACT

Despite the many clerkship models of medical education, all can be considered a form of experiential learning. Experiential learning is a complex pedagogical approach involving the development of cognitive skills in an environment with a unique culture and multiple stakeholders, which may impact learner motivation, confidence, and other noncognitive drivers of success. Students may delay the transition to the clerkship year for myriad reasons, and the intricate nature of experiential learning suggests such a delay may affect student performance. This retrospective, observational study investigated the impact of clerkship postponement on subsequent clerkship performance. Pre-clerkship and third-year clerkship performance were analyzed for three cohorts of students (classes of 2018, 2019, and 2020; N = 274) in which students had the option to delay the start of their clerkship year. A mixed analysis of variance (ANOVA) and paired t-tests were conducted to compare academic performance over time among students who did and did not delay. Across the three cohorts, 12% of students delayed the start of the clerkship year (N = 33). Regardless of prior academic performance, these students experienced a significant reduction in clerkship grades compared to their non-delaying peers. Delaying the start of the clerkship year may have durable negative effects on future academic performance, a finding that should inform student advisement.
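A mixed ANOVA of the shape described (between-subjects factor: delayed vs. on-time; within-subjects factor: pre-clerkship vs. clerkship performance) can be sketched with the pingouin package; the data, effect sizes, and column names below are synthetic, not the study's:

```python
# Mixed ANOVA: between-subjects factor (delayed vs. on-time) x
# within-subjects factor (pre-clerkship vs. clerkship grades).
# Synthetic data; illustrative only.
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available: pip install pingouin
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 274
delayed = np.repeat([True, False], [33, 241])  # 33 delayers, 241 on-time
pre = rng.normal(85, 5, n)                     # pre-clerkship grades
clerk = pre + rng.normal(0, 3, n) - np.where(delayed, 4, 0)  # delayers drop

df = pd.DataFrame({
    "id": np.tile(np.arange(n), 2),
    "group": np.tile(np.where(delayed, "delayed", "on-time"), 2),
    "time": np.repeat(["pre-clerkship", "clerkship"], n),
    "score": np.concatenate([pre, clerk]),
})

# The interaction term tests whether the delayed group's trajectory
# differs from the on-time group's.
print(pg.mixed_anova(data=df, dv="score", within="time",
                     subject="id", between="group"))

# Paired t-test within the delayed group (pre vs. clerkship)
t, p = ttest_rel(pre[delayed], clerk[delayed])
print(f"delayed group: t = {t:.2f}, p = {p:.4f}")
```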


Subject(s)
Clinical Clerkship , Students, Medical , Clinical Competence , Humans , Problem-Based Learning , Retrospective Studies , Students, Medical/psychology
5.
Adv Med Educ Pract ; 13: 939-944, 2022.
Article in English | MEDLINE | ID: mdl-36039184

ABSTRACT

Introduction: The elimination of the USMLE Step 1 three-digit score has created a deficit in standardized performance metrics for undergraduate medical educators and residency program directors. It is likely that greater emphasis will be placed on USMLE Step 2 CK, an exam found to be associated with later clinical performance in residents and physicians. Because many previous models relied on Step 1 scores to predict student performance on Step 2 CK, we developed a model using other metrics. Materials and Methods: Assessment data for 228 students in three cohorts (classes of 2018, 2019, and 2020) were collected, including the Medical College Admission Test (MCAT), NBME Customized Assessment Service (CAS) exams, and NBME Subject exams. A linear regression model was constructed to predict Step 2 CK scores at five time-points: at the end of years one and two, and at three trimester intervals in year three. An additional cohort (class of 2021) was used to validate the model. Results: Significant models were found at all 5 time-points in the curriculum, and predictability increased as students progressed: end of year 1 (adj R2 = 0.29), end of year 2 (adj R2 = 0.34), clerkship trimester 1 (adj R2 = 0.52), clerkship trimester 2 (adj R2 = 0.58), clerkship trimester 3 (adj R2 = 0.62). Including Step 1 scores did not significantly improve the final model. Using metrics from the class of 2021, the model predicted Step 2 CK performance to within a mean error of 8.3 points (SD = 6.8) at the end of year 1, improving incrementally to within a mean of 5.4 points (SD = 4.1) by the end of year 3. Conclusion: This model is highly generalizable and enables medical educators to predict student performance on Step 2 CK in the absence of Step 1 quantitative data, as early as the end of the first year of medical education, with increasingly stronger predictions as students progress through the clerkship year.
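The modeling approach, fitting a linear regression on accumulating assessment metrics and validating on a held-out cohort, can be sketched as follows; the column names, coefficients, and data are hypothetical stand-ins for the study's metrics:

```python
# Linear regression predicting Step 2 CK from MCAT and NBME exam
# averages, validated on a held-out cohort. Synthetic data; the
# column names are hypothetical stand-ins for the study's metrics.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 228  # three training cohorts combined
train = pd.DataFrame({
    "mcat": rng.normal(510, 6, n),
    "cas_mean": rng.normal(70, 8, n),      # NBME CAS course exams (years 1-2)
    "subject_mean": rng.normal(75, 7, n),  # NBME subject exams (clerkships)
})
train["step2ck"] = (60 + 0.2 * train.mcat + 0.5 * train.cas_mean
                    + 0.9 * train.subject_mean + rng.normal(0, 6, n))

# Fit at a late time-point, when all predictors are available
model = smf.ols("step2ck ~ mcat + cas_mean + subject_mean", data=train).fit()
print(f"adj R^2 = {model.rsquared_adj:.2f}")

# Validate on a stand-in for the held-out cohort (class of 2021):
# mean absolute prediction error in points
holdout = train.sample(40, random_state=3)
pred = model.predict(holdout)
print(f"mean abs error = {(pred - holdout.step2ck).abs().mean():.1f} points")
```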

7.
Med Sci Educ ; 31(1): 17-18, 2021 Feb.
Article in English | MEDLINE | ID: mdl-34457857

ABSTRACT

In response to the need for physician leaders, the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell developed the Klar Leadership Development and Innovation Management program. This novel program leverages its partnership with a large Northeast health system to longitudinally provide students with leadership fundamentals and mentored experiences.

8.
J Grad Med Educ ; 13(4): 576-580, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34434519

ABSTRACT

BACKGROUND: The Medical Student Performance Evaluation (MSPE) provides important information to residency programs. Despite recent recommendations for standardization, it is not clear how much variation exists in MSPE content among schools. OBJECTIVES: We describe the current section content of the MSPE in US allopathic medical schools, with a particular focus on variations in the presentation of student performance. METHODS: A representative MSPE was obtained from 95.3% (143 of 150) of allopathic US medical schools through residency applications to the Zucker School of Medicine at Hofstra/Northwell in select programs for the 2019-2020 academic year. A manual data abstraction tool was piloted in 2018-2019. After training, it was used to code all portions of the MSPE in this study. The results were analyzed, and descriptive statistics were reported. RESULTS: In preclinical years, 30.8% of MSPEs reported data regarding performance of students beyond achieving "passes" in a pass/fail curriculum. Only half referenced performance in the fourth year, including electives, acting internships, or both. About two-thirds of schools included an overall descriptor of comparative performance in the final paragraph. Among these schools, a majority provided adjectives such as "outstanding/excellent/very good/good," while one-quarter reported numerical data categories. Regarding clerkship grades, numerous nomenclature systems were used. CONCLUSIONS: This analysis demonstrates extreme variability in the content of MSPEs submitted by US allopathic medical schools in the 2019-2020 cycle, including the components and nomenclature of grades, descriptors of comparative performance, display of data, and inclusion of data across all years of the medical education program.


Subject(s)
Internship and Residency , Students, Medical , Clinical Competence , Educational Measurement , Humans , Schools, Medical
9.
Teach Learn Med ; 33(3): 334-342, 2021.
Article in English | MEDLINE | ID: mdl-33706632

ABSTRACT

Issue: Calls to change medical education have been frequent, persistent, and generally limited to alterations in content or structural re-organization. Self-imposed barriers have prevented adoption of more radical pedagogical approaches, so recent predictions of the 'inevitability' of medical education transitioning to online delivery seemed unlikely. Then in March 2020 the COVID-19 pandemic forced medical schools to overcome established barriers overnight and make the most rapid curricular shift in medical education's history. We share the collated reports of nine medical schools and postulate how recent responses may influence future medical education. Evidence: While extraneous pandemic-related factors make it impossible to scientifically distinguish the impact of the curricular changes, some themes emerged. The rapid transition to online delivery was made possible by all schools having learning management systems and key electronic resources already blended into their curricula; we were closer to online delivery than anticipated. Student engagement with online delivery varied with different pedagogies used and the importance of social learning and interaction along with autonomy in learning were apparent. These are factors known to enhance online learning, and the student-centered modalities (e.g. problem-based learning) that included them appeared to be more engaging. Assumptions that the new online environment would be easily adopted and embraced by 'technophilic' students did not always hold true. Achieving true distance medical education will take longer than this 'overnight' response, but adhering to best practices for online education may open a new realm of possibilities. Implications: While this experience did not confirm that online medical education is really 'inevitable,' it revealed that it is possible. Thoughtfully blending more online components into a medical curriculum will allow us to take advantage of this environment's strengths such as efficiency and the ability to support asynchronous and autonomous learning that engage and foster intrinsic learning in our students. While maintaining aspects of social interaction, online learning could enhance pre-clinical medical education by allowing integration and collaboration among classes of medical students, other health professionals, and even between medical schools. What remains to be seen is whether COVID-19 provided the experience, vision and courage for medical education to change, or whether the old barriers will rise again when the pandemic is over.


Subject(s)
COVID-19 , Education, Distance , Education, Medical, Undergraduate/organization & administration , Schools, Medical , Humans , SARS-CoV-2 , Students, Medical
10.
Med Educ Online ; 26(1): 1876315, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33606615

ABSTRACT

The Medical Student Performance Evaluation (MSPE) is an important tool of communication used by program directors to make decisions in the residency application process. This study sought to understand the perspective and usage of the MSPE across multiple medical specialties, both currently and in anticipation of the planned change in USMLE Step 1 score reporting. A survey instrument including quantitative and qualitative measures was developed and piloted. The final survey was distributed to residency programs across 28 specialties in 2020 via the main contact on the ACGME listserv. Of the 28 specialties surveyed, at least one response was received from 26 (93%). Eight percent of all programs (364/4675) responded to the survey, with most respondents being program directors. Usage of the MSPE varied among specialties. Approximately one-third of end-users stated that the MSPE is very or extremely influential in their initial screening process. Slightly less than half agreed or strongly agreed that they trust the information to be an accurate representation of applicants, though slightly more than half agreed that the MSPE will become more influential once USMLE Step 1 becomes pass/fail. Professionalism was rated as the most important component, and noteworthy characteristics were among the least important, in the decision-making process. Performance in the internal medicine clerkship was rated as the most influential, while neurology and psychiatry performance was rated as less influential. Overwhelmingly, respondents suggested that including comparative performance and/or class rank would make the MSPE more useful once USMLE Step 1 becomes pass/fail. MSPE end-users across a variety of specialties utilize this complex document in different ways and value it differentially in their decision-making processes. Mistrust of the MSPE nonetheless persists. A better understanding of end-users' perceptions of the MSPE offers the UME community an opportunity to transform the MSPE into a highly valued, trusted document of communication.


Subject(s)
Educational Measurement/methods , Internship and Residency/organization & administration , School Admission Criteria/statistics & numerical data , Communication , Humans , Internship and Residency/standards , Specialization
11.
J Med Educ Curric Dev ; 7: 2382120520976957, 2020.
Article in English | MEDLINE | ID: mdl-33294621

ABSTRACT

BACKGROUND: COVID-19 exposed undergraduate medical education curricular gaps in exploring historical pandemics, how to critically consume scientific literature and square it with the lay press, and how to grapple with emerging ethical issues. In addition, as medical students were dismissed from clinical environments, their capacity to build community and promote professional identity formation was compromised. METHODS: A synchronous, online course entitled Life Cycle of a Pandemic was developed using a modified guided inquiry approach. Students met daily for 2 weeks in groups of 15 to 18 with a process facilitator. During the first week, students reported on lessons learned from past pandemics; in the second week, students discussed ethical concerns surrounding COVID-19 clinical trials, heard from physicians who provided patient care in the HIV and COVID-19 pandemics, and concluded with an opportunity for reflection. Following the course, students were asked to complete an anonymous, voluntary survey to assess their perceptions of the course. RESULTS: With a response rate of 69%, an overwhelming majority of students agreed or strongly agreed that learning about historical pandemics helped them understand COVID-19 (72, 99%). The course successfully helped students understand current and potential COVID-19 management strategies as 66 (90%) agreed or strongly agreed they developed a better understanding of nonpharmacological interventions and new pharmacological treatments. Students also gained insight into the experiences of healthcare providers who cared for patients with HIV and COVID-19. Qualitative analysis of the open-ended comments yielded 5 main themes: critical appraisal of resources, responsibility of the physician, humanism, knowledge related to pandemics, and learning from history. CONCLUSIONS: The onset of the COVID-19 crisis illustrated curricular gaps that could be remedied by introducing the history and biology of pandemics earlier in the curriculum. It was also apparent that learners need more practice in critically reviewing literature and comparing scientific literature with lay press. The flexible format of the course promotes the development of future iterations that could cover evolving topics related to COVID-19. The course could also be repurposed for a graduate or continuing medical education audience.

12.
Med Teach ; 42(8): 880-885, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31282798

ABSTRACT

Medical knowledge examinations employing open-ended (constructed response) items can be useful for assessing medical students' factual and conceptual understanding. Modern-day curricula that emphasize active learning in small groups and other interactive formats lend themselves to an assessment format that prompts students to share conceptual understanding, explain, and elaborate. The open-ended question examination format can provide faculty with insights into learners' abilities to apply information to clinical or scientific problems, and reveal learners' misunderstandings about essential content. To implement formative or summative assessments with open-ended questions in a rigorous manner, educators must design systems for exam creation and scoring. This includes systems for constructing exam blueprints, items, and scoring rubrics, and procedures for scoring and standard setting. Information gained through review of students' responses can guide future educational sessions and curricular changes in a cycle of continuous improvement.


Subject(s)
Education, Medical, Undergraduate , Education, Medical , Students, Medical , Curriculum , Educational Measurement , Faculty , Humans , Problem-Based Learning
13.
J Grad Med Educ ; 11(4): 475-478, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31440345

ABSTRACT

BACKGROUND: The Medical Student Performance Evaluation (MSPE) is an important factor in application to residency programs. Many medical schools are incorporating recent recommendations from the Association of American Medical Colleges MSPE Task Force into their letters. To date, there has been no feedback from the graduate medical education community on the impact of this effort. OBJECTIVE: We surveyed individuals involved in residency candidate selection for internal medicine programs to understand their perceptions of the new MSPE format. METHODS: A survey was distributed in March and April 2018 using the Association of Program Directors in Internal Medicine listserv, which comprises 4220 individuals from 439 residency programs. Responses were analyzed, and themes were extracted from open-ended questions. RESULTS: A total of 140 individuals, predominantly program directors and associate program directors, from across the United States completed the survey. Most were aware of the existence of the MSPE Task Force. Respondents read a median of 200 to 299 letters each recruitment season. The majority reported observing evidence of adoption of the new format in more than one quarter of all medical schools. Nearly half of respondents reported that the new format made the MSPE more important in decision-making about a candidate. Within the MSPE, respondents recognized the following areas as most influential: academic progress, summary paragraph, graphic representation of class performance, academic history, and overall adjective of performance indicator (rank). CONCLUSIONS: The internal medicine graduate medical education community finds value in many components of the new MSPE format, while recognizing there are further opportunities for improvement.


Subject(s)
Academic Performance/standards , Clinical Competence/standards , Internal Medicine/education , Internship and Residency/organization & administration , Schools, Medical/standards , Education, Medical , Humans , Physician Executives/organization & administration , Students, Medical , Surveys and Questionnaires
14.
Ann Intern Med ; 170(9): SS1, 2019 May 07.
Article in English | MEDLINE | ID: mdl-30977766
15.
Acad Med ; 92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions): S21-S25, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29065019

ABSTRACT

PURPOSE: The Hofstra Northwell School of Medicine (HNSOM) uses an essay-based assessment system. Recognizing the emphasis graduate medical education places on the United States Medical Licensing Examination (USMLE) Step exams, the authors developed a method to predict students at risk for lower performance on USMLE Step 1. METHOD: Beginning with the inaugural class (2015), HNSOM administered National Board of Medical Examiners (NBME) Customized Assessment Service (CAS) examinations as formative assessments at the end of each integrated course in the first two years of medical school. Using preadmission data, results from the first two courses in the educational program, and NBME score deviations from the national test takers' mean, a statistical model was built to predict students who would score below the Step 1 national mean. RESULTS: A regression equation using the highest Medical College Admission Test (MCAT) score and NBME score deviation predicted student Step 1 scores. The MCAT alone accounted for 21% of the variance. Adding the NBME score deviations from the first and second courses increased the variance explained to 40% and 50%, respectively. Adding NBME exams from later courses increased the variance explained to 52% and 64% by the end of years one and two, respectively. Cross-validation demonstrated that the model successfully predicted 63% of at-risk students by the end of the fifth month of medical school. CONCLUSIONS: The model identified students at risk for lower performance on Step 1 using the NBME CAS. This model is applicable to schools reforming their curriculum delivery and assessment programs toward an integrated model.
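The prediction scheme described, regressing Step 1 scores on the highest MCAT score plus cumulative NBME deviations and flagging students predicted to fall below the national mean, can be sketched as follows; all numbers are synthetic stand-ins, and for brevity the flagging is in-sample rather than cross-validated as in the study:

```python
# Flagging students at risk of scoring below the Step 1 national
# mean from MCAT and early NBME CAS deviations. Synthetic data;
# illustrative of the modeling approach, not the study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, national_mean = 100, 229
mcat = rng.normal(511, 5, n)         # highest MCAT score
nbme_dev = rng.normal(0, 6, (n, 2))  # CAS deviation from national mean,
                                     # courses 1 and 2
step1 = 115 + 0.22 * mcat + 1.1 * nbme_dev.sum(axis=1) + rng.normal(0, 10, n)

X = sm.add_constant(np.column_stack([mcat, nbme_dev]))
fit = sm.OLS(step1, X).fit()

# Flag students whose predicted score falls below the national mean
at_risk_pred = fit.predict(X) < national_mean
at_risk_true = step1 < national_mean
sensitivity = (at_risk_pred & at_risk_true).sum() / at_risk_true.sum()
print(f"R^2 = {fit.rsquared:.2f}, at-risk sensitivity = {sensitivity:.0%}")
```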


Subject(s)
Clinical Competence , Curriculum , Education, Medical, Undergraduate , Licensure, Medical , Risk Assessment , Adult , College Admission Test , Female , Humans , Male , Regression Analysis , Schools, Medical , United States