Results 1 - 17 of 17
1.
A Snapshot of Medical Student Education in the United States and Canada: Reports From 145 Schools. Acad Med ; 95(9S): S240-S244, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33626691
2.
Acad Med ; 93(5): 724-728, 2018 May.
Article in English | MEDLINE | ID: mdl-29116975

ABSTRACT

PROBLEM: Progress testing of medical knowledge has advantages over traditional medical school examination strategies. However, little is known about its use in assessing medical students' clinical skills or their integration of clinical skills with necessary science knowledge. The authors previously reported on the feasibility of the Progress Clinical Skills Examination (PCSE), piloted with a group of early learners. This subsequent pilot test studied the exam's validity to determine whether the PCSE is sensitive to the growth in students' clinical skills across the four years of medical school. APPROACH: In 2014, 38 medical student volunteers (years 1-4) in the traditional 2 + 2 curriculum at Michigan State University College of Human Medicine participated in the eight-station PCSE. Faculty and standardized patients assessed students' clinical skills, and faculty assessed students' responses to postencounter necessary science questions. Students performed pretest self-assessment across multiple measures and completed a posttest evaluation of their PCSE experience. OUTCOMES: Student performance generally increased by year in medical school for communication, history-taking, and physical examination skills. Necessary science knowledge increased substantially from first-year to second-year students, with less change thereafter. Students felt the PCSE was a fair test of their clinical skills and provided an opportunity to demonstrate their understanding of the related necessary science. NEXT STEPS: The authors have been piloting a wider pool of cases. In 2016, they adopted the PCSE as part of the summative assessment strategy for the medical school's new integrated four-year curriculum. Continued assessment of student performance trajectories is planned.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Students, Medical/psychology , Adult , Female , Humans , Male , Pilot Projects , Reproducibility of Results
3.
Acad Med ; 93(2): 306-313, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28678097

ABSTRACT

PURPOSE: To assess the effect of community-based medical education as implemented by Michigan State University College of Human Medicine (MSU-CHM), which has immersed students in diverse communities across Michigan since its founding, on the physician workforce in the six communities in which clinical campuses were initially established. METHOD: The authors used American Medical Association Masterfile data from 2011 to obtain practice locations and specialty data for all MSU-CHM graduates from 1972 through 2006. They classified physicians as either practicing primary care or practicing in a high-need specialty. Using Geographic Information Systems software, the authors geocoded practice locations to the ZIP Code level, evaluated whether the practice was within a Health Professional Shortage Area, and determined rurality, using 2006 Rural-Urban Commuting Area Code data. They visually compared maps of the footprints of each campus to glean insights. RESULTS: The authors analyzed 3,107 of 3,309 graduates (94%). Of these, 635 (20%) practiced within 50 miles of their medical school campus. Saginaw and Flint graduates were more likely to practice in Detroit and its surrounding suburbs, reflecting these communities' urban character. Grand Rapids, the community with the strongest tertiary medical care focus, had the lowest proportions of rural and high-need specialty graduates. CONCLUSIONS: This case study suggests that distributed medical education campuses can have a significant effect on the long-term regional physician workforce. Students' long-term practice choices may also reflect the patient populations and specialty patterns of the communities where they learn.


Subject(s)
Education, Medical, Undergraduate/methods , Health Workforce , Medically Underserved Area , Physicians/supply & distribution , Primary Health Care , Professional Practice Location , Schools, Medical , Career Choice , Humans , Michigan
4.
J Vet Med Educ ; 42(4): 353-63, 2015.
Article in English | MEDLINE | ID: mdl-26421517

ABSTRACT

Veterinary medical school challenges students academically and personally, and some students report depression and anxiety at rates higher than the general population and other medical students. This study describes changes in veterinary medical student self-esteem (SE) over four years of professional education, attending to differences between high and low SE students and the characteristics specific to low SE veterinary medical students. The study population was students enrolled at the Michigan State University College of Veterinary Medicine from 2006 to 2012. We used data from the annual anonymous survey administered college-wide that is used to monitor the curriculum and learning environment. The survey asked respondents to rate their knowledge and skill development, learning environment, perceptions of stress, and SE. Participants also provided information on their academic performance and demographics. A contrasting groups design was used: high and low SE students were compared using logistic regression to identify factors associated with low SE. A total of 1,653 respondents met inclusion criteria: 789 low SE and 864 high SE students. The proportion of high and low SE students varied over time, with the greatest proportion of low SE students during the second year of the program. Perceived stress was associated with low SE, whereas perceived supportive learning environment and skill development were associated with high SE. These data have provided impetus for curricular and learning environment changes to enhance student support. They also provide guidance for additional research to better understand various student academic trajectories and their implications for success.


Subject(s)
Adaptation, Psychological , Depressive Disorder/psychology , Education, Veterinary , Students, Medical/psychology , Adult , Curriculum , Depressive Disorder/prevention & control , Female , Humans , Male , Michigan , Pilot Projects , Psychometrics , Self Concept , Young Adult
5.
J Health Care Poor Underserved ; 26(3): 631-47, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26320900

ABSTRACT

UNLABELLED: The National Postbaccalaureate Collaborative (NPBC) is a partnership of Postbaccalaureate Programs (PBPs) dedicated to helping promising college graduates from disadvantaged and underrepresented backgrounds get into and succeed in medical school. This study aims to determine long-term program outcomes by looking at PBP graduates, who are now practicing physicians, in terms of health care service to the poor and underserved and contribution to health care workforce diversity. METHODS: We surveyed the PBP graduates and a randomly drawn sample of non-PBP graduates from the affiliated 10 medical schools stratified by the year of medical school graduation (1996-2002). RESULTS: The PBP graduates were more likely to be providing care in federally designated underserved areas and practicing in institutional settings that enable access to care for vulnerable populations. CONCLUSION: The NPBC graduates serve a critical role in providing access to care for underserved populations and serve as a source for health care workforce diversity.


Subject(s)
Cultural Diversity , Education, Medical, Undergraduate/organization & administration , Education, Premedical/organization & administration , Health Services Accessibility , Physicians/statistics & numerical data , Humans , Program Evaluation , United States
6.
Med Educ Online ; 18: 20598, 2013 Jul 22.
Article in English | MEDLINE | ID: mdl-23880149

ABSTRACT

INTRODUCTION: We operationalized the taxonomy developed by Hauer and colleagues describing common clinical performance problems. Faculty raters pilot tested the resulting worksheet by observing recordings of problematic simulated clinical encounters involving third-year medical students. This approach provided a framework for structured feedback to guide learner improvement and curricular enhancement. METHODS: Eighty-two problematic clinical encounters from M3 students who failed their clinical competency examination were independently rated by paired clinical faculty members to identify common problems related to the medical interview, physical examination, and professionalism. RESULTS: Eleven of 26 target performance problems were present in 25% or more of the encounters. Overall, 37% had unsatisfactory medical interviews, with 'inadequate history to rule out other diagnoses' most prevalent (60%). Seventy percent failed because of physical examination deficiencies, with missing elements (69%) and inadequate data gathering (69%) most common. One-third of the students did not introduce themselves to their patients. Among students failing based on standardized patient (SP) ratings, 93% also failed to demonstrate competency based on the faculty ratings. CONCLUSIONS: Our review form allowed clinical faculty to validate pass/fail decisions based on standardized patient ratings. Detailed information about performance problems contributes to learner feedback and curricular enhancement to guide remediation planning and faculty development.


Subject(s)
Clinical Competence/standards , Curriculum , Documentation , Feedback , Students, Medical , Checklist , Education, Medical, Undergraduate , Faculty, Medical , Humans , Michigan , Pilot Projects
7.
Adv Health Sci Educ Theory Pract ; 18(2): 279-89, 2013 May.
Article in English | MEDLINE | ID: mdl-22484965

ABSTRACT

This study sought to determine the academic and professional outcomes of medical school graduates who failed the United States Medical Licensing Examination Step 1 on the first attempt. This retrospective cohort study was based on pooled data from 2,003 graduates of six Midwestern medical schools in the classes of 1997-2002. Demographic, academic, and career characteristics of graduates who failed Step 1 on the first attempt were compared to graduates who initially passed. Fifty medical school graduates (2.5%) initially failed Step 1. Compared to graduates who initially passed Step 1, a higher proportion of graduates who initially failed Step 1 became primary care physicians (26/49 [53.1%] vs. 766/1,870 [40.9%]), were more likely at graduation to report intent to practice in underserved areas (28/50 [56.0%] vs. 419/1,939 [21.6%]), and were more likely to take 5 or more years to graduate (11/50 [22.0%] vs. 79/1,953 [4.0%]). The relative risk of first-attempt Step 1 failure for medical school graduates was 13.4 for African Americans, 7.4 for Latinos, 3.6 for matriculants >22 years of age, 3.2 for women, and 2.3 for first-generation college graduates. The relative risk of not being specialty board certified for those graduates who initially failed Step 1 was 2.2. Our observations regarding characteristics of graduates in our study cohort who initially failed Step 1 can inform efforts by medical schools to identify and assist students who are at particular risk of failing Step 1.


Subject(s)
Licensure, Medical/statistics & numerical data , Physicians/statistics & numerical data , Age Factors , Career Choice , Educational Status , Female , Humans , Male , Medically Underserved Area , Minority Groups/statistics & numerical data , Physicians/standards , Physicians, Primary Care/statistics & numerical data , Retrospective Studies , Risk Factors , Sex Factors , United States , Young Adult
8.
Med Educ Online ; 16, 2011 Jan 14.
Article in English | MEDLINE | ID: mdl-21249172

ABSTRACT

When our school organized the curriculum around a core set of medical student competencies in 2004, it was clear that more numerous and more varied student assessments were needed. To oversee a systematic approach to the assessment of medical student competencies, the Office of College-wide Assessment was established, led by the Associate Dean of College-wide Assessment. The mission of the Office is to 'facilitate the development of a seamless assessment system that drives a nimble, competency-based curriculum across the spectrum of our educational enterprise.' The Associate Dean coordinates educational initiatives, develops partnerships to solve common problems, and enhances synergy within the College. The Office also works to establish data collection and feedback loops to guide rational intervention and continuous curricular improvement. Aside from feedback, implementing a systems approach to assessment provides a means for identifying performance gaps, promotes continuity from undergraduate medical education to practice, and offers a rationale for some assessments to be located outside of courses and clerkships. Assessment system design, data analysis, and feedback require leadership, a cooperative faculty team with medical education expertise, and institutional support. The guiding principle is 'Better Data for Teachers, Better Data for Learners, Better Patient Care.' Better data empowers faculty to become change agents and learners to create evidence-based improvement plans, and increases accountability to our most important stakeholders, our patients.


Subject(s)
Clinical Competence/statistics & numerical data , Faculty, Medical , Learning , Patient Care , Schools, Medical , Students, Medical/statistics & numerical data , Curriculum , Data Collection/methods , Education, Medical , Educational Measurement/methods , Educational Status , Feedback , Humans , Leadership , Michigan , Teaching
9.
Acad Med ; 85(5): 829-36, 2010 May.
Article in English | MEDLINE | ID: mdl-20520036

ABSTRACT

As the definition of scholarship is clarified, each specialty should develop a cadre of medical education researchers who can design, test, and optimize educational interventions. In 2004, the Association of American Medical Colleges' Group on Educational Affairs developed the Medical Education Research Certificate (MERC) program to provide a curriculum to help medical educators acquire or enhance skills in medical education research, to promote effective collaboration with seasoned researchers, and to create better consumers of medical education scholarship. MERC courses are offered to individuals during educational meetings. Educational leaders in emergency medicine (EM) identified a disparity between the "scholarship of teaching" and medical education research skills, and they collaborated with the MERC steering committee to develop a mentored faculty development program in medical education research. A planning committee comprising experienced medical education researchers who are also board-certified, full-time EM faculty members designed a novel approach to the MERC curriculum: a mentored team approach to learning, grounded in collaborative medical education research projects. The planning committee identified areas of research interest among participants and formed working groups to collaborate on research projects during standard MERC workshops. Rather than focusing on individual questions during the course, each mentored group identified a single study hypothesis. After completing the first three workshops, group members worked under their mentors' guidance on their multi-institutional research projects. The expected benefits of this approach to MERC include establishing a research community network, creating projects whose enrollments offer a multi-institutional dimension, and developing a cadre of trained education researchers in EM.


Subject(s)
Curriculum , Education, Medical , Faculty, Medical , Research/education , Congresses as Topic , Cooperative Behavior , Humans , Mentors , Needs Assessment , Professional Competence , Program Evaluation , United States
10.
Acad Emerg Med ; 16 Suppl 2: S37-41, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20053208

ABSTRACT

Medical educators are increasingly charged with the development of outcomes-based "best practices" in medical student and resident education and patient care. To fulfill this mission, a cadre of well-trained, experienced medical education researchers is required. The experienced medical educator is in a prime position to fill this need but often lacks the training needed to successfully contribute to such a goal. Towards this end, the Association of American Medical Colleges (AAMC) Group on Educational Affairs developed a series of content-based workshops that have resulted in Medical Education Research Certification (MERC), promoting skills development and a better understanding of research by educators. Subsequently, the Council of Emergency Medicine Residency Directors (CORD) partnered with the AAMC to take MERC a step further, in the MERC at CORD Scholars Program (MCSP). This venture integrates a novel, mentored, specialty-specific research project with the traditional MERC workshops. Collaborative groups, based on a common area of interest, each develop a multi-institutional project by exploring and applying the concepts learned through the MERC workshops. Participants in the inaugural MCSP have completed three MERC workshops and initiated a project. Upon program completion, each will have completed MERC certification (six workshops) and gained experience as a contributing author on a mentored education research project. Not only does this program serve as a multi-dimensional faculty development opportunity, it is also intended to act as a catalyst in developing a network of education scholars and infrastructure for educational research within the specialty of emergency medicine.


Subject(s)
Emergency Medicine/education , Faculty, Medical , Congresses as Topic , Humans , Mentors , Program Development , Research
11.
Med Educ Online ; 11(1): 4586, 2006 Dec.
Article in English | MEDLINE | ID: mdl-28253785

ABSTRACT

BACKGROUND: The purpose of this study was to determine how individuals providing reference letters framed the task and the specific attributes used to describe applicants. METHODS: Participants were letter writers (N=106) for accepted or alternate applicants. Participants received a brief anonymous survey and a return postcard to release their past letter for content analysis. RESULTS: Seventy-six percent of letter writers (N=81) returned a survey. Most (64%) intended to describe applicants' positive accomplishments. According to respondents, they were most likely to write about academic accomplishments (85%), work ethic (78%), dependability (70%), and motivation (70%). Seventy-four respondents (70%) released their letter for content analysis. Academic accomplishments (77%), motivation (41%), and leadership (41%) were the attributes most frequently mentioned in the letters. CONCLUSIONS: Most letter writers see their role as supportive rather than evaluative. Academic accomplishments, though often mentioned, are available from other sources. Many non-cognitive attributes of most interest to admissions committees are least likely to appear in reference letters.

12.
Med Educ Online ; 11(1): 4599, 2006 Dec.
Article in English | MEDLINE | ID: mdl-28253787

ABSTRACT

PURPOSE: This research study highlights the relationship between study aid use and exam performance of second-year medical students. It also discusses how students used study aids in preparing for PBL exams and whether students who used others' study aids performed as well as students who created their own. METHODS: A questionnaire was distributed to second-year medical students after completion of their exam. The data from the questionnaire were linked to students' examination scores and other academic indicators. RESULTS: The study habits were more similar than different when compared by exam performance. A majority of students used study aids as a memory aid or for review, but students who performed in the top third of the class were less likely to use them at all. Pre-existing differences related to academic achievement and study strategies were found when students at the top, middle, and bottom of exam performance were compared. CONCLUSIONS: A better understanding of the differences in study habits and study aid use in relation to examination performance can help in providing future students with appropriate academic support and advising.

13.
BMC Med Educ ; 5(1): 12, 2005 Apr 25.
Article in English | MEDLINE | ID: mdl-15850482

ABSTRACT

BACKGROUND: This paper describes a pilot survey of faculty involved in medical education. The questionnaire focuses on their understanding of IRB policies at their institution, specifically in relation to the use of student assessment and curriculum evaluation information for scholarship. METHODS: An anonymous survey was distributed to medical educators in a variety of venues. Two brief scenarios of typical student assessment or curriculum evaluation activities were presented and respondents were asked to indicate their likely course of action related to IRB approval. The questionnaire also asked respondents about their knowledge of institutional policies related to IRB approval. RESULTS: A total of 121 completed surveys were obtained; 59 (50%) respondents identified themselves as from community-based medical schools. For the first scenario, 78 respondents (66%) would have contact with the IRB; this increased to 97 respondents (82%) for the second scenario. For both scenarios, contact with the IRB was less likely among respondents from research-intensive institutions. Sixty respondents (55%) were unsure if their institutions had policies addressing evaluation data used for scholarship. Fifty respondents (41%) indicated no prior discussions at their institutions regarding IRB requirements. CONCLUSION: Many faculty members are unaware of IRB policies at their medical schools related to the use of medical student information. To the extent that policies are in place, they are highly variable across schools suggesting little standardization in faculty understanding and/or institutional implementation. Principles to guide faculty decision-making are provided.


Subject(s)
Awareness , Education, Medical, Undergraduate/methods , Ethics Committees, Research/statistics & numerical data , Faculty, Medical/standards , Informed Consent/standards , Organizational Policy , Program Evaluation/standards , Schools, Medical/organization & administration , Curriculum , Humans , Pilot Projects , Students, Medical , Surveys and Questionnaires , United States
14.
J Vet Med Educ ; 29(3): 147-52, 2002.
Article in English | MEDLINE | ID: mdl-12378431

ABSTRACT

The curricula for both veterinary and human medicine are undergoing review and change as a new and highly competitive practice environment influences what abilities graduates require to be successful. Many are concerned by the contention that some graduates lack skills and aptitudes necessary for economic success. If significant changes are to be considered for the curricula of either profession, it will be difficult to plan for meaningful change in the absence of high-quality information about the needs of graduates and related curriculum gaps. The purpose of this article is to argue why educators should design more effective systems of evaluation that are responsive to the needs of educational program planning. One example from a medical school is described. In this case, the authors discuss how their institution's evaluations were insufficient for answering new and important questions that go beyond traditional cognitive measures: specifically, no data set was available that allowed the institution to answer questions about practice environment and curricular innovations. More recently, institutions have become interested in learning to what extent their broad missions are accomplished. Similarly, academic leaders are not simply interested in the performance of learners on tests of competence; they want to know more about how their graduates are doing in the practice setting. To answer questions such as these, educators must expand their systems of evaluation to address these broader themes. The authors conclude by identifying several lessons learned from their experiences in developing a new system of educational program evaluation.


Subject(s)
Education, Veterinary , Program Evaluation , Schools, Veterinary/standards , Animals , Curriculum , Faculty , Humans , Students , United States
16.
Med Educ ; 36(2): 135-40, 2002 Feb.
Article in English | MEDLINE | ID: mdl-11869440

ABSTRACT

PURPOSE: The use of medical students as standardized patients in a performance assessment of pain evaluation was studied. METHODS: Fifty-two pairs of second-year medical students participated. One student portrayed a patient presenting with cancer pain and was interviewed by the other medical student. The student-patient then rated the interview using a checklist of pain assessment and general interviewing skills. The interviews were audiotaped and also rated independently. RESULTS: Based on student-patient ratings, 36 (69%) students demonstrated 9 or more of the 11 pain-specific checklist items, compared to 34 (65%) students according to the trained rater. Highly specific pain-related items had higher agreement than broader interviewing skill items. The summary assessments of students would differ depending on which rating data were used. DISCUSSION: Medical students represent a readily accessible resource as patients for clinical simulations. Students tended to overestimate the performance of fellow students, but acting as a standardized patient had educational value and can be used to extend simulated patient encounters within the curriculum. Further investigation is needed to improve the reliability of the feedback provided by student-patients.


Subject(s)
Clinical Competence , Education, Medical, Undergraduate/methods , Pain Measurement/methods , Pain/etiology , Educational Measurement , Feedback, Psychological , Humans , Patient Simulation , Students, Medical
17.
Adv Health Sci Educ Theory Pract ; 3(3): 165-176, 1998.
Article in English | MEDLINE | ID: mdl-12386438

ABSTRACT

When second year medical students were less successful than expected in solving an OSCE neurologic case, a subsequent performance-based assessment was modified to permit testing of four hypotheses related to knowledge application, ability decay over time, skill performance, and case complexity. Two cohorts of second year medical students were presented with neurologic cases in the context of performance-based assessment. Although many students demonstrated that they had the requisite knowledge, few were able to access the knowledge in less structured testing formats. Students had the skills necessary to conduct a physical examination but were unable to appropriately focus the examination. Case complexity also was related to some performance domains. There was evidence of knowledge and skill decay over time. In summary, it is apparent that multiple factors influence students' performance and are important considerations in designing performance assessments to evaluate competence.
