1.
Article in English | MEDLINE | ID: mdl-38872249

ABSTRACT

Despite explicit expectations and accreditation requirements for an integrated curriculum, there is little clarity around an accepted common definition, best practices for implementation, and criteria for successful curriculum integration. To address this lack of consensus, we reviewed the literature and herein propose a definition of curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond "horizontal" (1-dimensional) and "vertical" (2-dimensional) integration, and we propose a model of "6 degrees of curriculum integration" to expand the 2-dimensional concept for future designs of medical education programs and best prepare learners to meet the needs of patients. These 6 degrees are: interdisciplinarity, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge- and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration to this holistic and interconnected representation of curriculum integration.


Subject(s)
Clinical Competence , Curriculum , Education, Medical , Humans , Education, Medical/methods , Clinical Competence/standards , Accreditation , Models, Educational
2.
Acad Med ; 97(11S): S63-S70, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35947463

ABSTRACT

PURPOSE: Educational program objectives (EPOs) provide the foundation for a medical school's curriculum. In recent years, the Liaison Committee on Medical Education (LCME) endorsed an outcomes-based approach to objectives, to embrace the movement toward competency-based medical education (CBME). The purpose of this study was to explore the CBME frameworks used by medical schools in formulating their EPOs. A secondary aim was to determine factors related to the selection of specific frameworks. METHOD: The authors performed a quantitative content analysis of entries to the 2020 Academic Medicine Snapshot. Publicly available data gathered included demographic features of each program (e.g., year founded, accreditation status, affiliation, etc.), participation in national medical education consortia, and presence of specific CBME frameworks identified in EPOs. Descriptive statistics were used to examine trends in frameworks used by medical schools. Bivariate comparisons between factors and frameworks were conducted using chi-square tests. Logistic regression was used to examine factors predicting use of more recently developed CBME frameworks. RESULTS: A total of 135 institutions submitted Snapshots (RR = 88%). All institutions endorsed 1 or more CBME frameworks, with 37% endorsing 2 and 20% endorsing 3 or more. The most common was the Accreditation Council for Graduate Medical Education core competencies (63%). In addition to published frameworks, 36% of institutions developed their own competencies. Schools with pending LCME visits were 2.61 times more likely to use a more recently developed curricular framework, P = .022. CONCLUSIONS: Medical schools in the United States have embraced the CBME movement through incorporation of competency-based frameworks in their EPOs. 
While it is encouraging that CBME frameworks have been integrated into medical school EPOs, the variability and use of multiple frameworks identify a pressing need for a unified CBME framework in undergraduate medical education.
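The abstract above reports a chi-square bivariate comparison and a logistic-regression odds ratio (schools with pending LCME visits were 2.61 times more likely to use a newer framework). As a minimal sketch of how such figures are derived, the snippet below uses an invented 2x2 table (not the study's data); with a single binary predictor, logistic regression reproduces the cross-product odds ratio exactly.

```python
# Hypothetical 2x2 table: rows = LCME visit pending (yes/no),
# columns = uses a recently developed framework (yes/no).
# Counts are illustrative only, not the study's data.
a, b = 30, 15   # visit pending:    uses new framework / does not
c, d = 40, 50   # no visit pending: uses new framework / does not

# Odds ratio: with one binary predictor, logistic regression
# yields exactly this cross-product ratio.
odds_ratio = (a * d) / (b * c)

# Pearson chi-square statistic for the bivariate comparison.
n = a + b + c + d
observed = [[a, b], [c, d]]
expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
            [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))

print(round(odds_ratio, 2), round(chi2, 2))
```

The chi-square statistic would then be compared against a chi-square distribution with 1 degree of freedom to obtain the reported P values.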


Subject(s)
Education, Medical, Undergraduate , Education, Medical , Humans , United States , Schools, Medical , Competency-Based Education , Curriculum , Clinical Competence
3.
Pediatr Emerg Care ; 37(10): e645-e652, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-31305500

ABSTRACT

INTRODUCTION: Resuscitation skills decay as early as 4 months after course acquisition. Gaps in research remain regarding the ideal educational modalities, timing, and frequency of curricula required to optimize skills retention. Our objective was to evaluate the impact on retention of resuscitation skills 8 months after the Pediatric Advanced Life Support (PALS) course when reinforced by an adjunct simulation-based curriculum 4 months after PALS certification. We hypothesized there would be improved retention in the intervention group. METHODS: This was a randomized controlled study with partial double-blinding. First-year pediatric residents were randomized to an intervention or control group. The intervention group participated in a simulation-based curriculum grounded in principles of deliberate practice and debriefing. The control group received no intervention. T-tests were used to compare mean percent scores from simulation-based assessments and multiple-choice tests immediately following the PALS course and after 8 months. RESULTS: The intervention group (n = 12) had better overall retention of resuscitation skills at 8 months than the control group (n = 12) (mean, 0.57 ± 0.05 vs 0.52 ± 0.06; P = 0.037). No significant difference existed between individual skills stations. The intervention group also had greater retention of cognitive knowledge (mean, 0.78 ± 0.09 vs 0.68 ± 0.14; P = 0.049). Residents performed 61% of assessment items correctly immediately following the PALS course. CONCLUSIONS: Resuscitation skills acquisition from the PALS course and subsequent retention are suboptimal. These findings support the use of simulation-based curricula as course adjuncts to extend retention beyond 4 months.
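The group comparisons above rest on independent-samples t-tests of mean percent scores. A minimal pooled-variance sketch, using invented scores for two groups of 12 residents (not the study's data):

```python
import statistics

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical percent-correct scores (illustrative only).
intervention = [0.57, 0.60, 0.55, 0.62, 0.51, 0.58,
                0.63, 0.54, 0.59, 0.56, 0.61, 0.52]
control      = [0.52, 0.49, 0.55, 0.47, 0.54, 0.50,
                0.58, 0.45, 0.53, 0.51, 0.48, 0.56]
print(round(pooled_t(intervention, control), 2))
```

The resulting t statistic would be referred to a t distribution with na + nb - 2 = 22 degrees of freedom to obtain a P value.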


Subject(s)
Internship and Residency , Child , Clinical Competence , Computer Simulation , Curriculum , Humans , Resuscitation
4.
Health Aff (Millwood) ; 39(7): 1263-1266, 2020 07.
Article in English | MEDLINE | ID: mdl-32634363

ABSTRACT

A medical error in the emergency department causes emotional trauma for a patient, who seeks compassion in the aftermath.


Subject(s)
Emergency Service, Hospital , Empathy , Humans , Surveys and Questionnaires
5.
Acad Med ; 95(9S A Snapshot of Medical Student Education in the United States and Canada: Reports From 145 Schools): S5-S14, 2020 09.
Article in English | MEDLINE | ID: mdl-33626633

ABSTRACT

Medical school curricula have evolved from 2010 to 2020. Numerous pressures and influences affect medical school curricula, including those from external sources, academic medical institutions, clinical teaching faculty, and undergraduate medical students. Using data from the AAMC Curriculum Inventory and the LCME Annual Medical School Questionnaire Part II, the nature of curriculum change is illuminated. Most medical schools are undertaking curriculum change, both in small cycles of continuous quality improvement and through significant change to curricular structure and content. Four topic areas are explored: cost consciousness, guns and firearms, nutrition, and opioids and addiction medicine. The authors examine how these topic areas are taught and assessed, where in the curriculum they are located, and how much time is dedicated to them in relation to the curriculum as a whole. When examining instructional methods overall, notable findings include (1) the decrease of lecture, although lecture remains the most used instructional method, (2) the increase of collaborative instructional methods, (3) the decrease of laboratory, and (4) the prevalence of clinical instructional methods in academic levels 3 and 4. Regarding assessment methods overall, notable findings include (1) the recent change of the USMLE Step 1 examination to a pass/fail reporting system, (2) a modest increase in narrative assessment, (3) the decline of practical labs, and (4) the predominance of institutionally developed written/computer-based examinations and participation. Among instructional and assessment methods, the most used methods tend to cluster by academic level. It is critical that faculty development evolves alongside curricula. Continued diversity in the use of instructional and assessment methods is necessary to adequately prepare tomorrow's physicians. Future research into the life cycle of a curriculum, as well as optional curriculum content, is warranted.


Subject(s)
Curriculum/trends , Education, Medical, Undergraduate/methods , Faculty, Medical/standards , Schools, Medical/history , Academic Medical Centers/organization & administration , Addiction Medicine/education , Addiction Medicine/statistics & numerical data , Analgesics, Opioid , Canada/epidemiology , Costs and Cost Analysis/economics , Education, Medical, Undergraduate/trends , Educational Measurement/methods , Firearms , History, 21st Century , Humans , Nutritional Sciences/education , Nutritional Sciences/statistics & numerical data , Schools, Medical/trends , Students, Medical/statistics & numerical data , Surveys and Questionnaires , United States/epidemiology
6.
Acad Med ; 94(1): 129-134, 2019 01.
Article in English | MEDLINE | ID: mdl-30157090

ABSTRACT

PURPOSE: To assess current approaches to teaching the physical exam to preclerkship students at U.S. medical schools. METHOD: The Directors of Clinical Skills Courses developed a 49-question survey addressing the approach, pedagogical methods, and assessment methods of preclerkship physical exam curricula. The survey was administered to all 141 Liaison Committee on Medical Education-accredited U.S. medical schools in October 2015. Results were aggregated across schools, and survey weights were used to adjust for response rate and school size. RESULTS: One hundred six medical schools (75%) responded. Seventy-nine percent of schools (84) began teaching the physical exam within the first two months of medical school. Fifty-six percent of schools (59) employed both a "head-to-toe" comprehensive approach and a clinical reasoning approach. Twenty-three percent (24) taught a portion of the physical exam interprofessionally. Videos, online modules, and simulators were used widely, and 39% of schools (41) used bedside ultrasonography. Schools reported a median of 4 formative assessments and 3 summative assessments, with 16% of schools (17) using criterion-based standard-setting methods for physical exam assessments. Results did not vary significantly by school size. CONCLUSIONS: There was wide variation in how medical schools taught the physical exam to preclerkship students. Common pedagogical approaches included early initiation of physical exam instruction, use of technology, and methods that support clinical reasoning and competency-based medical education. Approaches used by a minority of schools included interprofessional education, ultrasound, and criterion-based standard-setting methods for assessments. Opportunities abound for research into the optimal methods for teaching the physical exam.


Subject(s)
Clinical Clerkship/methods , Clinical Competence , Competency-Based Education/organization & administration , Curriculum , Education, Medical/organization & administration , Physical Examination/methods , Teaching , Adult , Female , Humans , Male , Schools, Medical/statistics & numerical data , Students, Medical/statistics & numerical data , Surveys and Questionnaires , United States , Young Adult
7.
Acad Med ; 93(5): 736-741, 2018 05.
Article in English | MEDLINE | ID: mdl-29116985

ABSTRACT

PURPOSE: To examine resources used in teaching the physical exam to preclerkship students at U.S. medical schools. METHOD: The Directors of Clinical Skills Courses developed a 49-question survey addressing resources and pedagogical methods employed in preclerkship physical exam curricula. The survey was sent to all 141 Liaison Committee on Medical Education-accredited medical schools in October 2015. Results were averaged across schools, and data were weighted by class size. RESULTS: Results from 106 medical schools (75% response rate) identified a median of 59 hours devoted to teaching the physical exam. Thirty-eight percent of time spent teaching the physical exam involved the use of standardized patients, 30% used peer-to-peer practice, and 25% involved examining actual patients. Approximately half of practice time with actual patients was observed by faculty. At 48% of schools (51), less than 15% of practice time was with actual patients, and at 20% of schools (21) faculty never observed students practicing with actual patients. Forty-eight percent of schools (51) did not provide compensation for their outpatient clinical preceptors. CONCLUSIONS: There is wide variation in the resources used to teach the physical examination to preclerkship medical students. At some schools, the amount of faculty observation of students examining actual patients may not be enough for students to achieve competency. A significant percentage of faculty teaching the physical exam remain uncompensated for their effort. Improving faculty compensation and increasing use of senior students as teachers might allow for greater observation and feedback and improved physical exam skills among students.


Subject(s)
Clinical Clerkship/methods , Clinical Competence/statistics & numerical data , Physical Examination/methods , Schools, Medical/statistics & numerical data , Teaching/statistics & numerical data , Curriculum , Humans , Surveys and Questionnaires
8.
J Neurol Sci ; 372: 506-509, 2017 Jan 15.
Article in English | MEDLINE | ID: mdl-27838003

ABSTRACT

Much of the care provided by practicing neurologists takes place in outpatient clinics. However, neurology trainees often have limited exposure to this setting. Adequate incorporation of outpatient care into neurology training is vital; however, it is often hampered by numerous challenges. We detail a number of these challenges and suggest potential means for improvement.


Subject(s)
Ambulatory Care Facilities , Ambulatory Care , Education, Medical, Graduate , Humans , Neurology/education
9.
Surgery ; 160(3): 552-64, 2016 09.
Article in English | MEDLINE | ID: mdl-27206333

ABSTRACT

BACKGROUND: We systematically reviewed the literature concerning simulation-based teaching and assessment of the Accreditation Council for Graduate Medical Education professionalism competencies to elucidate best practices and facilitate further research. METHODS: A systematic review of English literature for "professionalism" and "simulation(s)" yielded 697 abstracts. Two independent raters chose abstracts that (1) focused on graduate medical education, (2) described the simulation method, and (3) used simulation to train or assess professionalism. Fifty abstracts met the criteria, and seven were excluded for lack of relevant information. The raters, 6 professionals with medical education, simulation, and clinical experience, discussed 5 of these articles as a group; they calibrated coding and applied further refinements, resulting in a final, iteratively developed evaluation form. The raters then divided into 2 teams to read and assess the remaining articles. Overall, 15 articles were eliminated, and 28 articles underwent final analysis. RESULTS: Papers addressed a heterogeneous range of professionalism content via multiple methods. Common specialties represented were surgery (46.4%), pediatrics (17.9%), and emergency medicine (14.3%). Sixteen articles (57%) referenced a professionalism framework; 14 (50%) incorporated an assessment tool; and 17 (60.7%) reported debriefing participants, though in limited detail. Twenty-three (82.1%) articles evaluated programs, mostly using subjective trainee reports. CONCLUSION: Despite early innovation, reporting of simulation-based professionalism training and assessment is nonstandardized in methods and terminology and lacks the details required for replication. We offer minimum standards for reporting of future professionalism-focused simulation training and assessment as well as a basic framework for better mapping proper simulation methods to the targeted domain of professionalism.


Subject(s)
Education, Medical, Graduate , Professionalism/education , Simulation Training , Humans
10.
J Clin Neurosci ; 28: 20-3, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26896906

ABSTRACT

This study examined how volume in certain patient case types and breadth across patient case types in the outpatient clinic setting are related to Neurology Clerkship student performance. Case logs from the outpatient clinic experience of 486 students from The University of Chicago Pritzker School of Medicine, USA, participating in the 4-week Neurology Clerkship from July 2008 to June 2013 were reviewed. A total of 12,381 patient encounters were logged and then classified into 13 diagnostic categories. We analyzed how volume of cases within categories and breadth of cases across categories relate to scores on the National Board of Medical Examiners Clinical Subject Examination for Neurology and a Neurology Clerkship Objective Structured Clinical Examination. Volume of cases was significantly correlated with the National Board of Medical Examiners Clinical Subject Examination for Neurology (r = .290, p < .001), the Objective Structured Clinical Examination physical examination (r = .236, p = .011), and the Objective Structured Clinical Examination patient note (r = .238, p = .010). Breadth of cases was significantly correlated with the National Board of Medical Examiners Clinical Subject Examination for Neurology (r = .231, p = .017) but was not significantly correlated with any component of the Objective Structured Clinical Examination. Volume of cases correlated with higher performance on measures of specialty knowledge and clinical skill; fewer relationships emerged between breadth of cases and performance on the same measures. This study provides guidance to educators who must decide how much emphasis to place on volume versus breadth of cases in outpatient clinic learning experiences.
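The r values above are Pearson product-moment correlations between case counts and examination scores. A minimal sketch of the computation, with invented volume/score pairs (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical: outpatient case volume vs. shelf-exam percent score.
volume = [18, 25, 22, 30, 15, 27, 20, 24]
score  = [61, 70, 66, 75, 58, 72, 64, 69]
print(round(pearson_r(volume, score), 3))
```

A p value for each r would then come from a t test of the null hypothesis r = 0 with n - 2 degrees of freedom.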


Subject(s)
Ambulatory Care Facilities/standards , Clinical Clerkship/standards , Curriculum , Education, Medical/standards , Neurology/education , Ambulatory Care Facilities/statistics & numerical data , Clinical Clerkship/statistics & numerical data , Education, Medical/statistics & numerical data , Humans , Neurology/statistics & numerical data
11.
Neurology ; 85(18): 1623-9, 2015 Nov 03.
Article in English | MEDLINE | ID: mdl-26432851

ABSTRACT

OBJECTIVES: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). METHODS: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. RESULTS: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. CONCLUSIONS: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed.
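In a G-theory decision study like the one above, projected reliability for an absolute (Φ) coefficient follows from the ratio of person variance to person variance plus error variance averaged over n cases. A minimal sketch with invented variance components (not the study's estimates) of how one projects the number of cases needed to clear a 0.70 threshold:

```python
def phi_coefficient(var_person, var_abs_error, n_cases):
    """Absolute (Phi) reliability when averaging over n_cases cases."""
    return var_person / (var_person + var_abs_error / n_cases)

# Hypothetical variance components for a single case
# (illustrative only; not the paper's estimated components).
var_p, var_e = 0.45, 0.55

# Smallest number of cases projected to reach Phi >= 0.70.
n = 1
while phi_coefficient(var_p, var_e, n) < 0.70:
    n += 1
print(n)  # → 3 with these illustrative components
```

Increasing n shrinks only the error term, which is why adding cases (or domains, in a fuller D-study) raises the projected reliability.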


Subject(s)
Clinical Competence/standards , Education, Medical, Undergraduate , Neurologic Examination/standards , Neurology/education , Patient Simulation , Statistics as Topic , Clinical Clerkship , Humans , Reproducibility of Results , Students, Medical