1.
Med Teach ; : 1-11, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38766754

ABSTRACT

Curriculum change is relatively frequent in health professional education. Formal, planned curriculum review must be conducted periodically to incorporate new knowledge and skills, changing teaching and learning methods, or changing roles and expectations of graduates. Unplanned curriculum evolution arguably happens continually, usually taking the form of "minor" changes that in combination over time may produce a substantially different programme. However, reviewing assessment practices is less likely to be a major consideration during curriculum change, overlooking the potential for unintended consequences for learning. This includes potentially undermining or negating the impact of even well-designed and important curriculum changes. Changes to any component of the curriculum "ecosystem" - graduate outcomes, content, delivery or assessment of learning - should trigger an automatic review of the whole ecosystem to maintain constructive alignment. Consideration of potential impact on assessment is essential to support curriculum change. Powerful contextual drivers of a curriculum include national examinations and programme accreditation, so each assessment programme sits within its own external context. Internal drivers are also important, such as adoption of new learning technologies and learning preferences of students and faculty. Achieving optimal and sustainable outcomes from a curriculum review requires strong governance and support, stakeholder engagement, curriculum and assessment expertise, and internal quality assurance processes. This consensus paper provides guidance on managing assessment during curriculum change, building on evidence and the contributions of previous consensus papers.

2.
Med Educ ; 58(5): 576-577, 2024 May.
Article in English | MEDLINE | ID: mdl-38618715
3.
Acad Med ; 99(3): 325-330, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-37816217

ABSTRACT

PURPOSE: The United States Medical Licensing Examination (USMLE) comprises a series of assessments required for the licensure of U.S. MD-trained graduates as well as those trained internationally. Demonstrating a relationship between these examinations and outcomes of care is desirable for a process that seeks to provide patients with safe and effective health care.
METHOD: This was a retrospective cohort study of 196,881 hospitalizations in Pennsylvania over a 3-year period (January 1, 2017 to December 31, 2019) for 5 primary diagnoses: heart failure, acute myocardial infarction, stroke, pneumonia, or chronic obstructive pulmonary disease. The 1,765 attending physicians for these hospitalizations self-identified as family physicians or general internists. A converted score based on USMLE Step 1, Step 2 Clinical Knowledge, and Step 3 scores was available, and the outcome measures were in-hospital mortality and log length of stay (LOS). The research team controlled for characteristics of patients, hospitals, and physicians.
RESULTS: For in-hospital mortality, the adjusted odds ratio was 0.94 (95% confidence interval [CI] = 0.90, 0.99; P < .02). Each standard deviation increase in the converted score was associated with a 5.51% reduction in the odds of in-hospital mortality. For log LOS, the adjusted estimate was 0.99 (95% CI = 0.98, 0.99; P < .001). Each standard deviation increase in the converted score was associated with a 1.34% reduction in log LOS.
CONCLUSIONS: Better provider USMLE performance was associated with lower in-hospital mortality and shorter LOS for patients, although the magnitude of the latter effect is unlikely to be of practical significance. These findings add to the body of evidence on the validity of the USMLE licensure program.


Subject(s)
Educational Measurement; Internship and Residency; Humans; United States; Retrospective Studies; Licensure, Medical; Hospitalization; Pennsylvania; Physicians, Family
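The RESULTS figures in the abstract above follow from simple arithmetic: an adjusted odds ratio (OR) below 1 corresponds to a percent reduction in odds of (1 - OR) x 100. A minimal sketch of that conversion follows; the function name and the unrounded estimates (approximately 0.9449 and 0.9866, back-calculated from the reported 5.51% and 1.34%) are assumptions for illustration, not values taken from the paper, whose multivariable regression model is not reproduced here.

```python
def percent_reduction_in_odds(odds_ratio: float) -> float:
    """Percent reduction in odds implied by an odds ratio below 1.

    E.g. an OR of 0.94 means the odds are 0.94 times the baseline,
    i.e. roughly a 6% reduction; the paper's reported 5.51% implies
    an unrounded OR of about 0.9449.
    """
    return (1.0 - odds_ratio) * 100.0


# Mortality: reported OR 0.94 (rounded); assumed unrounded value 0.9449.
print(round(percent_reduction_in_odds(0.9449), 2))  # close to the reported 5.51%

# Log LOS: reported estimate 0.99 (rounded); assumed unrounded value 0.9866.
print(round(percent_reduction_in_odds(0.9866), 2))  # close to the reported 1.34%
```

This only illustrates how a rounded estimate of 0.94 and a more precise 5.51% can both be correct statements of the same underlying coefficient.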
4.
Med Educ ; 54(12): 1086-1087, 2020 12.
Article in English | MEDLINE | ID: mdl-33210353
5.
Med Educ ; 54(11): 979-980, 2020 11.
Article in English | MEDLINE | ID: mdl-32895986
8.
Med Teach ; 40(11): 1102-1109, 2018 11.
Article in English | MEDLINE | ID: mdl-30299187

ABSTRACT

Introduction: In 2010, the Ottawa Conference produced a set of consensus criteria for good assessment. These were well received, and since then the working group has monitored their use. As part of the 2010 report, it was recommended that consideration be given in the future to preparing similar criteria for systems of assessment. Recent developments in the field suggest that it would be timely to undertake that task, so the working group was reconvened, with changes in membership to reflect broad global representation.
Methods: Consideration was given to whether the initially proposed criteria continued to be appropriate for single assessments, and the group believed that they were. Consequently, we reiterate the criteria that apply to individual assessments and duplicate relevant portions of the 2010 report.
Results and discussion: This paper also presents a new set of criteria that apply to systems of assessment and, recognizing the challenges of implementation, offers several issues for further consideration. Among these issues are the increasing diversity of candidates and programs, the importance of legal defensibility in high-stakes assessments, globalization and the interest in portable recognition of medical training, and the interest among employers and patients in how medical education is delivered and how progression decisions are made.


Subject(s)
Educational Measurement/methods; Educational Measurement/standards; Health Personnel/education; Consensus; Humans; Reproducibility of Results
9.
Med Educ ; 52(5): 548-549, 2018 05.
Article in English | MEDLINE | ID: mdl-29672938
11.
Med Educ ; 51(5): 533-534, 2017 05.
Article in English | MEDLINE | ID: mdl-28394057