ABSTRACT
BACKGROUND: This review, which focused on faculty development initiatives designed to improve teaching effectiveness, synthesized findings related to intervention types, study characteristics, individual and organizational outcomes, key features, and community building.
METHODS: This review included 111 studies (between 2002 and 2012) that met the review criteria.
FINDINGS: Overall satisfaction with faculty development programs was high. Participants reported increased confidence, enthusiasm, and awareness of effective educational practices. Gains in knowledge and skills, and self-reported changes in teaching behaviors, were frequently noted. Observed behavior changes included enhanced teaching practices, new educational initiatives, new leadership positions, and increased academic output. Organizational changes were infrequently explored. Key features included evidence-informed educational design, relevant content, experiential learning, feedback and reflection, educational projects, intentional community building, longitudinal program design, and institutional support.
CONCLUSION: This review holds implications for practice and research. Moving forward, we should build on current success, broaden the focus beyond individual teaching effectiveness, develop programs that extend over time, promote workplace learning, foster community development, and secure institutional support. We should also embed studies in a theoretical framework, conduct more qualitative and mixed methods studies, assess behavioral and organizational change, evaluate transfer to practice, analyze key features, and explore the role of faculty development within the larger organizational context.
Subject(s)
Faculty, Medical; Staff Development/methods; Teaching; Guidelines as Topic; Professional Competence; Teaching/standards

ABSTRACT
In this article, we outline criteria for good assessment that include: (1) validity or coherence, (2) reproducibility or consistency, (3) equivalence, (4) feasibility, (5) educational effect, (6) catalytic effect, and (7) acceptability. Many of these criteria have been described before, and we continue to support their importance here. However, we place particular emphasis on the catalytic effect of assessment: that is, whether the assessment provides results and feedback in a fashion that creates, enhances, and supports education. These criteria do not apply equally well to all situations. Consequently, we discuss how the purpose of the test (summative versus formative) and the perspectives of stakeholders (examinees, patients, teachers and educational institutions, the healthcare system, and regulators) influence the importance of the criteria. Finally, we offer a series of practice points as well as next steps that should be taken with the criteria. Specifically, we recommend that the criteria be expanded or modified to take account of: (1) the perspectives of patients and the public, (2) the intimate relationship between assessment, feedback, and continued learning, (3) systems of assessment, and (4) accreditation systems.