Results 1 - 4 of 4
1.
Teach Learn Med ; 13(3): 153-60, 2001.
Article in English | MEDLINE | ID: mdl-11475658

ABSTRACT

BACKGROUND: The assessment of the effectiveness of faculty development programs is increasingly important in medical schools and academic medical centers but is difficult to accomplish. PURPOSE: We investigated the usefulness of retrospective self-assessments by program participants in combination with independent ratings of teaching performance by their trainees. METHODS: We used a single-sample, pre-post intervention design using multiple measures. Our assessment instruments were based on our institution's accepted teaching competencies. We measured participants' self-assessments of their teaching competencies before the program and their retrospective self-assessed improvements in these competencies after the program. We also used independent ratings of the participants' teaching competencies before and after their involvement in the program, as rated by their own trainees (fellows, residents, and medical students). Selected teaching competencies comprised the intended learning outcomes of the faculty development program. RESULTS: Participants' preprogram self-assessments showed that the program was appropriately matched to several topics participants identified as needs, but it also included topics that participants did not identify as needs. The retrospective self-assessments showed improvements in teaching skills that previously were identified as needs, as well as in those in which participants originally felt quite competent. The independent ratings by trainees showed overall positive improvements (some statistically significant). The retrospective self-assessed improvements correlated positively with the independent ratings by their trainees (p < .01). CONCLUSIONS: This evaluation strategy showed that the faculty development program improved the teaching competencies of the participants. Both the program participants' retrospective self-assessments and the independent ratings by their trainees showed postprogram improvements and were positively intercorrelated.
The use of these multiple measures is a viable approach to evaluating the impact of a faculty development program. Potentially either approach could be used alone, but in combination they provide a feasible, valid, and reliable evaluation.
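The abstract reports that retrospective self-assessed improvements correlated positively with trainees' independent ratings (p < .01) but does not show the computation. A minimal sketch of such a correlation, using Pearson's r on hypothetical participant-level data (the variable names and values below are illustrative, not from the study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: one row per program participant.
# Self-assessed improvement and change in trainee ratings, on a 5-point scale.
retro_self_improvement = np.array([0.8, 1.2, 0.3, 1.5, 0.9, 0.4, 1.1, 0.7])
trainee_rating_change = np.array([0.5, 1.0, 0.2, 1.3, 0.8, 0.3, 0.9, 0.6])

# Correlate the two measures; a significant positive r would support
# convergence between self-assessment and independent trainee ratings.
r, p = pearsonr(retro_self_improvement, trainee_rating_change)
print(f"r = {r:.3f}, p = {p:.4f}")
```

With real data one might prefer a rank-based (nonparametric) correlation, consistent with the "Statistics, Nonparametric" indexing below; `scipy.stats.spearmanr` is a drop-in replacement.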


Subject(s)
Faculty, Medical/standards; Professional Competence/standards; Program Evaluation/methods; Staff Development; Education, Medical/standards; Female; Humans; Male; Statistics, Nonparametric
2.
J Gen Intern Med ; 15(6): 366-71, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10886470

ABSTRACT

OBJECTIVE: In a study conducted over 3 large symposia on intensive review of internal medicine, we previously assessed the features that were most important to course participants in evaluating the quality of a lecture. In this study, we attempt to validate these observations by assessing prospectively the extent to which ratings of specific lecture features would predict the overall evaluation of lectures. MEASUREMENTS AND MAIN RESULTS: After each lecture, 143 to 355 course participants rated the overall lecture quality of 69 speakers involved in a large symposium on intensive review of internal medicine. In addition, 7 selected participants and the course directors rated specific lecture features and overall quality for each speaker. The relations among the variables were assessed through Pearson correlation coefficients and cluster analysis. Regression analysis was performed to determine which features would predict the overall lecture quality ratings. The features that most highly correlated with ratings of overall lecture quality were the speaker's abilities to identify key points (r = .797) and be engaging (r = .782), the lecture clarity (r = .754), and the slide comprehensibility (r = .691) and format (r = .660). The three lecture features of engaging the audience, lecture clarity, and using a case-based format were identified through regression as the strongest predictors of overall lecture quality ratings (R2 = 0.67, P = 0.0001). CONCLUSIONS: We have identified core lecture features that positively affect the success of the lecture. We believe our findings are useful for lecturers wanting to improve their effectiveness and for educators who design continuing medical education curricula.
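The regression reported above (R2 = 0.67) can be sketched with ordinary least squares: regress overall quality on the three predictor features and compute R². The ratings below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical mean ratings for 10 speakers (1-5 scale): three lecture
# features and the overall quality rating they are assumed to predict.
engaging = np.array([4.5, 3.2, 4.8, 2.9, 3.8, 4.1, 2.5, 4.9, 3.5, 4.3])
clarity = np.array([4.2, 3.5, 4.6, 3.0, 4.0, 3.9, 2.8, 4.7, 3.3, 4.4])
case_fmt = np.array([4.0, 3.0, 4.5, 2.7, 3.6, 4.2, 2.6, 4.8, 3.4, 4.1])
overall = np.array([4.3, 3.2, 4.7, 2.8, 3.9, 4.0, 2.6, 4.8, 3.4, 4.3])

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(overall), engaging, clarity, case_fmt])
beta, *_ = np.linalg.lstsq(X, overall, rcond=None)

# R^2 = 1 - SS_residual / SS_total.
pred = X @ beta
ss_res = np.sum((overall - pred) ** 2)
ss_tot = np.sum((overall - overall.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```

Because lecture features are typically highly intercorrelated, a real analysis would also check multicollinearity before interpreting individual coefficients.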


Subject(s)
Education, Medical, Continuing; Internal Medicine/education; Teaching/methods; Humans; Prospective Studies; Regression Analysis
3.
Acad Med ; 75(2): 161-6, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10693849

ABSTRACT

PURPOSE: Instruments that rate teaching effectiveness provide both positive and negative feedback to clinician-educators, helping them improve their teaching. The authors developed the Clinical Teaching Effectiveness Instrument, which was theory-based and generic across their entire academic medical center, The Cleveland Clinic Foundation. They tested it for reliability, validity, and usability. METHOD: In 1997, using an iterative qualitative development process involving key stakeholders, the authors developed an institution-wide instrument to routinely evaluate clinical faculty. The resulting instrument has 15 questions that use a five-point evaluation scale. The instrument, which was administered to medical students, residents, and fellows over a 20-month period, produced data that were rigorously tested for instrument characteristics, reliability, criterion-related and content validity, and usability. RESULTS: This instrument, implemented in all departments across the institution, produced data on a total of 711 clinician-educators. Correlation coefficients among the items were high (.57 to .77). The scores were reliable (g coefficient of 0.935), and the instrument had both content and criterion-related validity. CONCLUSIONS: The Cleveland Clinic's Clinical Teaching Effectiveness Instrument is reliable and valid, as well as usable. It can be used as an evaluation tool for a wide variety of clinical teaching settings.
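Inter-item correlations like those reported above (.57 to .77) come from the matrix of raw evaluations (one row per completed form, one column per item). A sketch with simulated 5-point ratings, using four items rather than the instrument's 15 purely for brevity; the generating model and numbers are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 completed evaluation forms with 4 items each: every item
# reflects a shared "teaching quality" factor plus item-specific noise,
# clipped and rounded to a 1-5 rating scale, so items intercorrelate.
quality = rng.normal(3.5, 0.8, size=200)
items = np.clip(quality[:, None] + rng.normal(0, 0.5, size=(200, 4)), 1, 5).round()

# Pairwise Pearson correlations between items (columns).
corr = np.corrcoef(items, rowvar=False)
off_diag = corr[~np.eye(4, dtype=bool)]
print(f"inter-item r range: {off_diag.min():.2f} to {off_diag.max():.2f}")
```

Note the study's reliability figure is a generalizability (g) coefficient, which additionally partitions variance across raters and occasions; the inter-item correlation matrix above is only the first descriptive step.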


Subject(s)
Academic Medical Centers; Education, Medical, Undergraduate; Internship and Residency; Teaching/standards; Faculty, Medical; Humans; Students, Medical/psychology; Surveys and Questionnaires