Results 1 - 14 of 14
1.
BMC Med Educ ; 20(1): 325, 2020 Sep 22.
Article in English | MEDLINE | ID: mdl-32962692

ABSTRACT

BACKGROUND: Medical faculty's teaching performance is often measured using residents' feedback, collected by questionnaires. Researchers have extensively studied the psychometric qualities of the resulting ratings. However, these studies rarely consider the number of response categories and its consequences for residents' ratings of faculty's teaching performance. We compared the variability of residents' ratings measured by five- and seven-point response scales. METHODS: This retrospective study used teaching performance data from Dutch anaesthesiology residency training programs. The ratings were collected with five- and seven-point versions of the questionnaires of the extensively studied System for Evaluation of Teaching Qualities (SETQ). We inspected the ratings' variability by comparing standard deviations, interquartile ranges, and frequency (percentage) distributions, and used appropriate statistical tests to assess differences in frequency distributions and teaching performance scores. RESULTS: We examined 3379 residents' ratings and 480 aggregated faculty scores. Residents used the additional response categories provided by the seven-point scale - especially those differentiating between positive performances. Residents' ratings and aggregated faculty scores were more evenly distributed on the seven-point scale than on the five-point scale, and the seven-point scale showed a smaller ceiling effect. After rescaling, the mean scores and (most) standard deviations of ratings from both scales were comparable. CONCLUSIONS: Ratings from the seven-point scale were more evenly distributed and could potentially yield more nuanced, specific and user-friendly feedback. Still, both scales measured nearly identical teaching performance outcomes. In practice, residents and faculty members should discuss which response scale best fits their preferences and goals.
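As a reading aid, here is a minimal sketch of the kind of variability comparison this abstract describes, run on simulated ratings; the distributions are invented, and the rescaling step simply maps the seven-point range linearly onto the five-point range.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical residents' ratings on a 5-point and a 7-point scale.
five = rng.choice(np.arange(1, 6), size=1000, p=[0.01, 0.03, 0.16, 0.45, 0.35])
seven = rng.choice(np.arange(1, 8), size=1000,
                   p=[0.01, 0.02, 0.05, 0.12, 0.30, 0.30, 0.20])

for name, x, top in [("5-point", five, 5), ("7-point", seven, 7)]:
    q1, q3 = np.percentile(x, [25, 75])
    ceiling = np.mean(x == top)  # share of maximum ratings (ceiling effect)
    print(f"{name}: SD={x.std(ddof=1):.2f}, IQR={q3 - q1:.1f}, "
          f"ceiling={ceiling:.0%}")

# Rescale the 7-point ratings linearly onto the 5-point range so the means
# of the two scales can be compared directly, as the study does.
rescaled = 1 + (seven - 1) * (5 - 1) / (7 - 1)
print(f"means: {five.mean():.2f} (5-point) vs {rescaled.mean():.2f} (rescaled)")
```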


Subject(s)
Anesthesiology , Internship and Residency , Faculty, Medical , Humans , Retrospective Studies , Surveys and Questionnaires , Teaching
2.
J Contin Educ Health Prof ; 37(1): 9-18, 2017.
Article in English | MEDLINE | ID: mdl-28212117

ABSTRACT

INTRODUCTION: Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS: The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. The psychometric qualities and feasibility of the INCEPT were investigated using exploratory and confirmatory factor analyses, multilevel regression analyses relating narrative to numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses. RESULTS: For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self-)management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence came from the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations by three peers, three residents and three to four coworkers were sufficient. DISCUSSION: The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
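Two of the internal-consistency checks reported here, Cronbach's α and corrected item-total correlations, can be computed from an item-response matrix in a few lines; the sketch below uses simulated data, and all shapes and names are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of numeric responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total score excluding that item."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                         # shared construct
responses = latent + rng.normal(scale=0.8, size=(500, 6))  # six items
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
print("item-total r:", corrected_item_total(responses).round(2))
```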


Subject(s)
Clinical Competence/standards , Feedback , Physicians/standards , Adult , Attitude of Health Personnel , Factor Analysis, Statistical , Female , Humans , Male , Middle Aged , Patient-Centered Care/standards , Peer Review, Health Care/methods , Reproducibility of Results , Self-Management , Surveys and Questionnaires , Work Performance/standards
3.
Med Teach ; 38(5): 464-70, 2016 May.
Article in English | MEDLINE | ID: mdl-26166690

ABSTRACT

PURPOSE: The purpose of this study is to investigate how aspects of a teaching performance evaluation system may affect faculty's teaching performance improvement as perceived by residents over time. METHODS: Prospective multicenter cohort study conducted in the Netherlands between 1 September 2008 and 1 February 2013. Nine hundred and one residents and 1068 faculty of 65 teaching programs in 16 hospitals were invited to annually (self-)evaluate teaching performance using the validated, specialty-specific System for Evaluation of Teaching Qualities (SETQ). We used multivariable adjusted generalized estimating equations to analyze the effects of (i) residents' numerical feedback, (ii) narrative feedback, and (iii) faculty's participation in self-evaluation on residents' perception of faculty's teaching performance improvement. RESULTS: The average response rate over three years was 69% for faculty and 81% for residents. Higher numerical feedback scores were associated with residents rating faculty as having improved their teaching performance one year after the first measurement (regression coefficient, b: 0.077; 95% CI: 0.002-0.151; p = 0.045), but not after the second wave of receiving feedback and evaluating improvement. Receiving more suggestions for improvement was associated with improved teaching performance in subsequent years. CONCLUSIONS: Evaluation systems on clinical teaching performance appear helpful in enhancing teaching performance in residency training programs. High-performing teachers also appear to improve in the residents' perception.
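For readers unfamiliar with generalized estimating equations (GEE), the sketch below shows roughly how such a model is set up in Python with statsmodels, clustering repeated ratings on the faculty member; the data file and all column names are hypothetical stand-ins, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per resident rating per year.
df = pd.read_csv("setq_longitudinal.csv")

model = smf.gee(
    "perceived_improvement ~ prior_score + n_suggestions + self_evaluated",
    groups="faculty_id",          # repeated ratings cluster within faculty
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())           # coefficients comparable in spirit to b
```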


Subject(s)
Faculty, Medical , Internship and Residency , Quality Improvement , Teaching/standards , Female , Humans , Male , Middle Aged , Netherlands , Prospective Studies , Self Report
4.
Acad Med ; 91(2): 215-20, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26200579

ABSTRACT

Assessments of clinicians' professional performance have become more entrenched in clinical practice globally. Systems and tools have been developed and implemented, and factors that impact performance in response to assessments have been studied. The validity and reliability of data yielded by assessment tools have been studied extensively. However, there are important methodological and statistical issues that can impact the assessment of performance and change, and these are often omitted or ignored in research and practice. In this article, the authors address five of these issues and show how they can impact the validity of performance and change assessments, using empirical illustrations based on longitudinal data on clinicians' teaching performance. Specifically, the authors address the following: characteristics of a measurement scale that affect the performance data yielded by an assessment tool; different summary statistics of the same data that lead to opposing conclusions about performance and performance change; performance at the item level that does not easily translate to overall performance; how estimating performance change from two time-indexed measurements and assessing change retrospectively yield different results; and the context that affects performance and performance assessments. The authors explain how these issues affect the validity of performance assessments and offer suggestions for addressing them.
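The second issue, that different summary statistics of the same data can lead to opposing conclusions, is easy to demonstrate; in the toy example below, invented for illustration, two clinicians' rating distributions rank oppositely by mean and by median.

```python
import numpy as np

a = np.array([3, 3, 3, 3, 5, 5, 5])  # clinician A's ratings
b = np.array([4, 4, 4, 4, 4, 4, 1])  # clinician B's ratings

print("mean:   A =", round(a.mean(), 2), " B =", round(b.mean(), 2))  # A > B
print("median: A =", np.median(a), " B =", np.median(b))              # B > A
```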


Subject(s)
Clinical Competence/statistics & numerical data , Education, Medical, Graduate/statistics & numerical data , Educational Measurement , Clinical Competence/standards , Education, Medical, Graduate/standards , Humans , Reproducibility of Results
5.
Eval Health Prof ; 39(1): 21-32, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25280728

ABSTRACT

The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians' teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians' teaching performance.
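A confirmatory factor analysis like the one described here can be sketched in Python with the semopy package, assuming its Model/fit/calc_stats API; the item and factor names below are hypothetical stand-ins for the five SETQ domains, and the data file is invented.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical five-factor measurement model, three items per factor.
desc = """
climate    =~ c1 + c2 + c3
attitude   =~ a1 + a2 + a3
goals      =~ g1 + g2 + g3
evaluation =~ e1 + e2 + e3
feedback   =~ f1 + f2 + f3
"""

df = pd.read_csv("setq_items.csv")   # hypothetical trainee-level item data
model = Model(desc)
model.fit(df)
print(model.inspect())               # factor loadings and covariances
print(calc_stats(model).T)           # fit indices such as CFI and RMSEA
```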


Subject(s)
Education, Medical, Graduate/standards , Faculty, Medical/education , Faculty, Medical/standards , Hospitals, Teaching/standards , Factor Analysis, Statistical , Formative Feedback , Humans , Netherlands , Professional Competence , Program Evaluation , Psychometrics , Reproducibility of Results
6.
Perspect Med Educ ; 4(5): 264-267, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26399537

ABSTRACT

Evaluations of clinicians' teaching performance are usually a preliminary, although essential, step in quality management and improvement. This PhD project focused on testing the validity, reliability and impact of a performance evaluation system named the System for Evaluation of Teaching Qualities (SETQ) across specialties and centres in the Netherlands. The results of this project show that the SETQ questionnaires can provide clinicians with valid and reliable performance feedback that can enhance their teaching performance. We also investigated the predictive validity of the SETQ. In conclusion, the SETQ appears to be a helpful tool for improving clinicians' teaching performance.

7.
Med Teach ; 37(11): 1043-50, 2015.
Article in English | MEDLINE | ID: mdl-26313815

ABSTRACT

The researchers' assumptions invariably influence research outcomes. This is true for both qualitative and quantitative studies. Assumptions or choices regarding underlying theories, causal relations, study setting and population, sampling strategies, participant non-response, data collection, data analysis, and researchers' perceptions and interpretations of results are among factors that can induce uncertainty in research findings. Researchers tend to treat these factors as potential study limitations, but how they may impact research findings is rarely explicated and, therefore, mostly unknown. In this article, we approach uncertainty as unavoidable in research and argue that communicating about uncertainty can inform researchers, policy makers and practitioners about the validity and applicability of the study findings for their interests and contexts. We illustrate approaches to address, interpret, and explicate uncertainty in medical education research in both qualitative and quantitative paradigms. Across research paradigms, we call on researchers to consider the uncertainty in their research findings, employ appropriate methods to explore its extent and effects in their work, and communicate it explicitly in their research papers. This will help to advance our understanding of the nature and implications of the emerging knowledge in our field.
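One concrete way to explicate uncertainty in quantitative findings, in the spirit of this article, is to report resampling-based intervals alongside point estimates; the sketch below computes a bootstrap confidence interval for a mean rating from invented data.

```python
import numpy as np

rng = np.random.default_rng(2)
ratings = rng.normal(loc=3.9, scale=0.4, size=120)  # hypothetical sample

# Resample the data 10,000 times to see how much the mean could plausibly vary.
boot_means = np.array([
    rng.choice(ratings, size=ratings.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {ratings.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```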


Subject(s)
Health Occupations/education , Research , Uncertainty , Attitude , Humans , Research Design , Research Personnel/psychology
8.
Int J Behav Med ; 22(6): 683-98, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25733349

ABSTRACT

BACKGROUND: It is widely held that the occupational well-being of physicians may affect the quality of their patient care. Yet, there is still no comprehensive synthesis of the evidence on this connection. PURPOSE: This systematic review studied the effect of physicians' occupational well-being on the quality of patient care. METHODS: We systematically searched PubMed, Embase, and PsycINFO from inception until August 2014. Two authors independently reviewed the studies. Empirical studies that explored the association between physicians' occupational well-being and patient care quality were considered eligible. Data were systematically extracted on study design, participants, measurements, and findings. The Medical Education Research Study Quality Instrument (MERSQI) was used to assess study quality. RESULTS: Ultimately, 18 studies were included. Most studies employed an observational design and were of average quality. Most studies reported positive associations of occupational well-being with patient satisfaction, patient adherence to treatment, and interpersonal aspects of patient care. Studies reported conflicting findings for occupational well-being in relation to technical aspects of patient care. One study found no association between occupational well-being and patient health outcomes. CONCLUSIONS: The association between physicians' occupational well-being and the ultimate goal of health care, improved patient health, remains understudied. Nonetheless, research to date indicates that physicians' occupational well-being can contribute to better patient satisfaction and interpersonal aspects of care. These insights may help shape policies on physicians' well-being and quality of care.


Subject(s)
Job Satisfaction , Patient Care , Physicians/psychology , Humans , Patient Care/psychology , Patient Care/standards , Physician-Patient Relations , Quality Improvement
9.
PLoS One ; 9(11): e112805, 2014.
Article in English | MEDLINE | ID: mdl-25393006

ABSTRACT

BACKGROUND: Teamwork between clinical teachers is a challenge in postgraduate medical training. Although there are several instruments available for measuring teamwork in health care, none of them are appropriate for teaching teams. The aim of this study is to develop an instrument (TeamQ) for measuring teamwork, to investigate its psychometric properties and to explore how clinical teachers assess their teamwork. METHOD: To select the items to be included in the TeamQ questionnaire, we conducted a content validation in 2011, using a Delphi procedure to which 40 experts were invited. Next, for pilot testing the preliminary tool, 1446 clinical teachers from 116 teaching teams were requested to complete the TeamQ questionnaire. For the data analyses we used principal component analysis, internal consistency reliability coefficients, and generalizability analysis to determine the number of evaluations needed to obtain reliable estimates. Lastly, median TeamQ scores were calculated per team to explore the levels of teamwork. RESULTS: In total, 31 experts participated in the Delphi study and 114 teams participated in the TeamQ pilot. The median team response was 7 evaluations per team. The principal component analysis revealed 11 factors, of which 8 were included. The reliability coefficients of the TeamQ scales ranged from 0.75 to 0.93. The generalizability analysis revealed that 5 to 7 evaluations were needed to obtain internal reliability coefficients of 0.70. In terms of teamwork, the clinical teachers scored residents' empowerment as the highest TeamQ scale and feedback culture as the area that would most benefit from improvement. CONCLUSIONS: This study provides initial evidence of the validity of an instrument for measuring teamwork in teaching teams. The high response rates and the low number of evaluations needed for reliably measuring teamwork indicate that TeamQ is feasible for use by teaching teams. Future research could explore the effectiveness of feedback on teamwork in follow-up measurements.
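The generalizability step, how many evaluations are needed before a team's mean score is reliable, reduces to a ratio of variance components; the sketch below estimates them with a random-intercept model in statsmodels, with invented file and column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teamq_scores.csv")        # hypothetical long-format data

# Random-intercept model: score = grand mean + team effect + rater noise.
fit = smf.mixedlm("score ~ 1", df, groups=df["team_id"]).fit()
var_between = float(fit.cov_re.iloc[0, 0])  # team-level variance
var_within = fit.scale                      # rater/residual variance

# Reliability of a mean of n evaluations: vb / (vb + vw / n).
# Solving vb / (vb + vw / n) >= 0.70 for n gives:
n_needed = int(np.ceil(0.70 / (1 - 0.70) * var_within / var_between))
print(f"evaluations needed for reliability >= 0.70: {n_needed}")
```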


Subject(s)
Education, Medical, Graduate/methods , Patient Care Team , Surveys and Questionnaires , Adult , Female , Humans , Male
10.
World J Surg ; 38(11): 2761-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24867473

ABSTRACT

BACKGROUND: This study evaluates how residents' evaluations and self-evaluations of surgeons' teaching performance evolve after two cycles of evaluation, reporting, and feedback. Furthermore, the influence of over- and underestimating one's own performance on subsequent teaching performance was investigated. METHODS: In a multicenter cohort study, 351 surgeons evaluated themselves and were also evaluated by residents during annual evaluation periods for three subsequent years. At the end of each evaluation period, surgeons received a personal report summarizing the residents' feedback. Changes in each surgeon's teaching performance, evaluated on a five-point scale, were studied using growth models. The effect of surgeons over- or underestimating their own performance on the improvement of teaching performance was studied using adjusted multivariable regressions. RESULTS: Compared with the first (median score: 3.83, 20th to 80th percentile score: 3.46-4.16) and second (median: 3.82, 20th to 80th: 3.46-4.14) evaluation periods, residents evaluated surgeons' teaching performance higher during the third evaluation period (median: 3.91, 20th to 80th: 3.59-4.27), p < 0.001. Surgeons did not alter self-evaluation scores over the three periods. Surgeons who overestimated their teaching performance received lower subsequent performance scores from residents (regression coefficient b: -0.08, 95% confidence limits (CL): -0.18, 0.02) and themselves (b: -0.12, 95% CL: -0.21, -0.02). Surgeons who underestimated their performance subsequently scored themselves higher (b: 0.10, 95% CL: 0.03, 0.16), but were evaluated equally by residents. CONCLUSIONS: Residents' evaluation of surgeons' teaching performance was enhanced after two cycles of evaluation, reporting, and feedback. Overestimating one's own teaching performance could impede subsequent performance.
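A sketch of the over-/underestimation analysis: the gap between a surgeon's self-evaluation and the residents' mean rating in one period is used to predict the next period's rating. This is a plain OLS approximation of the adjusted multivariable regressions reported above, and all names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per surgeon per evaluation period.
df = pd.read_csv("surgeon_periods.csv")
df["gap"] = df["self_score"] - df["resident_score"]  # > 0 means overestimation

df = df.sort_values(["surgeon_id", "period"])
df["next_resident_score"] = (
    df.groupby("surgeon_id")["resident_score"].shift(-1)
)

fit = smf.ols("next_resident_score ~ gap + resident_score", data=df).fit()
print(fit.params["gap"])  # a negative value would mirror the reported b = -0.08
```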


Subject(s)
General Surgery/education , Internship and Residency , Knowledge of Results, Psychological , Professional Competence , Surgeons , Teaching , Cohort Studies , Female , Humans
11.
Int J Qual Health Care ; 26(4): 426-81, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24845069

ABSTRACT

PURPOSE: To review systematically the impact of clinicians' personality and observed interpersonal behaviors on the quality of their patient care. DATA SOURCES: We searched MEDLINE, EMBASE and PsycINFO from inception through January 2014, using both free text words and subject headings, without language restriction. Additional hand-searching was performed. STUDY SELECTION: The PRISMA framework guided (the reporting of) study selection and data extraction. Eligible articles were selected by sequential review of title, abstract, and full text. DATA EXTRACTION: Data on study setting, participants, personality traits or interpersonal behaviors, outcome measures and limitations were extracted in a systematic way. RESULTS OF DATA SYNTHESIS: Our systematic search yielded 10,476 unique hits. Ultimately, 85 studies met all inclusion criteria, 4 on clinicians' personality and 81 on their interpersonal behaviors. The studies on interpersonal behaviors reported instrumental (n = 45) and affective (n = 59) verbal behaviors or nonverbal behaviors (n = 20). Outcome measures in the studies were quality of processes of care (n = 68) and patient health outcomes (n = 35). These categories were not mutually exclusive. The majority of the studies found little or no effect of clinicians' personality traits and their interpersonal behaviors on the quality of patient care. The few studies that found an effect were mostly observational studies that did not address possible uncontrolled confounding. CONCLUSIONS: There is no strong empirical evidence that specific interpersonal behaviors lead to enhanced quality of care. These findings could imply that clinicians can adapt their interactions to patients' needs and preferences instead of displaying certain specific behaviors per se.


Subject(s)
Behavior , Health Personnel/psychology , Personality , Quality of Health Care/statistics & numerical data , Attitude of Health Personnel , Humans , Patient Satisfaction , Professional-Patient Relations
12.
PLoS One ; 8(7): e69449, 2013.
Article in English | MEDLINE | ID: mdl-23936020

ABSTRACT

BACKGROUND: In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; often only the researchers' primary assumptions are presented. This is particularly true for research on the effect of faculty's teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for the transparent, formal presentation of the causal assumptions underlying assessments of the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. METHODS: This study revisits a previously published study about the influence of faculty's teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. RESULTS: The different causal models were compatible with major differences in the magnitude of the relationship between faculty's teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. CONCLUSIONS: Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which can then guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of the causal assumptions made in each study.
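The mechanics of the sensitivity analysis are simple to sketch: each DAG implies an adjustment set, and changing the adjustment set changes the estimated odds ratio. Below, the same exposure-outcome association is estimated under two invented causal assumptions; the data file, variable names, and the choice of "specialty" as the contested covariate are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("role_model.csv")   # hypothetical faculty-level data

# DAG 1: specialty is assumed to confound the association -> adjust for it.
m1 = smf.logit("role_model ~ teaching_score + C(specialty)", data=df).fit()
# DAG 2: specialty is assumed to lie on the causal path -> leave it out.
m2 = smf.logit("role_model ~ teaching_score", data=df).fit()

for label, m in [("adjusted", m1), ("unadjusted", m2)]:
    print(label, "OR =", round(np.exp(m.params["teaching_score"]), 2))
```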


Subject(s)
Faculty , Mentors/education , Models, Theoretical , Physicians , Professional Competence , Teaching , Humans , Models, Statistical
13.
J Surg Educ ; 69(4): 511-20, 2012.
Article in English | MEDLINE | ID: mdl-22677591

ABSTRACT

BACKGROUND: In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self-evaluations correlated with the residents' evaluations of those surgeons. MATERIALS AND METHODS: We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation of Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. RESULTS: The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self-evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self-evaluations and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. CONCLUSIONS: The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self-evaluations and residents' evaluations suggests the need to use external feedback sources in surgeons' informed self-evaluation.
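The self- versus resident-evaluation comparison boils down to aggregating resident ratings per surgeon and correlating them with the self-ratings domain by domain, roughly as sketched below; the files, column names, and domain labels are hypothetical.

```python
import pandas as pd

residents = pd.read_csv("resident_evals.csv")  # one row per resident rating
self_ev = pd.read_csv("self_evals.csv")        # one row per surgeon

domains = ["climate", "attitude", "goals", "evaluation", "feedback"]
agg = residents.groupby("surgeon_id")[domains].mean()   # mean per surgeon
merged = agg.join(self_ev.set_index("surgeon_id"), rsuffix="_self")

for d in domains:
    r = merged[d].corr(merged[f"{d}_self"])    # Pearson correlation
    print(f"{d}: r = {r:.2f}")
```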


Subject(s)
Education, Medical, Graduate/organization & administration , Faculty, Medical , Self-Assessment , Specialties, Surgical/education , Teaching/standards , Clinical Competence , Cohort Studies , Evaluation Studies as Topic , Female , Humans , Internship and Residency/organization & administration , Male , Netherlands , Program Evaluation , Prospective Studies , Reproducibility of Results , Task Performance and Analysis , Teaching/trends
14.
PLoS One ; 7(3): e32089, 2012.
Article in English | MEDLINE | ID: mdl-22427818

ABSTRACT

OBJECTIVE: Previous studies identified different typologies of role models (as teacher/supervisor, physician and person) and explored which of faculty's characteristics could distinguish good role models. The aim of this study was to explore how and to what extent clinical faculty's teaching performance influences residents' evaluations of faculty's different role modelling statuses, especially across different specialties. METHODS: In a prospective multicenter multispecialty study of faculty's teaching performance, we used web-based questionnaires to gather empirical data from residents. The main outcome measures were the different typologies of role modelling. The predictors were faculty's overall teaching performance and faculty's teaching performance on specific domains of teaching. The data were analyzed using multilevel regression equations. RESULTS: In total, 219 residents (69% response rate) filled out 2111 questionnaires about 423 faculty (96% response rate). Faculty's overall teaching performance influenced all role model typologies (ORs: from 8.0 to 166.2). Among the specific domains of teaching, all three role model typologies were strongly associated with "professional attitude towards residents" (ORs: 3.28 for the teacher/supervisor role, 2.72 for the physician role and 7.20 for the person role). Further, the teacher/supervisor role was strongly associated with "feedback" and "learning climate" (ORs: 3.23 and 2.70). However, the associations of the specific domains of teaching with faculty's role modelling varied widely across specialties. CONCLUSION: This study suggests that faculty can substantially enhance their role modelling by improving their teaching performance. The amount of influence that specific domains of teaching have on role modelling differs across specialties.
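A rough approximation of this set-up: a logistic model for each binary role model status with the questionnaires clustered within residents, here via GEE; exponentiated coefficients give the odds ratios. This is a sketch, not the study's exact multilevel specification, and all column names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("role_model_evals.csv")  # one row per questionnaire

model = smf.gee(
    "is_role_model ~ overall_teaching + attitude + feedback + climate",
    groups="resident_id",                 # questionnaires cluster on residents
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
fit = model.fit()
print(np.exp(fit.params))                 # odds ratios
```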


Subject(s)
Faculty, Medical/statistics & numerical data , Physician's Role/psychology , Students, Medical/psychology , Teaching/statistics & numerical data , Humans , Interdisciplinary Studies , Internet , Internship and Residency , Prospective Studies , Regression Analysis , Surveys and Questionnaires , Teaching/methods