Results 1 - 5 of 5
1.
Med Teach; 41(7): 787-794, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30912989

ABSTRACT

Purpose: Examiner training has an inconsistent impact on subsequent performance. To understand this variation, we explored how examiners think about changing the way they assess. Method: We provided comparative data to 17 experienced examiners about their assessments, captured their sense-making processes using a modified think-aloud protocol, and identified patterns by inductive thematic analysis. Results: We observed five sense-making processes: (1) testing personal relevance, (2) interpretation, (3) attribution, (4) considering the need for change, and (5) considering the nature of change. Three observed meta-themes describe the manner of examiners' thinking: Guarded curiosity, where examiners expressed curiosity over how their judgments compared with others' but also guardedness about the relevance of the comparisons; Dysfunctional assimilation, where examiners' interpretation and attribution exhibited cognitive anchoring, personalization, and affective bias; and Moderated conservatism, where examiners expressed openness to change but also loyalty to their judgment-framing values and aphorisms. Conclusions: Our examiners engaged in complex processes as they considered changing their assessments. The 'stabilising' mechanisms some used resembled learners assimilating educational feedback. If these are typical examiner responses, they may well explain the variable impact of examiner training, and they have significant implications for the pursuit of meaningful and defensible judgment-based assessment.


Subject(s)
Educational Measurement/methods; Educational Measurement/standards; Formative Feedback; Judgment; Professional Competence/standards; Staff Development/organization & administration; Humans; Reference Standards; Staff Development/standards
2.
J Contin Educ Health Prof; 35(2): 91-8, 2015.
Article in English | MEDLINE | ID: mdl-26115108

ABSTRACT

INTRODUCTION: Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. METHODS: The KSF provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability; response rates and unstructured interviews to evaluate feasibility; and factor structure and correlation studies to evaluate validity. RESULTS: On a voluntary basis, the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. DISCUSSION: This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable an assessment method to inform appraisal in nurses as it is in other health professional groups.
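
For readers unfamiliar with generalizability theory, the decision-study arithmetic behind the "11 responses for G = 0.7" finding can be sketched in a few lines. The variance components below are invented placeholders for illustration; they are not values reported in the study.

```python
import math

# Illustrative decision (D-) study for an MSF instrument: how the
# generalizability coefficient of a mean score grows with the number
# of raters. The variance components are hypothetical, NOT study values.

def g_coefficient(var_person: float, var_error: float, n_raters: int) -> float:
    """G for a mean over n_raters: true-score variance / total variance."""
    return var_person / (var_person + var_error / n_raters)

def raters_needed(var_person: float, var_error: float, target_g: float = 0.70) -> int:
    """Smallest n with G(n) >= target_g, from G/(1 - G) = n * var_person / var_error."""
    return math.ceil((target_g / (1.0 - target_g)) * var_error / var_person)

var_p, var_e = 0.12, 0.55   # hypothetical person and error variances
for n in (5, 8, 11, 15):
    print(f"n = {n:2d}  G = {g_coefficient(var_p, var_e, n):.2f}")
print("raters needed for G >= 0.70:", raters_needed(var_p, var_e))
```

With these placeholder components the threshold happens to fall at 11 raters, mirroring the figure the abstract reports; substituting the real variance components from the generalizability analysis would reproduce the study's decision study.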


Subject(s)
Clinical Competence; Employee Performance Appraisal/methods; Feedback; Nursing Staff; Surveys and Questionnaires/standards; England; Focus Groups; Humans; Psychometrics; Reproducibility of Results; Staff Development
3.
Med Teach; 36(8): 685-91, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24601877

ABSTRACT

This article describes the problem of disorientation in students as they become doctors. Disorientation arises because students have a poor or inaccurate understanding of what they are training to become. If they do not know what they are becoming, it is hard for them to prioritise and contextualise their learning, to make sense of information about where they are now (assessment and feedback), or to determine the steps they need to take to develop (formative feedback and "feedforward"). It is also a barrier to the early development of professional identity. Using the analogy of a map, the paper describes the idea of a curriculum that is articulated as a developmental journey: a "roadmap curriculum". This is not incompatible with a competency-based curriculum, and it certainly requires the same integration of knowledge, skills and attitudes. However, the semantic essence of a roadmap curriculum is fundamentally different; it must describe the pathway or pathways of development toward being a doctor in ways that are both authentic to qualified doctors and meaningful to learners. Examples from within and outside medicine are cited. Potential advantages and implications of this kind of curricular reform are discussed.


Subject(s)
Confusion/prevention & control; Education, Medical; Learning; Students, Medical/psychology; Anxiety; Curriculum; Humans; Physician's Role; Teaching
4.
Med Educ; 42(4): 364-73, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18338989

ABSTRACT

OBJECTIVES: To evaluate the reliability and feasibility of assessing the performance of medical specialist registrars (SpRs) using three methods: the mini-clinical evaluation exercise (mini-CEX), directly observed procedural skills (DOPS) and multi-source feedback (MSF), to help inform annual decisions about the outcome of SpR training. METHODS: We conducted a feasibility study and generalisability analysis based on the application of these assessment methods and the resulting data. A total of 230 SpRs (from 17 specialties) in 58 UK hospitals took part from 2003 to 2004. Main outcome measures included the time taken for each assessment, variance component analysis of mean scores, and derivation of 95% confidence intervals for individual doctors' scores based on the standard error of measurement. Responses to direct questions on questionnaires were analysed, as were the themes emerging from open-comment responses. RESULTS: The methods can provide reliable scores with appropriate sampling. In our sample, all trainees who completed the number of assessments recommended by the Royal Colleges of Physicians had scores that were 95% certain to be better than unsatisfactory. The mean time taken to complete the mini-CEX (including feedback) was 25 minutes. The DOPS required the duration of the procedure being assessed plus an additional third of this time for feedback. The mean time required for each rater to complete his or her MSF form was 6 minutes. CONCLUSIONS: This is the first attempt to evaluate the use of comprehensive workplace assessment across the medical specialties in the UK. The methods are feasible to conduct and can make reliable distinctions between doctors' performances. With adaptation, they may be appropriate for assessing the workplace performance of other grades and specialties of doctor. This may be helpful in informing foundation assessment.
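
The claim that trainees' scores were "95% certain to be better than unsatisfactory" reduces to an interval check: the lower bound of mean ± 1.96 × SEM must clear the cut-off score. A minimal sketch follows; the 6-point scale, the cut-off of 4, and the error variance are assumptions for illustration, not study values.

```python
import math

# Sketch of the SEM-based confidence-interval logic the abstract
# describes. All numbers are hypothetical: a 6-point scale with
# "unsatisfactory" below 4 and a single-assessment error variance
# of 0.9 are assumed, not taken from the study.

def sem_of_mean(var_error: float, n_assessments: int) -> float:
    """SEM for a mean over n assessments, from one assessment's error variance."""
    return math.sqrt(var_error / n_assessments)

def clears_cutoff(mean_score: float, var_error: float,
                  n_assessments: int, cutoff: float) -> bool:
    """True if the 95% CI around the mean lies wholly above the cutoff."""
    return mean_score - 1.96 * sem_of_mean(var_error, n_assessments) > cutoff

for n in (6, 12):
    print(f"{n:2d} assessments: clears cut-off = "
          f"{clears_cutoff(4.6, 0.9, n, 4.0)}")
```

The same mean clears the cut-off at 12 assessments but not at 6, which is the sense in which "appropriate sampling" makes the scores reliable enough to support decisions.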


Subject(s)
Clinical Competence/standards; Employee Performance Appraisal/methods; Medical Staff, Hospital/standards; Medicine; Specialization; Analysis of Variance; Feasibility Studies; Feedback; United Kingdom; Workplace
5.
Med Educ; 38(8): 852-8, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15271046

ABSTRACT

AIM: To improve the quality of outpatient letters used as communication between hospital and primary care doctors. METHODS: On 2 separate occasions, 15 unselected outpatient letters written by each of 7 hospital practitioners were rated by another hospital doctor and a general practitioner (GP) using the Sheffield Assessment Instrument for Letters (SAIL). Individualised feedback was provided to participants following the rating of the first set of letters. The audit cycle was completed 3 months later, without forewarning, by a repeat assessment by the same hospital and GP assessors using the SAIL tool, to see whether the correspondence had improved. SETTING: Single centre: a general paediatric outpatient department in a large district general hospital. RESULTS: All 7 doctors available for reassessment completed the audit loop, each providing 15 outpatient letters per assessment. The mean quality score, derived for each letter from the summation of a 20-point checklist and a global score, improved from 23.3 (95% CI 22.1-24.4) to 26.6 (95% CI 25.8-27.4) (P = 0.001). CONCLUSIONS: The SAIL provides a feasible and reliable method of assessing the quality and content of outpatient clinic letters. This study demonstrates that it can also provide feedback with a powerful educational impact. This approach holds real potential for appraisal and revalidation, providing an effective means for the quality improvement required by clinical governance.
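
The before-and-after comparison is a paired design: the same 7 doctors, each with a mean letter score per audit round. A minimal sketch of that analysis is below; the per-doctor scores are fabricated so the group means match the reported 23.3 and 26.6, and the paired t-test is an assumed analysis choice rather than the paper's stated method.

```python
# Paired comparison of mean SAIL scores before and after feedback.
# The per-doctor scores are fabricated for illustration; only the
# group means (23.3 and 26.6) come from the abstract.
from scipy import stats

before = [22.0, 23.5, 24.1, 22.8, 23.9, 23.0, 23.8]  # 7 doctors, hypothetical
after  = [25.9, 26.8, 27.1, 26.3, 27.0, 26.2, 26.9]

t_stat, p_value = stats.ttest_rel(after, before)
print(f"mean before = {sum(before)/len(before):.1f}, "
      f"mean after = {sum(after)/len(after):.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```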


Subject(s)
Correspondence as Topic; Medical Records/standards; Referral and Consultation/standards; Communication; Family Practice/organization & administration; Humans; Medical Staff, Hospital/organization & administration; Quality Control