1.
Acad Med ; 92(4): 441-443, 2017 04.
Article in English | MEDLINE | ID: mdl-28225463

ABSTRACT

The unprecedented demands of patient and population priorities created by globalization and escalating health and social inequities will not be met unless medical education changes. Educators have failed to move fast enough to create an education framework that meets current population needs. A new common set of professional values around global social accountability is necessary. Education borders must be broken down at three levels: societal-institutional, interpersonal, and individual. At a societal-institutional level, global health must be embraced as part of a philosophy of population needs, human rights, equity, and justice. A move is needed from the informative acquisition of knowledge and skills to formative learning, in which students socialize around values, develop leadership attributes, and become agents for change. At an interpersonal level, radical changes in curriculum delivery are required, moving away from the well-defined borders of specialty rotations. Students must develop an integrated understanding of the future of health care and of the patient's journey through health care delivery, within the context of population needs. At an individual level, doctors need to understand the boundaries of the professional values they hold and develop a deeper understanding of their own internal prejudices and conflicts. Opening the borders between the sciences and humanities is essential. Fostering and mentoring that emphasize resilience, leadership, flexibility, and the ability to cope with uncertainty are needed to tackle the complexities of current, as well as future, health care. Doctors need to understand the restraints within themselves if they are to work effectively without borders.


Subject(s)
Education, Medical , Internationality , Leadership , Professional Competence , Social Responsibility , Social Values , Global Health , Humans
4.
BMC Med Educ ; 12: 20, 2012 Apr 17.
Article in English | MEDLINE | ID: mdl-22510502

ABSTRACT

BACKGROUND: An assessment programme, a purposeful mix of assessment activities, is necessary to achieve a complete picture of assessee competence. High-quality assessment programmes exist; however, the design requirements for such programmes remain unclear. We developed design guidelines based on a previously developed framework that identified the areas to be covered, adopting a fitness-for-purpose approach to defining quality in order to develop and validate the guidelines. METHODS: First, ideas were generated in a brainstorming session, followed by structured interviews with 9 international assessment experts. The guidelines were then fine-tuned through analysis of the interviews. Finally, validation was based on expert consensus via member checking. RESULTS: In total, 72 guidelines were developed; the most salient are discussed in this paper. The guidelines are related and grouped per layer of the framework. Some guidelines are so generic that they apply to any design consideration: the principle of proportionality, the requirement that a rationale underpin each decision, and the requirement of expertise. Logically, many guidelines focus on practical aspects of assessment. Some guidelines were found to be clear and concrete; others were less straightforward and are phrased more as issues for contemplation. CONCLUSIONS: The set of guidelines is comprehensive and not bound to a specific context or educational approach. Following the fitness-for-purpose principle, the guidelines are eclectic and require expert judgement to apply them appropriately in different contexts. Further validation studies to test their practicality are required.


Subject(s)
Educational Measurement/standards , Guidelines as Topic/standards , Humans , Program Development , Program Evaluation/standards , Reproducibility of Results
5.
Med Educ ; 43(1): 74-81, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19141000

ABSTRACT

OBJECTIVES: This study represents an initial evaluation of the first year (F1) of the Foundation Assessment Programme (FAP), in line with Postgraduate Medical Education and Training Board (PMETB) assessment principles. METHODS: Descriptive analyses were undertaken for the total numbers of encounters, assessors and trainees, the mean number of assessments per trainee and per assessor, the time taken for the assessments, and the mean score and standard deviation for each method. Reliability was estimated using generalisability coefficients. Pearson correlations were used to explore relationships between instruments. The study sample included 3640 F1 trainees from 10 English deaneries. RESULTS: A total of 2929 trainees submitted at least one of all four methods. A mean of 16.6 case-focused assessments was submitted per F1 trainee. Based on a return per trainee of six of each of the case-focused assessments, and eight assessors for multi-source feedback, 95% confidence intervals (CIs) ranged between 0.4 and 0.48. The estimated time required for this is 9 hours per trainee per year. Scores increased over time for all instruments, and correlations between methods were in keeping with their intended focus of assessment, providing evidence of validity. CONCLUSIONS: The FAP is feasible and achieves acceptable reliability, and there is some evidence to support its validity. Collated assessment data should form part of the evidence considered for selection and career progression decisions, although further work is needed to develop the FAP. It is, in any case, of critical importance for the profession's accountability to the public.
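
For readers who want to see how a generalisability coefficient of the kind reported above can be estimated, the following Python sketch computes a one-facet coefficient from a trainee-by-assessment score matrix. It is a minimal illustration on simulated data under a fully crossed design assumption, not the study's analysis code.

```python
# Minimal sketch (not the study's code): one-facet generalisability
# coefficient, E(rho^2) = var_p / (var_p + var_res / n), estimated from
# ANOVA mean squares of a fully crossed trainee-by-assessment matrix.
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """scores: (p trainees, n assessments), one observation per cell."""
    p, n = scores.shape
    grand = scores.mean()
    ms_p = n * ((scores.mean(axis=1) - grand) ** 2).sum() / (p - 1)
    ms_a = p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n - 1)
    ss_res = ((scores - grand) ** 2).sum() - ms_p * (p - 1) - ms_a * (n - 1)
    ms_res = ss_res / ((p - 1) * (n - 1))
    var_p = max((ms_p - ms_res) / n, 0.0)   # trainee variance component
    return var_p / (var_p + ms_res / n)     # reliability for n assessments

rng = np.random.default_rng(0)
ability = rng.normal(4.4, 0.4, size=(50, 1))         # simulated trainee effects
demo = ability + rng.normal(0.0, 0.5, size=(50, 6))  # 6 assessments each
print(f"G coefficient: {g_coefficient(demo):.2f}")
```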


Subject(s)
Clinical Competence/standards , Education, Medical, Graduate/methods , Educational Measurement/methods , Education, Medical, Graduate/standards , England , Humans , Quality Control
6.
Med Educ ; 42(10): 1014-20, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18823521

ABSTRACT

CONTEXT: The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. METHODS: An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. RESULTS: All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. The Pearson correlation of MSF scores with OSPE performance was 0.48 (P = 0.001), and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = -2.76, d.f. = 89, P < 0.01). Trainees scored least highly on the ability to use histopathology to solve clinical problems (mean = 4.39) and the provision of good reports (mean = 4.39). Three of the six doctors whose mean scores were < 4.0 received free-text comments about report writing. There were 83 forms with aggregate scores of < 4, of which 19.2% included comments about report writing. CONCLUSIONS: Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context: in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
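
The comparison of the two factor-OSPE correlations reported above (t = -2.76, d.f. = 89, i.e. n - 3 for n = 92) is a test of dependent correlations that share a variable; Williams' test is one standard way to carry it out. The sketch below is illustrative rather than the authors' code: the correlation between the two MSF factors (r23) is not reported in the abstract, so the value used is an assumed placeholder, and the sign of t depends on which correlation is subtracted.

```python
# Hedged sketch: Williams' t-test for two dependent correlations sharing a
# variable, e.g. r(histopathology factor, OSPE) vs r(generic factor, OSPE).
from math import sqrt
from scipy import stats

def williams_t(r12: float, r13: float, r23: float, n: int):
    """Test H0: rho12 == rho13 for correlations measured in one sample of
    size n, where variables 2 and 3 are both correlated with variable 1."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    t = (r12 - r13) * sqrt(
        ((n - 1) * (1 + r23))
        / (2 * ((n - 1) / (n - 3)) * det + rbar**2 * (1 - r23) ** 3)
    )
    df = n - 3
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# r23 = 0.6 is an assumed placeholder; it is not given in the abstract.
t, df, p = williams_t(r12=0.54, r13=0.42, r23=0.6, n=92)
print(f"t = {t:.2f}, df = {df}, p = {p:.3f}")
```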


Subject(s)
Clinical Competence/standards , Education, Medical, Graduate/standards , Feedback , Pathology/education , Educational Measurement/methods , Feasibility Studies , Female , Humans , Male , Statistics as Topic , United Kingdom
7.
Adv Health Sci Educ Theory Pract ; 13(2): 181-92, 2008 May.
Article in English | MEDLINE | ID: mdl-17036157

ABSTRACT

PURPOSE: To design, implement and evaluate a multi-source feedback instrument to assess Foundation trainees across the UK. METHODS: mini-PAT (Peer Assessment Tool) was modified from SPRAT (Sheffield Peer Review Assessment Tool), an established multi-source feedback (360-degree) instrument for assessing more senior doctors, as part of a blueprinting exercise of instruments suitable for assessment in Foundation programmes (the first 2 years after graduation). mini-PAT's content validity was assured by a mapping exercise against the Foundation Curriculum. Trainees' clinical performance was then assessed using 16 questions rated against a six-point scale on two occasions in the pilot period. Responses were analysed to determine internal structure, potential sources of bias and measurement characteristics. RESULTS: Six hundred and ninety-three mini-PAT assessments were undertaken for 553 trainees across 12 deaneries in England, Wales and Northern Ireland; 219 trainees were F1s or PRHOs and 334 were F2s. Trainees identified 5544 assessors, of whom 67% responded. The mean score for F2 trainees was 4.61 (SD = 0.43) and for F1s was 4.44 (SD = 0.56); an independent t-test showed that the mean scores of these two groups differed significantly (t = -4.59, df = 390, p < 0.001). 43 F1s (19.6%) and 19 F2s (5.6%) were assessed as being below the expectations for F2 completion. Factor analysis produced two main factors, one concerning clinical performance and the other humanistic qualities. Seventy-four per cent of F2 trainees could have been assessed by as few as 8 assessors (95% CI ±0.6), as they scored an overall mean of either 4.4 or above or 3.6 or below; 53% of F1 trainees could have been assessed by as few as 8 assessors (95% CI ±0.5), as they scored an overall mean of either 4.5 or above or 3.5 or below. Hierarchical regression, controlling for the year of the Foundation Programme, showed that bias related to the length of the working relationship, the occupation of the assessor and the working environment explained 7% of the variation in mean scores (R-squared change = 0.06, F change = 8.5, significance of F change < 0.001). CONCLUSIONS: As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help assess Foundation trainees reliably.
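
The hierarchical regression described above, which asks how much explained variance assessor-related variables add over trainee grade, can be sketched as an R-squared-change test between nested models. The Python sketch below uses simulated placeholder data and assumed column names; it is not the study's analysis code.

```python
# Illustrative sketch (not the study's code): R-squared change between nested
# OLS models, mirroring the hierarchical regression described above.
# All column names and data are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "mean_score": rng.normal(4.5, 0.5, n),
    "grade": rng.choice(["F1", "F2"], n),           # step 1: trainee grade
    "relationship_months": rng.integers(1, 25, n),  # step 2: assessor factors
    "assessor_role": rng.choice(["doctor", "nurse", "ahp"], n),
    "setting": rng.choice(["ward", "theatre", "clinic"], n),
})

m1 = smf.ols("mean_score ~ grade", data=df).fit()
m2 = smf.ols("mean_score ~ grade + relationship_months"
             " + assessor_role + setting", data=df).fit()

r2_change = m2.rsquared - m1.rsquared
df_num = m2.df_model - m1.df_model                    # added predictors
f_change = (r2_change / df_num) / ((1 - m2.rsquared) / m2.df_resid)
p = stats.f.sf(f_change, df_num, m2.df_resid)
print(f"R2 change = {r2_change:.3f}, F change = {f_change:.2f}, p = {p:.3f}")
```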


Subject(s)
Clinical Competence , Educational Measurement/methods , Peer Group , Surveys and Questionnaires , Communication , Humans , Interprofessional Relations , Physician-Patient Relations , Reproducibility of Results , United Kingdom
8.
Stud Health Technol Inform ; 124: 719-24, 2006.
Article in English | MEDLINE | ID: mdl-17108600

ABSTRACT

BACKGROUND: Medical Subject Headings (MeSH) form a hierarchical taxonomy of over 42,000 descriptors designed to classify the scientific literature, ranging from generic high-order headings to specific low-order headings. Over 1,000 resources in the Primary Care Electronic Library (PCEL - www.pcel.info) were classified with MeSH. METHODS: Each entry or resource in the primary care digital library was assigned up to five MeSH terms. We compared whether the most generic or the most specific MeSH term ascribed to each resource best predicted user preferences. RESULTS: Over the four-month period analysed, statistically significant differences in use were found between resources according to the specific key MeSH terms by which they were classified. This result was not repeated for generic key MeSH terms. CONCLUSIONS: Analysis of the use of specific MeSH terms reveals user preferences that would otherwise have remained obscured. These preferences are not found when more generic MeSH terms are analysed.
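
The analysis described above amounts to asking whether access counts differ across resources when grouped by their most specific, as opposed to their most generic, MeSH term. The following minimal sketch shows one way such a comparison could look, using a goodness-of-fit test against a uniform split; the abstract does not name the test used, and the terms and counts below are invented placeholders (the PCEL usage data are not reproduced here).

```python
# Hedged sketch: compare access counts grouped by generic vs specific MeSH
# term. Terms, resources and counts are invented placeholders.
from collections import defaultdict
from scipy.stats import chisquare

# (resource, most generic MeSH term, most specific MeSH term, access count)
usage = [
    ("res1", "Cardiovascular Diseases", "Myocardial Infarction", 120),
    ("res2", "Cardiovascular Diseases", "Atrial Fibrillation", 45),
    ("res3", "Respiratory Tract Diseases", "Asthma", 210),
    ("res4", "Respiratory Tract Diseases", "Pulmonary Emphysema", 30),
]

def totals_by(level: int) -> dict:
    """Aggregate access counts by the MeSH term at the given tuple index."""
    totals = defaultdict(int)
    for row in usage:
        totals[row[level]] += row[3]
    return totals

for label, level in (("generic", 1), ("specific", 2)):
    observed = list(totals_by(level).values())
    chi2, p = chisquare(observed)  # goodness of fit vs a uniform expectation
    print(f"{label}: chi2 = {chi2:.1f}, p = {p:.4g}")
```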


Subject(s)
Consumer Behavior , Medical Informatics , Medical Subject Headings/statistics & numerical data , England , Health Personnel , Humans , Primary Health Care
9.
Ann R Coll Surg Engl ; 87(4): 242-7, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16053681

ABSTRACT

INTRODUCTION: The objectives were to: (i) establish how 'typical' consultant surgeons perform on 'generic' (non-specialist) surgical simulations before their use in the General Medical Council's Performance Procedures (PPs); (ii) measure any differences in performance between specialties; and (iii) compare the performance of a group of surgeons in the PPs with the 'typical' group. VOLUNTEERS AND METHODS: Seventy-four consultant volunteers in gastrointestinal surgery (n=21), vascular surgery (n=11), urology (n=10), orthopaedics (n=15), cardiothoracic surgery (n=10) and plastic surgery (n=7), together with 9 surgeons in phase 2 of the PPs, completed 7 simple simulations in the skills laboratory. The volunteers' scores were analysed by simulation and specialty using ANOVA and then compared with the scores of the surgeons in the PPs. RESULTS: There were significant differences between simulations, but most volunteers achieved scores of 75-100%. There was a significant simulation-by-specialty interaction, indicating that the scores of some specialties differed on some simulations. The scores of the group of surgeons in the PPs were significantly lower than those of the reference group for most simulations. CONCLUSIONS: Simple simulations can be used to assess the basic technical skills of consultant surgeons. The simulation-by-specialty interaction suggests that while some skills may be generic, others are not. The lower scores of the surgeons in the PPs suggest that these tests possess criterion validity, i.e. they may help to determine when poor performance is due to a lack of technical competence.
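
The simulation-by-specialty interaction reported above comes from a two-way ANOVA of simulation scores. The following sketch shows the shape of such an analysis on simulated placeholder data; it is not the study's code, and the balanced cell sizes are an assumption (the actual specialty groups ranged from 7 to 21 surgeons).

```python
# Illustrative sketch (not the study's code): two-way ANOVA testing main
# effects of simulation and specialty plus their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
specialties = ["GI", "vascular", "urology",
               "orthopaedics", "cardiothoracic", "plastic"]
rows = []
for spec in specialties:
    for sim in range(1, 8):        # 7 simulations, as in the study
        for _ in range(10):        # placeholder: 10 volunteers per cell
            rows.append({"specialty": spec,
                         "simulation": f"sim{sim}",
                         "score": float(np.clip(rng.normal(85, 8), 0, 100))})
df = pd.DataFrame(rows)

model = smf.ols("score ~ C(specialty) * C(simulation)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term
```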


Subject(s)
Educational Measurement/methods , Specialties, Surgical/standards , Adult , Analysis of Variance , Clinical Competence/standards , Female , Humans , Male , Middle Aged , United Kingdom
10.
Med Educ ; 36(10): 936-41, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12390461

ABSTRACT

The assessment of doctors' performance in practice is becoming more widely accepted. While there are many potential purposes for such assessments, their consequences will sometimes be 'high stakes'. In these circumstances, any of the many elements of the assessment programme may be challenged, so these programmes need to be robust, fair and defensible from the perspectives of the consumer, the assessee and the assessor. To inform the design of defensible programmes for assessing practice performance, a group of education researchers at the 10th Cambridge Conference adopted a project management approach to designing practice performance assessment programmes. This paper describes issues to consider in articulating the purposes and outcomes of the assessment, planning the programme, and managing the administrative processes involved, including communication with and preparation of assessees. Examples of key questions to be answered are provided, but further work is needed to test validity.


Subject(s)
Clinical Competence/standards , Education, Medical/standards , Physicians, Family/standards , Cost-Benefit Analysis/methods , Humans , Quality of Health Care , Reproducibility of Results