1.
Med Teach; 38(7): 715-23, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26383184

ABSTRACT

BACKGROUND: Little is known about medical educators' self-definition. AIMS: The aim of this study was to survey an international community of medical educators, focusing on their self-definition. METHODS: Within a comprehensive, web-based survey, an open question on how participants would define a "medical educator" was sent to 2200 persons on the mailing list of the Association for Medical Education in Europe. The free-text definitions were analysed using qualitative thematic analysis. RESULTS: Of the 2200 medical educators invited to participate, 685 (31.1%) provided a definition of a "medical educator". The qualitative analysis of the free-text definitions revealed that medical educators defined themselves in 13 roles, primarily as "Professional Expert", "Facilitator", "Information Provider", "Enthusiast", "Faculty Developer", "Mentor", "Undergraduate and Postgraduate Trainer", "Curriculum Developer", "Assessor and Assessment Creator", and "Researcher". CONCLUSIONS: Our survey revealed that medical educators predominantly define themselves as "Professional Experts" and identified 12 further self-defined roles of a medical educator, several of which have not been reported previously. The results can be used to further the understanding of our professional identity.


Subject(s)
Faculty, Medical/psychology, Social Identification, Adult, Europe, Female, Humans, Male, Middle Aged, Surveys and Questionnaires
2.
Med Teach; 32(11): 912-8, 2010.
Article in English | MEDLINE | ID: mdl-21039102

ABSTRACT

BACKGROUND: Little is known about how medical educators perceive their own expertise, needs and challenges in relation to medical education. AIM: To survey an international community of medical educators with a focus on: (1) their expertise, (2) their need for training and (3) perceived challenges. METHODS: A web-based survey comprising closed and open free-text questions was sent to 2200 persons on the mailing list of the Association for Medical Education in Europe. RESULTS: Of the 2200 medical educators invited to participate, 860 (39%) from 76 different countries took part in the survey. Their reported areas of expertise mainly comprised principles of teaching, communication skills training, stimulation of students in self-directed learning and student assessment. Respondents most often indicated a need for training in medical education research methodology, computer-based training, curriculum evaluation and curriculum development. In the qualitative analysis of 1836 free-text responses concerning the main challenges faced, respondents referred to a lack of academic recognition, funding, faculty development, time for medical education issues and institutional support. CONCLUSIONS: The results of this survey indicate that medical educators face several challenges, with a particular need for more academic recognition, funding and academic qualifications in medical education.


Subject(s)
Data Collection, Faculty, Medical, Internationality, Internet, Needs Assessment, Professional Competence, Self Efficacy, Europe, Female, Humans, Male, Schools, Medical
3.
Med Teach; 27(3): 207-13, 2005 May.
Article in English | MEDLINE | ID: mdl-16011943

ABSTRACT

Increasing physician and patient mobility has led to a move toward internationalization of standards for physician competence. The Institute for International Medical Education (IIME) proposed a set of outcome-based standards for student performance, which were then measured using three assessment tools in eight leading schools in China: a 150-item multiple-choice examination, a 15-station OSCE and a 16-item faculty observation form. The purpose of this study was to empanel a group of experts to determine whether international student-level performance standards could be set. The IIME convened an international panel of experts in student education with specialty and geographic diversity. The group was split into two, with each sub-group establishing standards independently. After a discussion of the borderline student, the sub-groups established minimally acceptable cut-off scores for performance on the multiple-choice examination (Angoff and Hofstee methods), the OSCE station and global rating performance (modified Angoff method and holistic criterion reference), and faculty observation domains (holistic criterion reference). Panelists within each group set very similar standards for performance. In addition, the two independent parallel panels generated nearly identical performance standards. Cut-off scores changed little after panelists were shown pilot data, but standard deviations diminished. International experts agreed on a minimum set of competences for medical student performance. In addition, they were able to set consistent performance standards with multiple examination types. This provides an initial basis against which to compare physician performance internationally.
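The Angoff procedure referred to above can be illustrated with a short, hypothetical sketch: each judge estimates, for every multiple-choice item, the probability that a borderline (minimally competent) student would answer correctly, and the cut-off score is the average across judges of each judge's summed estimates. The numbers and function below are illustrative only, not the IIME panel's data or exact procedure.

```python
# A minimal sketch of an Angoff-style cut-score calculation (hypothetical
# data; not the IIME panel's actual ratings or procedure).
from statistics import mean

def angoff_cut_score(ratings: list[list[float]]) -> float:
    """ratings[j][i]: judge j's estimated probability that a borderline
    student answers item i correctly; the cut score is the mean across
    judges of each judge's summed estimates."""
    per_judge_totals = [sum(judge) for judge in ratings]
    return mean(per_judge_totals)

# Hypothetical panel of 3 judges rating a 5-item multiple-choice test.
ratings = [
    [0.6, 0.7, 0.5, 0.8, 0.4],
    [0.5, 0.7, 0.6, 0.9, 0.5],
    [0.6, 0.6, 0.5, 0.8, 0.4],
]
print(f"Cut-off score: {angoff_cut_score(ratings):.1f} of 5 items")
```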


Subject(s)
Clinical Competence/standards, Education, Medical, Graduate/standards, Educational Measurement/methods, Internationality, Physicians/standards, Humans, Pilot Projects
4.
Med Teach; 26(1): 63-70, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14744697

ABSTRACT

In the UK, new medical graduates are known as Pre-registration House Officers (PRHOs). Postgraduate Deans are responsible for the PRHO year and for the final certification of PRHOs to allow them to be fully registered by the General Medical Council (GMC) as medical practitioners. However, although appraisal of professional growth is central to PRHO training, a robust assessment mechanism is also needed to detect, at an early stage, individuals with significant clinical and professional deficiencies. Documented, reliable and valid ongoing information on PRHO performance will provide a basis for early intervention and will establish 'hard', observable evidence for certification decisions. Thus, an approach that links appraisal/assessment of professionalism and clinical skills to education is the way forward. This paper describes a new approach to appraisal/assessment of PRHOs, which is currently being piloted in a number of regions in Scotland. The conceptual paradigm was developed over the last three years as a proposal for a Scottish national PRHO reform. 'Grounded' qualitative studies were employed to explore trainees' and trainers' perceptions of the expected competences (outcomes) of PRHO performance for appraisal/assessment purposes. The GMC recommendations are reviewed in light of the study results. An assessment model emerged that links appraisal to education: PRHOs' cumulative performance is documented over one year of training, resulting in diagnostic profiles that provide guidance for their evaluation and training, and poor performers are flagged in the early stages of training, allowing early intervention. The feasibility and acceptance of the model by educators, the health system and PRHOs have yet to be established.
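To make the "diagnostic profile with early flagging" idea concrete, the sketch below accumulates appraisal ratings per competence outcome and flags any outcome whose running average drops below a minimum acceptable level. The rating scale, threshold and outcome names are assumptions chosen for illustration; they are not the Scottish pilot's actual instrument.

```python
# A minimal sketch of the cumulative-profile idea (hypothetical rating scale,
# threshold and outcome names; not the Scottish pilot's actual instrument).
from collections import defaultdict
from statistics import mean

FLAG_THRESHOLD = 2.5  # assumed minimum acceptable mean rating on a 1-5 scale

def diagnostic_profile(entries: list[tuple[str, int]]) -> dict[str, float]:
    """entries: (competence outcome, rating) pairs accumulated over the year."""
    by_outcome: dict[str, list[int]] = defaultdict(list)
    for outcome, rating in entries:
        by_outcome[outcome].append(rating)
    return {outcome: mean(ratings) for outcome, ratings in by_outcome.items()}

def flag_for_early_intervention(profile: dict[str, float]) -> list[str]:
    """Outcomes whose running average falls below the acceptable threshold."""
    return [outcome for outcome, avg in profile.items() if avg < FLAG_THRESHOLD]

# Hypothetical appraisal entries from a PRHO's first rotation.
entries = [("clinical skills", 2), ("clinical skills", 2),
           ("professionalism", 4), ("communication", 3)]
profile = diagnostic_profile(entries)
print(profile)
print(flag_for_early_intervention(profile))  # ['clinical skills']
```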


Subject(s)
Education, Medical, Physicians, Professional Competence, Humans, United Kingdom
6.
Article in English | MEDLINE | ID: mdl-12386446

ABSTRACT

PURPOSE: Post-encounter written exercises (e.g., patient notes) have been included in clinical skills assessments that use standardized patients. The purpose of this study was to estimate the generalizability of the scores from these written exercises when they are rated by various trained health professionals, including physicians. METHOD: The patient notes from a 10-station clinical skills examination involving 10 first-year emergency medicine residents were analytically scored by four rater groups: three physicians, three nurses, three fourth-year medical students and three billing clerks. Generalizability analyses were used to partition the various sources of error variance and derive reliability-like coefficients for each group of raters. RESULTS: The generalizability analyses indicated that case-to-case variability was a major source of error variance in the patient note scores. The variance attributable to the rater, or to the rater-by-examinee interaction, was negligible. This finding was consistent across the four rater groups. Generalizability coefficients in excess of 0.80 were achieved for each of the four sets of raters. Physicians did, however, produce the most dependable scores. CONCLUSION: From a reliability perspective, there is little advantage in using more than one trained physician, or other adequately trained health professional, to score the patient note. Measurement error is introduced primarily by case-sampling variability. This suggests that, if required, increases in the generalizability of the patient note scores can be made through the addition of cases rather than raters.
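The conclusion follows from the generalizability (G) coefficient for a person x case x rater design: when person-by-case variance dominates the error term and rater-related variance is negligible, averaging over more cases shrinks the error far more than averaging over more raters. The decision (D) study sketch below uses hypothetical variance components that merely mimic that pattern; they are not the study's estimates.

```python
# A minimal D-study sketch with hypothetical variance components (not the
# study's estimates), chosen only to mirror the reported pattern: large
# person-by-case variance, negligible rater-related variance.

def g_coefficient(var_p, var_pc, var_pr, var_res, n_cases, n_raters):
    """Relative generalizability coefficient for a fully crossed
    person x case x rater design with n_cases cases and n_raters raters."""
    error = (var_pc / n_cases
             + var_pr / n_raters
             + var_res / (n_cases * n_raters))
    return var_p / (var_p + error)

# Assumed components: person, person x case, person x rater, residual.
var_p, var_pc, var_pr, var_res = 1.0, 4.0, 0.05, 0.5

for n_cases, n_raters in [(10, 1), (10, 3), (15, 1), (20, 1)]:
    g = g_coefficient(var_p, var_pc, var_pr, var_res, n_cases, n_raters)
    print(f"{n_cases} cases, {n_raters} rater(s): G = {g:.2f}")
```

Under these assumed components, going from 10 to 20 cases raises G more than tripling the number of raters does, which is the pattern the abstract reports.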
