Results 1 - 6 of 6
1.
Med Teach ; 27(6): 514-20, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16199358

ABSTRACT

Sharing and collaboration relating to progress testing already take place at the national level, allowing for quality control and comparisons between the participating institutions. This study explores the possibilities of international sharing of the progress test after correction for cultural bias and translation problems. Three progress tests were reviewed and administered to 3043 Pretoria and 3001 Maastricht medical students. In total, 16% of the items were potentially biased and were removed from the test administered to the Pretoria students (9% due to translation problems; 7% due to cultural differences). Of the three clusters (basic, clinical and social sciences), the social sciences contained the most bias (32%) and the basic sciences the least (11%). The differences found when comparing the student results of the two schools seem to reflect the deliberate emphases that each curriculum pursues. The results suggest that the progress test methodology provides a versatile instrument that can be used to assess medical schools across the world. Sharing of test material is a viable strategy, and the test outcomes are informative and can be used in international quality control.


Subject(s)
Benchmarking , Educational Measurement/standards , International Cooperation , Education, Medical, Undergraduate/standards , Educational Measurement/methods , Humans , Netherlands , South Africa , Students, Medical
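The abstract above describes review-based removal of biased items but does not say how items were flagged. As a purely illustrative complement, a common statistical screen for cross-group item bias is the Mantel-Haenszel differential item functioning (DIF) statistic. A minimal sketch, assuming 0/1 item responses and total-score bands as strata, all stored in numpy arrays (function and variable names are hypothetical, not from the study):

```python
import numpy as np

def mh_odds_ratio(correct_ref, correct_foc, band_ref, band_foc):
    """Mantel-Haenszel common odds ratio for one item.
    correct_*: 0/1 responses to the item; band_*: total-score stratum
    of each examinee (reference vs focal group)."""
    num = den = 0.0
    for s in np.union1d(band_ref, band_foc):
        r = correct_ref[band_ref == s]            # reference group in stratum
        f = correct_foc[band_foc == s]            # focal group in stratum
        if len(r) == 0 or len(f) == 0:
            continue
        n = len(r) + len(f)
        a, b = r.sum(), len(r) - r.sum()          # ref correct / incorrect
        c, d = f.sum(), len(f) - f.sum()          # focal correct / incorrect
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else float("nan")
```

A common odds ratio far from 1 (equivalently, a large |ln OR|) would flag an item for the kind of expert review the study describes.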
2.
Neth J Med ; 63(7): 279-84, 2005.
Article in English | MEDLINE | ID: mdl-16093582

ABSTRACT

BACKGROUND: Global performance ratings are frequently used in clinical training despite their known psychometric drawbacks. Inter-rater reliability is low in undergraduate training but better in residency training, possibly because residency offers more opportunities for supervision. The low to moderate predictive validity of global performance ratings in undergraduate and residency training may be due to the low or unknown reliability of both the global performance ratings and the criterion measures. In an undergraduate clerkship, we investigated whether reliability improves when raters are more familiar with students' work and whether validity improves with increased reliability of the predictor and criterion instruments. METHODS: Inter-rater reliability was determined in a clerkship with more student-rater contacts than usual. The in-training assessment programme of the clerkship that immediately followed was used as the criterion measure to determine predictive validity. RESULTS: With four ratings, inter-rater reliability was 0.41 and predictive validity was 0.32. Reliability was lower, and validity slightly higher, than similar results published for residency training. CONCLUSION: Even with increased student-rater interaction, the reliability and validity of global performance ratings were too low to warrant their use as a stand-alone assessment format. However, combined with other assessment measures, global performance ratings may contribute to an improved integrated assessment.


Subject(s)
Clinical Clerkship , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Humans , Netherlands , Observer Variation , Reproducibility of Results , Students, Medical
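For intuition about the reported figure of 0.41 for four ratings: if the composite of k ratings is assumed to behave like a Spearman-Brown aggregate of exchangeable raters (an assumption, not something the abstract states), the implied single-rating reliability can be recovered. A minimal sketch:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k ratings, given single-rating reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

def single_from_composite(r_k: float, k: int) -> float:
    """Invert Spearman-Brown: single-rating reliability implied by the
    reliability of a k-rating composite."""
    return r_k / (k - (k - 1) * r_k)

r1 = single_from_composite(0.41, 4)                   # ~0.15 per single rating
print(round(r1, 3), round(spearman_brown(r1, 4), 3))  # 0.148, 0.41
```

Under this assumption, a single global rating carries a reliability of only about 0.15, which is why several raters are needed before such ratings become even moderately dependable.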
3.
Med Teach ; 27(2): 158-63, 2005 Mar.
Article in English | MEDLINE | ID: mdl-16019338

ABSTRACT

Assessment drives the educational behaviour of students and supervisors. An assessment programme targeted at specific competencies may therefore be expected to motivate supervisors and students to pay more attention to those competencies. In-training assessment (ITA) is regarded as a feasible method for assessing a broad range of competencies. Before and after the implementation of an ITA programme in an undergraduate internal medicine clerkship, we surveyed students on the frequency of observed and unobserved supervision and on the quality of feedback, as inferred from the seniority of the person providing it. After the implementation of the ITA programme, supervision increased, but the difference was not statistically significant. The quality of feedback showed no significant change either. Inter-student variation in supervision and feedback remained high after the implementation of the ITA programme. Whether these results are attributable to the way the programme was implemented or to the way the results were assessed remains to be clarified.


Subject(s)
Clinical Clerkship/organization & administration , Clinical Competence/statistics & numerical data , Education, Medical, Undergraduate/organization & administration , Internal Medicine/education , Program Evaluation , Educational Measurement , Feedback , Hospitals, University , Humans , Medical Staff, Hospital , Netherlands , Surveys and Questionnaires
4.
Med Educ ; 38(12): 1270-7, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15566538

ABSTRACT

INTRODUCTION: Structured assessment embedded in a training programme, with systematic observation, feedback and appropriate documentation, may improve the reliability of clinical assessment. This type of assessment format is referred to as in-training assessment (ITA). The feasibility and reliability of an ITA programme in an internal medicine clerkship were evaluated. The programme comprised 4 ward-based test formats and 1 outpatient clinic-based test format. Of the 4 ward-based test formats, 3 were single-sample tests: 1 student-patient encounter, 1 critical appraisal session and 1 case presentation. The remaining ward-based test and the outpatient-based test were multiple-sample tests, consisting of 12 ward-based case write-ups and 4 long cases in the outpatient clinic. In all, the ITA programme comprised 19 assessments. METHODS: Over a period of 41 months, data were collected from 119 clerks. Feasibility was defined as over two-thirds of the students obtaining all 19 assessments. Reliability was estimated by performing generalisability analyses, once with the 19 assessments as items and once with the 5 test formats as items. RESULTS: A total of 73 students (69%) completed all 19 assessments. Reliability, expressed by the generalisability coefficient, was 0.81 for the 19 assessments and 0.55 for the 5 test formats. CONCLUSIONS: The ITA programme proved to be feasible. Feasibility may be improved by scheduling protected assessment time for both students and staff. Reliability may be improved by more frequent use of some of the test formats.


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/standards , Clinical Clerkship/standards , Clinical Competence/standards , Curriculum , Data Collection , Denmark , Feasibility Studies , Humans , Inservice Training/methods , Reproducibility of Results
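For readers unfamiliar with the generalisability coefficients reported above: for a fully crossed students x assessments design with no missing data, the coefficient can be estimated from ANOVA mean squares. A minimal sketch (illustrative only; the study's actual design was likely unbalanced, and its analysis software is not named in the abstract):

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a fully crossed students x assessments
    design. scores: 2-D array, rows = students, columns = assessments."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    p_mean = scores.mean(axis=1)                  # per-student means
    i_mean = scores.mean(axis=0)                  # per-assessment means
    ms_p = n_i * ((p_mean - grand) ** 2).sum() / (n_p - 1)
    resid = scores - p_mean[:, None] - i_mean[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)       # student variance component
    return var_p / (var_p + ms_res / n_i)         # error averaged over items
```

The coefficient rises with the number of assessments because the error term is averaged over the number of observations per student, which is consistent with the reported gap between 0.81 for 19 assessments and 0.55 for 5 test formats.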
5.
Med Teach ; 26(4): 305-12, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15203842

ABSTRACT

Competences are becoming increasingly prominent in undergraduate medical education, and workplace learning is regarded as crucial for competence learning. Assuming that effective learning depends on adequate supervision, feedback and assessment, we studied the occurrence of these three variables in relation to a set of clinical competences. We surveyed students at the end of their rotation in surgery, internal medicine or paediatrics, asking them to indicate for each competence how often they had received observed and unobserved supervision, the seniority of the person who provided most of their feedback, and whether the competence was addressed in formal assessments. Supervision was found to be scarce and mostly unobserved. Senior staff did not provide much feedback, and assessment mostly targeted patient-related competences. For all variables, the variation between students exceeded that between disciplines. We conclude that the conditions for adequate workplace learning are poorly met and that clerkship experiences show large inter-student variation.


Subject(s)
Clinical Competence , Education, Medical/methods , Educational Measurement/methods , Feedback , Netherlands , Surveys and Questionnaires
6.
Article in English | MEDLINE | ID: mdl-12386445

ABSTRACT

Norm-referenced pass/fail decisions are quite common in achievement testing in health sciences education. The use of relative standards has the advantage of correcting for variations in test difficulty. However, relative standards also have some serious drawbacks, and an absolute, fixed standard is often preferred. The current study investigates the consequences of using an absolute instead of a relative standard. The performance of the standard-setting procedure that was developed was investigated using actual progress test scores obtained at the Maastricht medical school over a period of eight years. When the absolute instead of the relative standard was used, 6% of the decisions changed: 2.6% of the outcomes changed from fail to pass, and 3.4% from pass to fail. The failure rate, which was approximately constant under the relative standard, varied from 2% to 47% across tests when an absolute standard was used. It is concluded that an absolute standard is precarious because of variations in the difficulty of tests.
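The abstract does not specify which relative standard was used. As a purely illustrative assumption, the sketch below takes "cohort mean minus one standard deviation" as the relative cut and counts how pass/fail decisions change when a fixed absolute cut is applied instead (all names and values hypothetical):

```python
import numpy as np

def compare_standards(cohorts, absolute_cut):
    """Compare pass/fail decisions per test under a relative standard
    (assumed here: cohort mean minus one SD) and a fixed absolute cut."""
    fail_to_pass = pass_to_fail = total = 0
    for scores in cohorts:                            # one score array per test
        scores = np.asarray(scores, dtype=float)
        rel_cut = scores.mean() - scores.std(ddof=1)  # illustrative rule only
        rel_pass = scores >= rel_cut
        abs_pass = scores >= absolute_cut
        fail_to_pass += int((~rel_pass & abs_pass).sum())
        pass_to_fail += int((rel_pass & ~abs_pass).sum())
        total += scores.size
        print(f"absolute-standard failure rate: {(~abs_pass).mean():.1%}")
    print(f"fail->pass: {fail_to_pass / total:.1%}, "
          f"pass->fail: {pass_to_fail / total:.1%}")

# e.g. two cohorts facing tests of different difficulty, absolute cut of 55:
rng = np.random.default_rng(0)
compare_standards([rng.normal(60, 10, 500), rng.normal(52, 10, 500)], 55.0)
```

Run over cohorts of differing test difficulty, the absolute failure rate swings widely while the relative one stays roughly constant, which is the pattern the study reports (2% to 47% versus approximately constant).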
