Results 1 - 9 of 9
1.
Med Teach ; 27(6): 514-20, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16199358

ABSTRACT

Sharing and collaboration relating to progress testing already take place at a national level and allow for quality control and comparisons of the participating institutions. This study explores the possibilities of international sharing of the progress test after correction for cultural bias and translation problems. Three progress tests were reviewed and administered to 3043 Pretoria and 3001 Maastricht medical students. In total, 16% of the items were potentially biased and were removed from the test items administered to the Pretoria students (9% due to translation problems; 7% due to cultural differences). Of the three clusters (basic, clinical and social sciences), the social sciences contained the most bias (32%) and the basic sciences the least (11%). The differences found when comparing the student results of the two schools seem to reflect the deliberate accentuations that each curriculum pursues. The results suggest that the progress test methodology provides a versatile instrument that can be used to assess medical schools across the world. Sharing of test material is a viable strategy, and test outcomes are interesting and can be used in international quality control.


Subjects
Benchmarking , Educational Measurement/standards , International Cooperation , Undergraduate Medical Education/standards , Educational Measurement/methods , Humans , Netherlands , South Africa , Medical Students
2.
Med Educ ; 36(9): 860-7, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12354249

ABSTRACT

INTRODUCTION: An earlier study showed that an Angoff procedure with 10 or more recently graduated students as judges can be used to estimate the passing score of a progress test. As the acceptability and feasibility of this approach are questionable, we conducted an Angoff procedure with test item writers as judges. This paper reports on the reliability and credibility of this procedure and compares the standards set by the two different panels. METHODS: Fourteen item writers judged 146 test items. Recently graduated students had assessed these items in a previous study. Generalizability was investigated as a function of the number of items and judges. Credibility was judged by comparing the pass/fail rates associated with the Angoff standard, a relative standard and a fixed standard. The Angoff standards obtained by item writers and graduates were compared. RESULTS: The variance associated with consistent variability of item writers across items was 1.5%; for graduate students it was 0.4%. An acceptable error score required 39 judges. Item-Angoff estimates of the two panels and item P-values correlated highly. Failure rates of 57%, 55% and 7% were associated with the item writers' standard, the fixed standard and the graduates' standard, respectively. CONCLUSION: The graduates' and the item writers' standards differed substantially, as did the associated failure rates. A panel of 39 item writers is not feasible. The item writers' passing score appears to be less credible. The credibility of the graduates' standard needs further evaluation. The acceptability and feasibility of a panel consisting of both students and item writers may be worth investigating.


Subjects
Undergraduate Medical Education/standards , Educational Measurement/standards , Peer Review/standards , Curriculum , Humans , Reproducibility of Results
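The Angoff aggregation described in this and the companion study can be sketched as follows. The ratings and panel sizes below are hypothetical illustrations, not the study's data: each judge estimates, per item, the probability that a borderline examinee answers correctly, and the cut score is the mean of those estimates.

```python
def angoff_cut_score(ratings):
    """ratings[j][i] = judge j's probability estimate that a borderline
    examinee answers item i correctly. Returns the cut score as a
    fraction of the maximum test score."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    # Average over judges for each item, then average over items.
    item_means = [sum(r[i] for r in ratings) / n_judges
                  for i in range(n_items)]
    return sum(item_means) / n_items

# Two hypothetical judges rating three items:
ratings = [[0.6, 0.8, 0.5],
           [0.7, 0.9, 0.4]]
print(round(angoff_cut_score(ratings), 3))  # → 0.65
```

Comparing panels, as the study does, amounts to running this aggregation once per panel and contrasting the resulting cut scores and the pass/fail rates they imply.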
3.
Med Educ ; 36(8): 711-7, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12191053

ABSTRACT

BACKGROUND: Knowledge is an essential component of medical competence and a major objective of medical education. Thus, the degree of acquisition of knowledge by students is one of the measures of the effectiveness of a medical curriculum. We studied the growth in student knowledge over the course of Maastricht Medical School's 6-year problem-based curriculum. METHODS: We analysed 60 491 progress test (PT) scores of 3226 undergraduate students at Maastricht Medical School. During the 6-year curriculum a student sits 24 PTs (i.e. four PTs in each year), intended to assess knowledge at graduation level. On each test occasion all students are given the same PT, which means that in year 1 a student is expected to score considerably lower than in year 6. The PT is therefore a longitudinal, objective assessment instrument. Mean scores for overall knowledge and for clinical, basic, and behavioural/social sciences knowledge were calculated and used to estimate growth curves. FINDINGS: Overall medical knowledge and clinical sciences knowledge demonstrated a steady upward growth curve. However, the curves for behavioural/social sciences and basic sciences started to level off in years 4 and 5, respectively. The increase in knowledge was greatest for clinical sciences (43%), whereas it was 32% and 25% for basic and behavioural/social sciences, respectively. INTERPRETATION: Maastricht Medical School claims to offer a problem-based, student-centred, horizontally and vertically integrated curriculum in the first 4 years, followed by clerkships in years 5 and 6. Students learn by analysing patient problems and exploring pathophysiological explanations. Originally, it was intended that students' knowledge of behavioural/social sciences would continue to increase during their clerkships. However, the results for years 5 and 6 show diminishing growth in basic and behavioural/social sciences knowledge compared to overall and clinical sciences knowledge, which appears to suggest there are discrepancies between the actual and the planned curricula. Further research is needed to explain this.


Subjects
Clinical Competence/standards , Curriculum , Undergraduate Medical Education/standards , Educational Measurement , Humans , Netherlands , Problem-Based Learning/methods
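The longitudinal design above tracks each cluster's mean score across test occasions and reads growth off the resulting curve. A minimal sketch with hypothetical mean scores (not the study's data):

```python
def growth(mean_scores):
    """mean_scores: cohort mean percentage-correct per test occasion,
    in chronological order. Growth is the gain in percentage points
    from the first to the last occasion."""
    return mean_scores[-1] - mean_scores[0]

# Hypothetical mean scores for clinical-sciences items over six occasions;
# the 43-point gain mirrors the magnitude reported for that cluster.
clinical = [12, 20, 29, 38, 47, 55]
print(growth(clinical))  # → 43
```

Levelling-off, as seen for the basic and behavioural/social sciences curves, would show up as successive differences in such a series shrinking toward zero in the later occasions.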
4.
Med Educ ; 36(2): 148-53, 2002 Feb.
Article in English | MEDLINE | ID: mdl-11869442

ABSTRACT

AIM: This study was conducted to investigate the value of a written knowledge test of communication skills for predicting scores on a performance test of communication skills. METHOD: A paper-and-pencil test of knowledge about communication skills and a performance test of communication skills, consisting of four stations with standardised patients, were administered to students of two classes of the medical schools of Maastricht and Leiden, the Netherlands. The results on these tests were compared. RESULTS: From the results of both instruments, the classes of the participating students could be recognised equally well: 60% correct classifications of the classes by the knowledge test and 64% by the multiple-station examination. Between the two tests an overall, disattenuated correlation of 0.60 was found (N=133, P < 0.01), suggesting moderate predictive value of the knowledge test for the performance test of communication skills. The correlation is stronger for students from Maastricht medical school than for their colleagues in Leiden. The correlation between the communication skills knowledge test and other available test results of the participating Maastricht students is close to zero, suggesting that the test measures a distinct quality of students' competence. DISCUSSION: The paper-and-pencil test of knowledge of communication skills has predictive value for the performance of these skills, but this value seems to be less pronounced than similar findings for clinical procedural skills. The stronger relationship between 'knowing how' and 'showing' in the Maastricht student group might be indicative of an effect of the training format.


Subjects
Communication , Undergraduate Medical Education/methods , Educational Measurement/methods , Clinical Competence/standards , Curriculum , Humans , Netherlands , Reproducibility of Results
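The "disattenuated correlation" reported above is the standard correction for attenuation: the observed correlation is divided by the square root of the product of the two tests' reliabilities. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
import math

def disattenuated_r(r_xy, rel_x, rel_y):
    """Correct an observed correlation r_xy for measurement
    unreliability: r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = 0.45 with both reliabilities at 0.75
# yields a disattenuated correlation of 0.60, the value the study
# reports for its own (unstated) inputs.
print(round(disattenuated_r(0.45, 0.75, 0.75), 2))  # → 0.6
```

The correction estimates what the correlation would be if both instruments measured without error, which is why a disattenuated value is always at least as large as the observed one.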
5.
Med Teach ; 22(6): 592-600, 2000.
Article in English | MEDLINE | ID: mdl-21275695

ABSTRACT

This article reviews consistent research findings concerning the assessment of clinical competence during the clerkship phase of the undergraduate medical training programme, addressing issues of reliability, validity, effects on the training programme and learning behaviour, acceptability and costs. Subsequently, research findings on the clinical clerkship as a learning environment are discussed, demonstrating that the clinical attachment provides a rather unstructured educational framework. Five fundamental questions (why, what, when, how, who) are addressed to generate general suggestions for improving assessment on the basis of the evidence on assessment and clinical training. Good assessment requires a thoughtful compromise between what is achievable and what is ideal. It is argued that educational effects are eminently important in this compromise, particularly in the unstructured clinical setting. Maximizing educational effects can be achieved in combination with improvements to other measurement qualities of the assessment. Two concrete examples are provided to illustrate the recommended assessment strategies.

6.
Med Educ ; 33(11): 832-7, 1999 Nov.
Article in English | MEDLINE | ID: mdl-10583792

ABSTRACT

INTRODUCTION: Progress testing is an assessment method that samples the complete domain of knowledge that is considered pertinent to undergraduate medical education. Because of the comprehensive nature of this test, it is very difficult to set a passing score. We obtained a progress test standard using an Angoff procedure with recent graduates as judges. This paper reports on the reliability and credibility of this approach. METHODS: The Angoff procedure was applied to a sample of 146 progress test items. The items were judged by a panel of eight recently graduated students. Generalizability theory was used to investigate the reliability as a function of the number of items and judges. Credibility was judged by comparing the pass/fail rates resulting from the standard arrived at by the Angoff procedure with those obtained using a relative and a fixed standard. RESULTS: The results indicate that an acceptable error score can be achieved, yielding a precision within one percentage point on the scoring scale, by using 10 judges on a full-length progress test (i.e. 250 items). The pass/fail rates associated with the Angoff standard came closest to those of the relative standard, which takes variations in test difficulty into account. A high correlation was found between item-Angoff estimates and the item P-values. CONCLUSION: The results of this study suggest that the Angoff procedure, using recently graduated students as judges, is an appropriate standard setting method for a progress test.


Subjects
Undergraduate Medical Education/methods , Educational Measurement/methods , Problem-Based Learning , Humans , Sensitivity and Specificity
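The generalizability analysis above asks how the error of the panel's cut score shrinks as judges and items are added. A rough sketch, assuming a judges-crossed-with-items design and purely illustrative variance components (the study's actual components are not given here):

```python
import math

def cut_score_se(var_judge, var_residual, n_judges, n_items):
    """Approximate standard error of an Angoff cut score when judges
    are crossed with items: judge variance averages out over judges,
    residual (judge-by-item) variance over judges and items."""
    return math.sqrt(var_judge / n_judges
                     + var_residual / (n_judges * n_items))

# Illustrative components only: adding judges helps with both terms,
# adding items only with the residual term.
print(round(cut_score_se(4.0, 200.0, 8, 146), 3))
print(round(cut_score_se(4.0, 200.0, 10, 250), 3))
```

This is why the study can trade off panel size against test length when targeting a precision of one percentage point.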
7.
Article in English | MEDLINE | ID: mdl-12386445

ABSTRACT

Norm-referenced pass/fail decisions are quite common in achievement testing in health sciences education. The use of relative standards has the advantage of correcting for variations in test difficulty. However, relative standards also have some serious drawbacks, and the use of an absolute, fixed standard is often preferred. The current study investigates the consequences of using an absolute instead of a relative standard. The performance of the developed standard-setting procedure was investigated using actual progress test scores obtained at the Maastricht medical school over a period of eight years. When the absolute instead of the relative standard was used, 6% of the decisions changed: 2.6% of the outcomes changed from fail to pass, and 3.4% from pass to fail. The failure rate, which was approximately constant when the relative standard was used, varied from 2% to 47% across tests when an absolute standard was used. It is concluded that an absolute standard is precarious because of variations in test difficulty.
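The decision flips described above can be illustrated with a toy cohort. The scores, the mean-minus-one-SD relative rule, and the absolute cut of 55 below are all hypothetical, chosen only to show how the same scores can pass under one standard and fail under the other:

```python
def relative_cut(scores, sd_fraction=1.0):
    """Relative standard: cohort mean minus a fraction of the cohort SD,
    so the cut moves with test difficulty."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return mean - sd_fraction * sd

def decision_changes(scores, absolute_cut):
    """Fraction of examinees whose pass/fail outcome flips when the
    relative standard is replaced by a fixed absolute cut."""
    rel = relative_cut(scores)
    flips = sum((s >= rel) != (s >= absolute_cut) for s in scores)
    return flips / len(scores)

scores = [35, 42, 48, 50, 55, 58, 60, 66, 70, 75]
print(decision_changes(scores, absolute_cut=55))  # → 0.2
```

On a harder test the whole score distribution shifts down, the relative cut follows it, but the absolute cut does not: that is the mechanism behind the 2% to 47% swing in failure rates reported in the study.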
