Results 1 - 2 of 2
1.
Teach Learn Med; 14(1): 20-3, 2002.
Article in English | MEDLINE | ID: mdl-11865744

ABSTRACT

BACKGROUND: Whether examinees benefit from the opportunity to change answers to examination questions has been discussed widely. PURPOSE: This study was undertaken to document the impact of answer changing on performance on a computer-based examination in a second-year medical school course. METHODS: This study analyzed data from a 2-hour, 80-item computer-delivered multiple-choice examination administered to 190 students (166 second-year medical students and 24 physician assistant students). RESULTS: There was a small but significant net improvement in overall score when answers were changed: one student's score increased by 7 points, 93 students' scores increased by 1 to 4 points, and 38 decreased by 1 to 3 points. On average, lower-performing students benefited slightly less than higher-performing students. Students spent more time on questions for which they changed their answers and were more likely to change answers on more difficult items. CONCLUSIONS: Although the net effect is quite small, students should not be discouraged from changing answers, especially on difficult questions that require careful consideration.


Subjects
Computer-Assisted Instruction , Education, Medical, Undergraduate/methods , Educational Measurement/methods , Physician Assistants/psychology , Students, Medical/psychology , Educational Measurement/standards , Humans , Iowa , Physician Assistants/standards
2.
Adv Health Sci Educ Theory Pract; 4(3): 261-270, 1999.
Article in English | MEDLINE | ID: mdl-12386483

ABSTRACT

Context: The University of Iowa College of Medicine has developed a series of computer-based clinical simulations and successfully integrated them into the clinical clerkship curriculum. The computerized patient simulations provide a high degree of realism in simulating a clinical encounter. In an effort to improve the validity of our clinical skills assessment, we have initiated testing research utilizing these simulations. Because of the high costs associated with employing expert raters for performance scoring, automated scoring was deemed essential. Purpose: This study is designed to address the preliminary research questions related to utilizing the simulations for performance assessment and developing a psychometrically sound automated scoring mechanism. Specifically, it addresses issues of reliability in relation to rater and simulation characteristics, and provides essential data required for designing a sound methodology to obtain ratings for modeling. Design: The judgments of 3 expert clinician raters, grading the responses of 69 third-year medical students to 2 computerized simulations, are analyzed in a generalizability study. A random-effects (persons by raters) ANOVA was performed to estimate variance components for modeling. A case facet was added to the analysis to provide data on performance assessment characteristics. Estimation of the magnitude of each variance component represents the outcome of the generalizability study; the variance estimates are then used in the decision-study phase of the research. Results: Only moderate levels of inter-rater reliability were obtained. Four or more raters were indicated to obtain adequate reliability. A high level of task/simulation specificity was found, and three or more simulations were indicated for performance assessment. Suggestions for improving ratings are offered.
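The analysis this abstract describes follows standard generalizability theory: a G study estimates variance components for a fully crossed persons x raters design from a random-effects ANOVA, and a D study projects the generalizability coefficient that would be obtained with different numbers of raters. The Python sketch below illustrates that calculation on simulated ratings only; it is not the article's scoring code, and the data, function names, and variance values are hypothetical.

import numpy as np

def g_study_p_x_r(scores):
    # G study for a fully crossed persons x raters design, one rating per cell.
    # scores: 2-D array of shape (n_persons, n_raters).
    # Returns (var_persons, var_raters, var_interaction_error) from the
    # expected-mean-square equations of the random-effects ANOVA.
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ss_p = n_r * np.sum((person_means - grand) ** 2)
    ss_r = n_p * np.sum((rater_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_pr = ss_total - ss_p - ss_r            # interaction confounded with error

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    var_pr_e = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)    # negative estimates truncated at zero
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    return var_p, var_r, var_pr_e

def d_study_g_coefficient(var_p, var_pr_e, n_raters):
    # Relative G coefficient when scores are averaged over n_raters raters.
    return var_p / (var_p + var_pr_e / n_raters)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated ratings: 69 examinees x 3 raters, purely illustrative.
    true_ability = rng.normal(0.0, 1.0, size=(69, 1))
    rater_effect = rng.normal(0.0, 0.3, size=(1, 3))
    noise = rng.normal(0.0, 1.2, size=(69, 3))
    ratings = true_ability + rater_effect + noise

    var_p, var_r, var_pr_e = g_study_p_x_r(ratings)
    for k in (1, 2, 3, 4, 5):
        print(f"raters={k}: G = {d_study_g_coefficient(var_p, var_pr_e, k):.2f}")

Because the abstract also reports high task/simulation specificity, a fuller design would add a case (simulation) facet; the same expected-mean-square algebra extends to that two-facet design, with the person-by-case component typically dominating the projected number of simulations needed.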
