Results 1 - 3 of 3
1.
BMC Med Educ ; 17(1): 106, 2017 Jun 28.
Article in English | MEDLINE | ID: mdl-28659125

ABSTRACT

BACKGROUND: Clinicians making decisions require the ability to self-monitor and to evaluate their certainty of being correct while remaining mindful of the potential consequences of alternative actions. For clinical students, this ability can be inferred from their responses to multiple-choice questions (MCQs) by recording their certainty in the correctness of each response and their avoidance of options that are potentially unsafe.

METHODS: Response certainty was assessed for fifth-year medical students (n = 330) during a summative MCQ examination by having students indicate their certainty in each response they gave. Incorrect responses were classified by an expert panel according to their inherent level of safeness (response consequence). Analyses compared response certainty and response consequence across student performance groupings.

RESULTS: As students' certainty in their responses increased, the odds that they answered correctly increased and the odds of giving unsafe answers decreased. However, for some ability groups the odds of an incorrect response being unsafe increased with high certainty.

CONCLUSIONS: Certainty in, and safeness of, MCQ responses can provide information additional to the traditional number-correct score. In this sample, even students below the standard demonstrated appropriate certainty. However, apart from those scoring lowest, students' incorrect responses were more likely to be unsafe when they expressed high certainty. These findings suggest that measures of certainty and consequence are somewhat independent of the number of correct responses to MCQs and could provide useful extra information, particularly for candidates close to the pass-fail threshold.
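The odds relationships reported above can be illustrated with a minimal sketch; the counts below are hypothetical and are not taken from the study:

```python
def odds(successes, failures):
    """Odds in favour of an outcome: ratio of successes to failures."""
    return successes / failures

# Hypothetical response counts by certainty level (illustration only):
# at low certainty, 60 correct vs 40 incorrect responses;
# at high certainty, 90 correct vs 10 incorrect responses.
low_odds = odds(60, 40)    # 1.5
high_odds = odds(90, 10)   # 9.0

# Odds ratio: how much the odds of answering correctly
# increase when moving from low to high certainty.
odds_ratio = high_odds / low_odds  # 6.0
```

With these made-up counts, high certainty multiplies the odds of a correct answer sixfold, which is the kind of pattern the abstract describes.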


Subjects
Clinical Competence/standards , Clinical Decision-Making , Educational Measurement/standards , Students, Medical , Attitude of Health Personnel , Choice Behavior , Factor Analysis , Humans , Probability
2.
Resuscitation ; 50(3): 281-6, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11719157

ABSTRACT

The number of short 'life support' and emergency care courses available is increasing. Variability in examiner assessments has been reported previously for more traditional types of examination, but there are few data on the reliability of the assessments used on these newer courses. This study evaluated the reliability and consistency of instructor marking for the Resuscitation Council UK Advanced Life Support Course. Twenty-five instructors from 15 centres throughout the UK were shown four staged, video-recorded defibrillation tests (one repeated) and three cardiac arrest simulation tests in order to assess inter-observer and intra-observer variability. These tests form part of the final assessment of competence on an Advanced Life Support course. Significant variability was demonstrated between instructors, with poor levels of agreement of 52-80% for defibrillation tests and 52-100% for cardiac arrest simulation tests. There was evidence of differences in instructors' observation and recognition of errors and in their rating tendencies. Four instructors made a different pass/fail decision when shown defibrillation test 2 for a second time, leading to only moderate intra-observer agreement (kappa = 0.43). In conclusion, there is significant variability between instructors in the assessment of advanced life support skills, which may undermine the present assessment mechanisms for the advanced life support course. Validation of the assessment tools for the rapidly growing number of life support courses is needed, with urgent steps to improve reliability where required.
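The intra-observer statistic quoted above (kappa = 0.43) is Cohen's kappa, which corrects raw percentage agreement for the agreement expected by chance. A minimal sketch of the calculation follows; the pass/fail ratings are hypothetical, not taken from the study:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two sets of ratings over the same items."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: sum over labels of the product of each
    # rater's marginal frequency for that label.
    p_e = sum((ratings_a.count(label) / n) * (ratings_b.count(label) / n)
              for label in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail decisions by one instructor on two viewings
# of the same recorded tests (illustration only):
first_viewing  = ["pass", "pass", "fail", "fail"]
second_viewing = ["pass", "fail", "fail", "fail"]
kappa = cohens_kappa(first_viewing, second_viewing)  # 0.5
```

Here the raters agree on 3 of 4 items (75%), but chance agreement is 50%, so kappa is only 0.5, which illustrates why a kappa of 0.43 counts as merely moderate agreement even though raw agreement may look high.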


Subjects
Advanced Cardiac Life Support/standards , Education, Medical, Continuing/standards , Educational Measurement/standards , Humans , Reproducibility of Results , United Kingdom