Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38977526

ABSTRACT

Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty "common content" multiple choice items were offered to all UK medical schools in 2016-17, and a further sixty in 2017-18, with five available in both years. Thirty medical schools participated, providing sixty datasets across the two sessions and 14,342 individual sittings. Schools selected items to embed in written assessments near the end of their programmes. We applied Rasch modelling to evaluate unidimensionality, model fit statistics and item quality, horizontal equating to compare performance across schools, and vertical equating to compare item performance across time. Of the sixty datasets, three were not unidimensional and eight violated goodness-of-fit measures. Item-level statistics identified potential improvements in item construction and provided quality assurance. Horizontal equating demonstrated large differences in scores across schools, while vertical equating showed that item characteristics were stable across sessions. Rasch modelling provides significant advantages in model- and item-level reporting compared with classical approaches. However, the complexity of the analysis and the relatively small number of educators familiar with Rasch modelling must be addressed locally before a programme can benefit. Furthermore, because Rasch modelling is comparatively novel in this setting, there is greater ambiguity about how to proceed when a Rasch model identifies misfitting or problematic data.
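
The abstract does not say which software carried out the analysis. As a rough illustration of the model itself, the sketch below (Python, numpy/scipy only) simulates a binary response matrix and recovers item difficulties by joint maximum likelihood; the sample sizes, simulated data, and estimation method are illustrative assumptions, not the study's pipeline.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical response matrix (candidates x items, 1 = correct); the study itself
    # analysed 14,342 sittings of 60 common-content items from 30 schools.
    rng = np.random.default_rng(0)
    n_persons, n_items = 300, 60
    true_theta = rng.normal(0.0, 1.0, n_persons)   # candidate ability
    true_beta = rng.normal(0.0, 1.0, n_items)      # item difficulty
    prob = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
    responses = rng.binomial(1, prob)

    def neg_log_likelihood(params):
        theta, beta = params[:n_persons], params[n_persons:]
        logits = theta[:, None] - beta[None, :]
        # Rasch model: P(correct) = logistic(ability - difficulty)
        log_lik = responses * logits - np.logaddexp(0.0, logits)
        return -log_lik.sum()

    fit = minimize(neg_log_likelihood, np.zeros(n_persons + n_items), method="L-BFGS-B")
    beta_hat = fit.x[n_persons:]
    beta_hat -= beta_hat.mean()                    # centre difficulties to fix the scale
    print("Estimated difficulties, first five items:", np.round(beta_hat[:5], 2))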

2.
Qual Health Res; 32(12): 1881-1896, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35981561

ABSTRACT

Most people in high-income countries experience dying while receiving healthcare, yet dying has no clear beginning, and contexts influence how dying is conceptualised. This study investigates how UK physicians conceptualise the dying patient. We employed Scoping Study Methodology to obtain medical literature from 2006 to 2021, and Qualitative Content Analysis, informed by social materialism, to analyse the stated and implied meanings of the language used. Our findings indicate that physicians do not draw a dichotomous distinction between dying and not dying, but construct conceptions of the dying patient in subjective ways linked to their practice. We argue that future research should focus on exploring the practice-based, workplace challenges to understanding patient dying. Furthermore, pre-COVID-19 literature related dying to chronic illness, whereas analysis of literature published since the pandemic generated conceptions of dying from acute illness. Researchers should note the ongoing effects of COVID-19 on societal and medical awareness of dying.


Subjects
COVID-19, Physicians, Humans, Patients, Qualitative Research, United Kingdom
3.
BMJ Open; 11(9): e046056, 2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34479932

ABSTRACT

OBJECTIVE: To measure Differential Attainment (DA) among Scottish medical students and to explore whether attainment gaps increase or decrease during medical school. DESIGN: A retrospective analysis of undergraduate medical student performance on written assessment, measured at the start and end of medical school. SETTING: Four Scottish medical schools (the universities of Aberdeen, Dundee, Edinburgh and Glasgow). PARTICIPANTS: 1512 medical students who attempted (but did not necessarily pass) final written assessment. MAIN OUTCOME MEASURES: The study modelled the change in attainment gap during medical school for four student demographic categories (white/non-white, international/Scottish-domiciled, male/female and with/without a known disability) to test whether the attainment gap grew, shrank or remained stable during medical school. Separately, the study modelled the expected versus actual frequency of different demographic groups in the top and bottom deciles of the cohort. RESULTS: The attainment gap grew significantly for white versus non-white students (t(449.39)=7.37, p=0.001, d=0.49, 95% CI 0.34 to 0.58), for internationally domiciled versus Scottish-domiciled students (t(205.8)=-7, p=0.01, d=0.61, 95% CI -0.75 to -0.42) and for male versus female students (t(1336.68)=3.54, p=0.01, d=0.19, 95% CI 0.08 to 0.27). International, non-white and male students received higher marks than their comparison groups at the start of medical school but lower marks by final assessment. No significant differences were observed for disability status. Students with a known disability, Scottish students and non-white students were over-represented in the bottom decile and under-represented in the top decile. CONCLUSIONS: The tendency for attainment gaps to grow during undergraduate medical education suggests that educational factors at medical schools may, however inadvertently, contribute to DA. It is of critical importance that medical schools investigate attainment gaps within their cohorts and explore potential underlying causes.
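
For readers unfamiliar with the statistics quoted above, the sketch below shows the general shape of the comparison: a Welch t-test and a pooled-SD Cohen's d on simulated change-in-attainment scores for two hypothetical groups. The group sizes, means, and spreads are invented for the example and do not reproduce the study data.

    import numpy as np
    from scipy import stats

    # Hypothetical change scores (final minus entry performance, standardised) for two groups,
    # standing in for comparisons such as white vs non-white or international vs Scottish-domiciled.
    rng = np.random.default_rng(1)
    group_a = rng.normal(0.15, 1.0, 1100)
    group_b = rng.normal(-0.30, 1.0, 400)

    # Welch's t-test (unequal variances), consistent with the fractional df reported above
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

    def cohens_d(x, y):
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        return (x.mean() - y.mean()) / np.sqrt(pooled_var)

    print(f"t = {t_stat:.2f}, p = {p_value:.3g}, d = {cohens_d(group_a, group_b):.2f}")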


Subjects
Undergraduate Medical Education, Medical Students, Female, Humans, Male, Retrospective Studies, Medical Schools, Scotland
4.
BMC Med Educ; 21(1): 323, 2021 Jun 05.
Article in English | MEDLINE | ID: mdl-34090426

ABSTRACT

BACKGROUND: Because assessment systems differ across UK medical schools, meaningful cross-school comparisons of undergraduate students' performance on knowledge tests are difficult. Ahead of the introduction of a national licensing assessment in the UK, we evaluated schools' performance on a shared pool of "common content" knowledge test items to compare candidates at different schools and to evaluate whether they would pass under different standard-setting regimes. Such information can help develop a cross-school consensus on standard setting for shared content. METHODS: We undertook a cross-sectional study in the academic sessions 2016-17 and 2017-18. Sixty "best of five" multiple choice "common content" items were delivered each year, with five used in both years. In 2016-17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017-18, the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, creating a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items, carried out a "like-for-like" comparison of schools that used the same set of items, and then modelled the impact of different passing standards on these schools. RESULTS: Schools varied substantially in candidate total scores, with large between-school differences in performance (Cohen's d around 1). A passing standard that would see 5% of candidates at high-scoring schools fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard that would see 5% of candidates at low-scoring schools fail would see virtually no candidates from high-scoring schools fail. CONCLUSIONS: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied by enough that standards producing realistic fail rates at one medical school may produce substantially different pass rates at other medical schools, despite identical content and candidates being governed by the same regulator. Regardless of which hypothetical standards are "correct" as judged by experts, large institutional differences in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for future licensing assessments.
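
A minimal sketch of the standard-setting exercise described in the results: anchor a cut score to a 5% fail rate at a hypothetical high-scoring school, observe the fail rate it implies at a hypothetical low-scoring school, then reverse the anchor. The score distributions below are assumptions for illustration, not the study's data.

    import numpy as np

    # Hypothetical percent-correct score distributions for two schools sitting the same items
    rng = np.random.default_rng(2)
    high_school = rng.normal(75, 8, 500)
    low_school = rng.normal(65, 8, 500)

    # Cut score chosen so that 5% of candidates at the high-scoring school fail
    cut_high = np.percentile(high_school, 5)
    print(f"Cut {cut_high:.1f}% -> fail rate at low-scoring school: {np.mean(low_school < cut_high):.0%}")

    # Cut score chosen so that 5% of candidates at the low-scoring school fail
    cut_low = np.percentile(low_school, 5)
    print(f"Cut {cut_low:.1f}% -> fail rate at high-scoring school: {np.mean(high_school < cut_low):.0%}")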


Subjects
Undergraduate Medical Education, Medical Schools, Cross-Sectional Studies, Educational Measurement, Humans, United Kingdom
5.
Med Teach; 43(9): 1039-1043, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33844612

ABSTRACT

PURPOSE OF THE ARTICLE: Students who fail assessments are at risk of negative consequences, including emotional distress and cessation of studies. Identifying students at risk of failure before they experience difficulties may considerably improve their outcomes. METHODS: Using a prospective design, we collected simple measures of engagement (formative assessment scores, compliance with routine administrative tasks, and attendance) over the first 6 weeks of Year 1. These measures were combined into an engagement score, which was used to predict performance on a summative examination sat 14 weeks after the start of medical school. The project was repeated for five cohorts, giving a total sample size of 1042. RESULTS: Simple linear regression showed that engagement predicted performance (adjusted R² = 0.03, F(1,1040) = 90.09, p < 0.001), a small effect size. More than half of failing students had an engagement score in the lowest two deciles. CONCLUSIONS: At-risk medical students can be identified with some accuracy immediately after starting medical school using routinely collected, easily analysed data, allowing tailored interventions to support them. The toolkit provided here can reproduce the predictive model in any equivalent educational context. Medical educationalists must evaluate how the advantages of early detection balance against the potential invasiveness of using student data.
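
A sketch of the kind of model described in the methods: a simple linear regression of a summative exam mark on an early engagement score, reporting adjusted R² and the share of failing students whose engagement sits in the lowest two deciles. The simulated coefficients, pass mark, and noise level are assumptions and will not reproduce the reported figures exactly.

    import numpy as np
    from scipy import stats

    # Hypothetical data: engagement score over weeks 1-6 and summative exam mark at week 14
    rng = np.random.default_rng(3)
    n = 1042
    engagement = rng.normal(0.0, 1.0, n)
    exam = 60 + 2.5 * engagement + rng.normal(0.0, 14.0, n)   # deliberately weak relationship

    slope, intercept, r, p, se = stats.linregress(engagement, exam)
    r2_adj = 1 - (1 - r**2) * (n - 1) / (n - 2)
    print(f"adjusted R^2 = {r2_adj:.3f}, p = {p:.3g}")

    # Of the students who fail (assumed pass mark of 50%), what share sit in the
    # lowest two engagement deciles?
    bottom_two_deciles = np.percentile(engagement, 20)
    failing = exam < 50
    print(f"Failing students in bottom two deciles: {np.mean(engagement[failing] < bottom_two_deciles):.0%}")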


Subjects
Undergraduate Medical Education, Medical Students, Educational Measurement, Humans, Prospective Studies, Medical Schools
6.
Acad Med; 96(7): 958-963, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33735127

ABSTRACT

Scholars are increasingly aware that studies across many disciplines cannot be replicated by independent researchers. Here, the authors describe how medical education research may be vulnerable to this "replication crisis," explain how researchers can act together to reduce the risks, and discuss positive steps that can increase confidence in research findings. Medical education research contributes to policy and influences practitioner behavior. Findings that cannot be replicated call the credibility of the original research into question. This raises the possibility that unhelpful or even harmful changes to medical education have been implemented as a result of research that appeared defensible but was not. The authors discuss and provide examples of 6 factors that may endanger the replicability of medical education research: (1) small sample sizes, (2) small effect sizes, (3) exploratory designs, (4) flexibility in design choices, analysis strategy, and outcome measures, (5) conflicts of interest, and (6) very active fields with many competing research teams. By considering these risk factors, researchers can increase the likelihood that their studies generate credible results. Importantly, medical education researchers can adopt techniques used successfully elsewhere to improve the rigor of their investigations. Researchers can improve their work through better planning at the development stage, careful consideration of design choices, and sensible data analysis. The wider medical education community can help by encouraging higher levels of collaboration among medical educators, by routinely evaluating existing educational innovations, and by raising the prestige of replication and collaborative medical education research. Medical education journals should also adopt new approaches to publishing. As medical education research improves, so too will the quality of medical education and patient care.


Subjects
Medical Education/methods, Health Services Research/methods, Patient Care/statistics & numerical data, Medical Schools/organization & administration, Bias, Data Analysis, Medical Education/trends, Educational Status, Female, Health Services Research/statistics & numerical data, Humans, Male, Patient Care/trends, Patient Safety, Policy Making, Publishing/organization & administration, Research Design/trends, Research Support as Topic, Risk Factors
7.
BMC Med Educ; 21(1): 86, 2021 Feb 02.
Article in English | MEDLINE | ID: mdl-33530962

ABSTRACT

BACKGROUND: The use of remote online delivery for summative assessments has been underexplored in medical education. Due to the COVID-19 pandemic, all end-of-year applied knowledge multiple choice question (MCQ) tests at one UK medical school were switched from on-campus to remote assessments. METHODS: We conducted an online survey of student experience with remote exam delivery and compared test performance on remote versus invigilated, campus-based forms of similar assessments for Year 4 and Year 5 students across two academic years. RESULTS: Very few students experienced technical or practical problems in completing their exam remotely. Test anxiety was reduced for some students but increased for others. The majority of students preferred the traditional setting of invigilated exams in a computer lab, feeling this ensured a level playing field for all candidates. The mean score was higher for Year 4 students on the remotely delivered form of the exam than on the campus-based form (76.53% [SD 6.57] vs. 72.81% [SD 6.64]; t(438.38) = 5.94, p = 0.001; d = 0.56), whereas candidate performance was equivalent across both forms for Year 5 students. CONCLUSIONS: Remote online MCQ exam delivery is an effective and generally acceptable approach to summative assessment and could be used again in future without detriment to students if onsite delivery is not possible.
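
The Year 4 effect size can be checked from the summary statistics quoted above. The sketch below recomputes Cohen's d from the reported means and standard deviations, assuming equal group sizes because the abstract does not give the group ns.

    import numpy as np

    # Year 4 summary statistics from the abstract; group sizes are assumed equal (n = 220 each)
    m_remote, sd_remote, n_remote = 76.53, 6.57, 220
    m_campus, sd_campus, n_campus = 72.81, 6.64, 220

    pooled_sd = np.sqrt(((n_remote - 1) * sd_remote**2 + (n_campus - 1) * sd_campus**2)
                        / (n_remote + n_campus - 2))
    d = (m_remote - m_campus) / pooled_sd
    print(f"Cohen's d = {d:.2f}")   # about 0.56, matching the reported effect size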


Subjects
Academic Performance, COVID-19, Distance Education/methods, Undergraduate Medical Education/methods, Educational Measurement/methods, Anxiety, COVID-19/epidemiology, Consumer Behavior, Educational Measurement/standards, Humans, Pandemics, SARS-CoV-2, Students/psychology, United Kingdom/epidemiology