1.
Med Teach ; 45(6): 565-573, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36862064

ABSTRACT

The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate the assessment of written responses, or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required, and few conceptual or methodological guides exist for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: 1) describe practical considerations involved in reading and conducting studies in medical education using AI, 2) define basic terminology, and 3) identify which medical education problems and data are ideally suited for using AI.
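As a concrete illustration of the kind of application this guide discusses, the sketch below shows one common way automated scoring of short written responses is prototyped: TF-IDF features feeding a simple classifier. It is not drawn from the article; the training answers and labels are invented, and a real system would need far more data and validation before any assessment use.

```python
# A minimal, hypothetical sketch of automated scoring of written responses:
# TF-IDF features plus logistic regression. Toy data, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: student answers scored by faculty (1 = acceptable).
answers = [
    "Beta blockers reduce myocardial oxygen demand by lowering heart rate.",
    "The heart pumps blood around the body.",
    "ACE inhibitors block conversion of angiotensin I to angiotensin II.",
    "Medicine is about helping people.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(answers, labels)

# Score an unseen response; predict_proba gives a graded confidence that could
# be surfaced to a human rater rather than used as a final mark.
print(model.predict_proba(["Beta blockers lower heart rate and contractility."])[0, 1])
```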


Subjects
Artificial Intelligence, Medical Education, Humans, Reproducibility of Results
2.
Acad Med ; 97(11S): S22-S28, 2022 Nov 1.
Article in English | MEDLINE | ID: mdl-35947480

ABSTRACT

PURPOSE: Feedback continues to present a challenge for competency-based medical education. Clear, consistent, and credible feedback is vital to supporting a learner's ongoing development, yet it can be difficult to gather clinical performance data about residents. This study sought to determine whether providing residents with electronic health record (EHR)-based report cards, as well as an opportunity to discuss these data with faculty trained in the R2C2 model, can help residents understand and interpret their clinical performance metrics.

METHOD: Using action research methodology, the author team collected EHR data from July 2017 to February 2020 for all residents (n = 21) in one 5-year Emergency Medicine program and created personalized report cards for each resident. During October 6-17, 2020, 8 of 17 eligible residents agreed to have their feedback conversations recorded and to participate in a subsequent interview with a nonphysician member of the research team. Data were analyzed using thematic analysis, with themes identified inductively.

RESULTS: In analyzing both the feedback conversations and the individual interviews with faculty and residents, the authors identified 2 main themes: (1) reactions and responses to receiving personalized EHR data and (2) the value of EHR data for assessment and feedback purposes. All participants believed that EHR data metrics are useful for prompting self-reflection, and many pointed to their utility in suggesting actionable changes in clinical practice. For faculty, having a tool through which underperforming residents can be shown "objective" data about their clinical performance helps underscore the need for improvement, particularly when residents are resistant.

CONCLUSIONS: The EHR is a valuable source of educational data, and this study demonstrates one of the many thoughtful ways it can be used for assessment and feedback purposes.
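To make the idea of an EHR-based report card concrete, here is a minimal sketch of how raw encounter-level EHR data might be aggregated into per-resident metrics. It is not the authors' tool: the column names and metrics below are assumptions chosen for illustration.

```python
# Hypothetical sketch: aggregate encounter-level EHR data into a per-resident
# "report card" of simple clinical metrics. Columns and metrics are invented.
import pandas as pd

# Invented EHR extract: one row per patient encounter.
encounters = pd.DataFrame({
    "resident": ["A", "A", "B", "B", "B"],
    "length_of_stay_hr": [3.2, 5.1, 2.8, 4.0, 6.3],
    "imaging_ordered": [1, 0, 1, 1, 0],
    "return_within_72h": [0, 1, 0, 0, 0],
})

# One row per resident, one column per metric.
report_card = encounters.groupby("resident").agg(
    n_encounters=("length_of_stay_hr", "size"),
    median_los_hr=("length_of_stay_hr", "median"),
    imaging_rate=("imaging_ordered", "mean"),
    return_rate_72h=("return_within_72h", "mean"),
)
print(report_card)
```

In practice, a table like this would be the starting point for the facilitated feedback conversation, not a standalone judgment of performance.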


Subjects
Internship and Residency, Mentoring, Humans, Feedback, Mentoring/methods, Electronic Health Records, Clinical Competence, Research Design, Medical Faculty
3.
Med Educ ; 55(10): 1123-1130, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33825192

ABSTRACT

INTRODUCTION: Individual assessment disregards the team aspect of clinical work, while team assessment collapses the individual into the group. Neither is sufficient for medical education, where measures need to attend to the individual while also accounting for interactions with others. Valid and reliable measures of interdependence are critical within medical education given the collaborative manner in which patient care is provided, yet the field currently lacks a consistent approach to measuring the performance of individuals working together as part of a larger healthcare team. This review's objective was to identify existing approaches to measuring this interdependence.

METHODS: Following Arksey and O'Malley's methodology, we conducted a scoping review in 2018 and updated it to 2020. A search strategy involving five databases located more than 12,000 citations. At least two reviewers independently screened titles and abstracts, screened full texts (n = 161), and performed data extraction on the 27 included articles. Interviews were also conducted with key informants to check whether any literature was missing and to confirm that our interpretations made sense.

RESULTS: Eighteen of the 27 articles were empirical; nine were conceptual with an empirical illustration. Eighteen were quantitative; nine used mixed methods. The articles spanned five disciplines and various application contexts, from online learning to sports performance. Only two of the included articles were from the field of medical education. The articles conceptualised interdependence of a group, using theoretical constructs such as collaboration synergy; of a network, using constructs such as degree centrality; and of a dyad, using constructs such as synchrony. Both descriptive (e.g. social network analysis) and inferential (e.g. multilevel modelling) approaches were described.

CONCLUSION: Efforts to measure interdependence are scarce and scattered across disciplines. Multiple theoretical concepts and inconsistent terminology may be limiting programmatic work. This review motivates further study of measurement techniques, particularly those combining multiple approaches, to capture interdependence in medical education.
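To illustrate one of the descriptive approaches the review identifies, the sketch below computes degree centrality over a small, invented clinical collaboration graph. It is an assumption-laden toy, not an analysis from any of the included articles.

```python
# Hypothetical sketch of a social-network-analysis measure of interdependence:
# degree centrality within a care team. The graph below is invented.
import networkx as nx

# An edge means two team members co-managed at least one patient.
team = nx.Graph()
team.add_edges_from([
    ("resident_1", "attending"), ("resident_1", "nurse_1"),
    ("resident_2", "attending"), ("nurse_1", "attending"),
])

# degree_centrality returns, for each node, the fraction of other nodes it is
# directly connected to -- one simple index of how embedded (interdependent)
# an individual is within the team.
print(nx.degree_centrality(team))
```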


Subjects
Medical Education, Delivery of Health Care, Humans
4.
Acad Med ; 90(11 Suppl): S50-S55, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26505102

ABSTRACT

BACKGROUND: Raters represent a significant source of unexplained, and often undesired, variance in performance-based assessments. To better understand rater variance, this study investigated how various raters, observing the same performance, perceived relationships among the different noncognitive attributes measured in performance assessments.

METHOD: Medical admissions data from a Multiple Mini-Interview (MMI) used at one Canadian medical school were collected and subsequently analyzed using the Many Facet Rasch Model (MFRM) and hierarchical clustering. This MMI consisted of eight stations; at each station, a faculty member and an upper-year medical student rated applicants on various noncognitive attributes, including communication, critical thinking, effectiveness, empathy, integrity, maturity, professionalism, and resolution.

RESULTS: The Rasch analyses revealed differences between faculty and student raters across the eight MMI stations. They also showed that, at times, raters were unable to distinguish between the various noncognitive attributes. Hierarchical clustering highlighted differences in how faculty and student raters perceived the various noncognitive attributes, as well as differences in how individual raters associated those attributes within a station.

CONCLUSIONS: The MFRM and hierarchical clustering helped to explain some of the variability associated with raters in a way that other measurement models are unable to capture. These findings highlight that differences in ratings may result from raters holding different interpretations of an observed performance. This study has implications for more purposeful rater selection and rater profiling in performance-based assessments.
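As an illustration of the clustering step, the sketch below applies hierarchical clustering to an invented raters-by-attributes score matrix to see which attributes raters treat as interchangeable. It is not the study's actual analysis (and does not implement the MFRM); the data, scale, distance metric, and cluster count are all assumptions.

```python
# Hypothetical sketch: cluster noncognitive attributes by how similarly
# raters score them. Attributes that raters cannot distinguish should show
# small pairwise distances and merge early in the hierarchy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

attributes = ["communication", "empathy", "professionalism", "critical_thinking"]
# Invented data: rows = raters, columns = attributes (scores on a 1-7 scale).
ratings = np.array([
    [6, 6, 5, 3],
    [5, 5, 6, 2],
    [7, 6, 6, 4],
    [4, 5, 5, 2],
])

# Cluster the attribute columns using correlation distance and average linkage.
dist = pdist(ratings.T, metric="correlation")
tree = linkage(dist, method="average")
print(dict(zip(attributes, fcluster(tree, t=2, criterion="maxclust"))))
```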


Subjects
Medical Faculty, School Admission Criteria, Medical Schools, Medical Students/psychology, Canada, Cluster Analysis, Communication, Empathy, Humans, Interviews as Topic, Psychological Models, Observer Variation, Professionalism, Psychometrics, Reproducibility of Results, Thinking