2.
BMC Med Educ ; 24(1): 609, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824578

ABSTRACT

BACKGROUND: Evidence indicates that communication skills learnt in the classroom are often not readily transferable to the assessment methods applied or to the clinical environment. An observational study was conducted to objectively evaluate students' communication skills in different learning environments. The study sought to investigate the extent to which the communication skills demonstrated by students in classroom, clinical, and assessment settings align. METHOD: A mixed methods study was conducted to observe and evaluate students during the fourth year of a five-year medical program. Participants were video-recorded during structured classroom 'interactional skills' sessions, clinical encounters with real patients, and an OSCE station calling upon communication skills. The Calgary-Cambridge Observational Guides were used to evaluate students in the different settings. RESULTS: This study observed 28 students. Findings revealed that while students were able to practise a broad range of communication skills in the classroom, in the clinical environment information-gathering and relationship-building became the focus of their encounters with patients. In the OSCEs, limited time and high-pressure scenarios caused students to rush to complete the task, which focussed solely on information-gathering and/or explanation, diminishing the opportunity for rapport-building with the patient. CONCLUSION: These findings indicate the poor alignment that can develop between the skills practised across learning environments. Further research is needed to investigate the development and application of students' skills over the long term, to understand supports for and barriers to effective teaching and learning of communication skills in different learning environments.


Subject(s)
Clinical Competence , Communication , Education, Medical, Undergraduate , Educational Measurement , Humans , Clinical Competence/standards , Education, Medical, Undergraduate/methods , Students, Medical , Teaching , Male , Female , Physician-Patient Relations
4.
Can Med Educ J ; 15(2): 34-38, 2024 May.
Article in English | MEDLINE | ID: mdl-38827904

ABSTRACT

Purpose: Given the COVID-19 pandemic, many Objective Structured Clinical Examinations (OSCEs) have been adapted to virtual formats without addressing whether physical examination maneuvers can or should be assessed virtually. In response, we developed a novel touchless physical examination station for a virtual OSCE and gathered validity evidence for its use. Methods: We pilot-tested a touchless physical examination station in a virtual OSCE in which Internal Medicine residents had to verbalize their approach to the physical examination, interpret images and videos of findings provided upon request, and make a diagnosis. We explored differences in performance by training year using ANOVA. In addition, we analyzed data using elements of Bloom's taxonomy of learning, i.e., knowledge, understanding, and synthesis. Results: Sixty-seven residents (PGY1-3) participated in the OSCE. Scores on the pilot station differed significantly between training levels (F=3.936, p=0.024, ηp²=0.11). The pilot station-total correlation (STC) was r=0.558, and the item-station correlations ranged from r=0.115 to 0.571, with the most discriminating items being those that assessed application of knowledge (interpretation and synthesis) rather than recall. Conclusion: This touchless physical examination station was feasible, had acceptable psychometric characteristics, and discriminated between residents at different levels of training.
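The between-level comparison reported above (one-way ANOVA with partial eta squared as the effect size) can be sketched as follows. The scores are simulated, not the study's data, so the group means, SDs, and per-level sample sizes are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical station scores for three training levels (PGY-1 to PGY-3);
# means, SDs, and group sizes are invented for illustration.
pgy1 = rng.normal(60, 10, 22)
pgy2 = rng.normal(65, 10, 22)
pgy3 = rng.normal(70, 10, 23)

# One-way ANOVA across training levels
f, p = stats.f_oneway(pgy1, pgy2, pgy3)

# Partial eta squared for a one-way design:
# SS_between / (SS_between + SS_within)
groups = [pgy1, pgy2, pgy3]
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
eta_p2 = ss_between / (ss_between + ss_within)

print(f"F={f:.3f}, p={p:.4f}, partial eta^2={eta_p2:.3f}")
```

Note that for a one-way design with a single factor, partial eta squared coincides with eta squared (the between-group share of the total sum of squares).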




Subject(s)
COVID-19 , Clinical Competence , Educational Measurement , Internship and Residency , Physical Examination , Humans , Physical Examination/methods , Educational Measurement/methods , Internal Medicine/education , SARS-CoV-2 , Pandemics , Female , Male , Virtual Reality
5.
Can Med Educ J ; 15(2): 14-26, 2024 May.
Article in English | MEDLINE | ID: mdl-38827914

ABSTRACT

Purpose: Competency-based medical education relies on feedback from workplace-based assessment (WBA) to direct learning. Unfortunately, WBAs often lack rich narrative feedback and show bias towards Medical Expert aspects of care. Building on research examining interactive assessment approaches, the Queen's University Internal Medicine residency program introduced a facilitated, team-based assessment initiative ("Feedback Fridays") in July 2017, aimed at improving holistic assessment of resident performance on the inpatient medicine teaching units. In this study, we aim to explore how Feedback Fridays contributed to formative assessment of Internal Medicine residents within our current model of competency-based training. Method: A total of 53 residents participated in facilitated, biweekly group assessment sessions during the 2017-2018 academic year. Each session was a 30-minute facilitated assessment discussion held with one inpatient team, which included medical students, residents, and their supervising attending. Feedback from the discussion was collected, summarized, and documented in narrative form in the residents' electronic WBA forms by the program's assessment officer. For research purposes, verbatim transcripts of the feedback sessions were analyzed thematically. Results: The researchers identified four major themes in the feedback: communication, intra- and inter-personal awareness, leadership and teamwork, and learning opportunities. Although the feedback related to a broad range of activities, it placed strong emphasis on competencies within the intrinsic CanMEDS roles. A clear formative focus in the feedback was another important finding. Conclusions: The introduction of facilitated team-based assessment in the Queen's Internal Medicine program filled an important gap in WBA by providing learners with detailed feedback across all CanMEDS roles and by providing constructive recommendations for identified areas for improvement.




Subject(s)
Clinical Competence , Internal Medicine , Internship and Residency , Qualitative Research , Internal Medicine/education , Humans , Competency-Based Education/methods , Formative Feedback , Leadership , Feedback , Educational Measurement/methods , Communication
6.
JMIR Med Educ ; 10: e52207, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38825848

ABSTRACT

Background: The relationship between educational outcomes and the use of web-based clinical knowledge support systems in teaching hospitals remains unknown in Japan. A previous study on this topic may have been affected by recall bias because it relied on a self-reported questionnaire. Objective: We aimed to explore the relationship between the use of the Wolters Kluwer UpToDate clinical knowledge support system in teaching hospitals and residents' General Medicine In-Training Examination (GM-ITE) scores, objectively evaluating the association between the total number of UpToDate hospital use logs and GM-ITE scores. Methods: This nationwide cross-sectional study included postgraduate year 1 and 2 residents who took the examination in the 2020 academic year. Hospital-level information was obtained from published web pages, and UpToDate hospital use logs were provided by Wolters Kluwer. We analyzed 215 teaching hospitals with at least 5 GM-ITE examinees and use logs from 2017 to 2019. Results: The study population comprised 3013 residents from the 215 eligible hospitals. High-use hospital residents had significantly higher GM-ITE scores than low-use hospital residents (mean 26.9, SD 2.0 vs mean 26.2, SD 2.3; P=.009; Cohen d=0.35, 95% CI 0.08-0.62). GM-ITE scores were significantly correlated with the total number of hospital use logs (Pearson r=0.28; P<.001). Multilevel analysis revealed a positive association between GM-ITE scores and the total number of logs divided by the number of hospital physicians (estimated coefficient=0.36, 95% CI 0.14-0.59; P=.001). Conclusions: The findings suggest that the development of residents' clinical reasoning abilities through UpToDate use is associated with higher GM-ITE scores. Higher use of UpToDate may lead physicians and residents in high-use hospitals to practice more evidence-based medicine, in turn producing better educational outcomes.
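The two headline statistics above (Cohen's d for the high-use vs. low-use comparison and the Pearson correlation between use logs and scores) can be sketched with simulated hospital-level data; the study's raw data are not public, so the distributions and effect sizes below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data for 215 hospitals: total UpToDate use logs and
# mean GM-ITE scores (both simulated for illustration).
n = 215
logs = rng.gamma(shape=2.0, scale=5000, size=n)
scores = 25 + 0.00005 * logs + rng.normal(0, 2, n)

# Pearson correlation between use logs and scores
r, p = stats.pearsonr(logs, scores)

# Cohen's d comparing high-use vs low-use hospitals (median split),
# using the pooled standard deviation
high = scores[logs > np.median(logs)]
low = scores[logs <= np.median(logs)]
pooled_sd = np.sqrt(((len(high) - 1) * high.std(ddof=1) ** 2 +
                     (len(low) - 1) * low.std(ddof=1) ** 2) /
                    (len(high) + len(low) - 2))
d = (high.mean() - low.mean()) / pooled_sd

print(f"Pearson r={r:.2f} (p={p:.3g}), Cohen's d={d:.2f}")
```

The median split and the per-physician normalization used in the study's multilevel model are separate choices; the sketch only covers the simpler two statistics.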


Subject(s)
Hospitals, Teaching , Internet , Internship and Residency , Humans , Internship and Residency/statistics & numerical data , Japan , Cross-Sectional Studies , Clinical Competence/statistics & numerical data , Educational Measurement , Female , Male , Education, Medical, Graduate , Adult
7.
S Afr Fam Pract (2004) ; 66(1): e1-e7, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38832393

ABSTRACT

The 'Mastering your Fellowship' series provides examples of the question format encountered in the written and clinical examinations for the Fellowship of the College of Family Physicians of South Africa (FCFP [SA]) examination. The series is aimed at helping family medicine registrars prepare for this examination. Model answers are available online.


Subject(s)
Family Practice , Fellowships and Scholarships , Humans , South Africa , Family Practice/education , Educational Measurement , Clinical Competence
8.
Sci Eng Ethics ; 30(3): 23, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833046

ABSTRACT

The Defining Issues Test 2 (DIT-2) and Engineering Ethical Reasoning Instrument (EERI) are designed to measure the ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools, and the DIT-2 especially, have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase "ethical decision-making." It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting their deeper understanding of this topic and the web of related concepts. In fact, there was no significant correlation between the instrument scores and the concept map scores, suggesting that the way first-year students conceptualize ethical decision-making does not predict how they perform scenario-based ethical reasoning, which may be more situated. This disparity indicates a need to quantify engineering ethical reasoning and decision-making more precisely if we wish to inform assessment outcomes with the results of such quantitative analyses.


Subject(s)
Decision Making , Educational Measurement , Engineering , Students , Humans , Engineering/ethics , Engineering/education , Decision Making/ethics , Universities , Thinking , Morals , Moral Development , Male , Female , Ethics, Professional/education , Problem Solving/ethics
9.
Korean J Med Educ ; 36(2): 175-188, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38835310

ABSTRACT

PURPOSE: This study evaluated the underlying subdomain structure of the Self-Directed Learning Readiness Scale (SDLRS) for medical students and refined the instrument to measure the subdomains, providing evidence for construct validity. Developing self-directed learners is a well-recognized goal amongst medical educators. The SDLRS has been used frequently; however, its lack of established construct validity makes results difficult to interpret. METHODS: To identify valid subdomains of the SDLRS, items were calibrated with the graded response model (GRM), and the results were used to construct a 30-item short form. Short-form validity was evaluated by examining the correspondence between individual students' total scores on the short form and on the original instrument. RESULTS: A five-subdomain model explained the SDLRS item response data reasonably well. The subdomains were: (1) initiative and independence in learning, (2) self-concept as an effective learner, (3) openness to learning opportunities, (4) love of learning, and (5) acceptance of responsibility for one's own learning. The unidimensional GRM for each subdomain fit the data better than multidimensional models. Total scores from the refined short form and the original form correlated at 0.98, with a mean difference of 1.33, providing evidence for validity. Nearly 91% of 179 respondents were accurately classified into the low, average, and high readiness groups. CONCLUSION: Sufficient evidence was obtained for the validity and reliability of the refined 30-item short form targeting five subdomains to measure medical students' readiness to engage in self-directed learning.
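The short-form validation step above (correlating individual students' short-form totals with their full-scale totals) can be sketched as follows. The responses are simulated, and the 58-item length, 5-point response format, and random choice of 30 items are placeholders for the actual instrument and the GRM-based item selection.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 179 students answering a 58-item, 5-point Likert scale
# (item count and response format are assumptions, not the study's data).
n_students, n_items = 179, 58
trait = rng.normal(0, 1, n_students)               # latent readiness
noise = rng.normal(0, 1, (n_students, n_items))    # item-level noise
raw = trait[:, None] + noise

# Map continuous responses onto a 1..5 Likert scale
responses = np.clip(np.round(raw * 0.8 + 3), 1, 5).astype(int)

# Full-scale totals vs. totals on a 30-item subset; a random subset stands
# in for the GRM-informed item selection used in the study.
full_total = responses.sum(axis=1)
short_items = rng.choice(n_items, size=30, replace=False)
short_total = responses[:, short_items].sum(axis=1)

r = np.corrcoef(short_total, full_total)[0, 1]
print(f"short-form vs full-scale total correlation: r={r:.2f}")
```

With a shared latent trait driving all items, the two totals correlate strongly, which is the pattern the study reports (r = 0.98) as evidence that the short form preserves the full scale's ranking of students.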


Subject(s)
Learning , Students, Medical , Humans , Students, Medical/psychology , Surveys and Questionnaires , Female , Male , Education, Medical, Undergraduate/methods , Self Concept , Reproducibility of Results , Psychometrics , Self-Directed Learning as Topic , Young Adult , Educational Measurement/methods , Adult
10.
Korean J Med Educ ; 36(2): 213-221, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38835313

ABSTRACT

PURPOSE: This study developed and implemented case-based flipped learning using illness script worksheets and investigated the responses of preclinical students and professors to the intervention in terms of its effectiveness, design, and implementation. METHODS: The study was conducted at a medical school in Korea, where the "clinical reasoning method" course, originally a lecture-oriented course, was redesigned into a flipped learning format. In total, 42 second-year medical students and 15 professors participated in this course. After the class, online surveys were conducted, and a focus group interview was held with seven students to explore the students' experiences in more detail. RESULTS: In total, 37 students and seven professors completed the survey. The mean score across all items was 3.12/4 for the student survey and 3.43/4 for the professor survey. The focus group interview results were categorized into beneficial aspects of, and challenges to, the development of clinical reasoning. CONCLUSION: The findings indicated that responses to the intervention were generally positive, suggesting that it is an effective instructional method for fostering clinical reasoning skills in preclinical medical students.


Subject(s)
Clinical Reasoning , Curriculum , Education, Medical, Undergraduate , Focus Groups , Problem-Based Learning , Students, Medical , Humans , Problem-Based Learning/methods , Education, Medical, Undergraduate/methods , Republic of Korea , Surveys and Questionnaires , Clinical Competence , Faculty, Medical , Schools, Medical , Educational Measurement , Male , Female
11.
J Christ Nurs ; 41(3): 184-190, 2024.
Article in English | MEDLINE | ID: mdl-38853319

ABSTRACT

Test construction and test reviews are responsibilities nursing faculty arduously undertake, with an obligation to give appropriate effort and time to prepare and review exams. During test review, item analysis and statistical analysis offer valuable empirical information about the exam. However, objective compassion is also needed and can be demonstrated through careful test question construction and item analysis. Furthermore, compassion is needed in preparing students for the Next Generation NCLEX-RN (NGN) and constructing exams that appropriately test students' clinical judgment.


Subject(s)
Christianity , Educational Measurement , Empathy , Humans , Educational Measurement/methods , Students, Nursing/psychology , Education, Nursing, Baccalaureate/methods , Faculty, Nursing/psychology , Adult , Female , Male
12.
Br J Biomed Sci ; 81: 12229, 2024.
Article in English | MEDLINE | ID: mdl-38854458

ABSTRACT

This paper describes the successful implementation of an assessment literacy strategy within a Biomedical Sciences degree. Teaching was aligned with an assessment literacy framework and aimed to prepare undergraduates for a literature comprehension assessment. Students were introduced to the purpose of the assessment, and an adapted Miller's pyramid model illustrated how it contributed to competency development during their degree. Students read primary research papers and answered questions relating to the publications. They were then introduced to the processes of assessment and collaboratively graded answers of different standards. Finally, student and faculty grades were compared, differences considered, and key characteristics of answers discussed. Most students reported that they understood more about assessment standards than prior to the intervention [139/159 (87.4%)] and felt it had helped prepare them for their exam [138/159 (86.8%)]. The majority also reported increased confidence in evaluating data [118/159 (74.2%)], communicating their reasoning [113/159 (71.1%)], and considering what a reader needs to know [127/159 (79.9%)]. Students were asked to state the most important thing they had learned from the assessment literacy teaching; notably, no responses referred to domain-specific knowledge. A total of 129 free-text responses were mapped to the University of Edinburgh graduate attribute framework: 93 (72.1%) statements mapped to the category "Research and Enquiry," 66 (51.2%) to "Communication," and 21 (16.3%) to "Personal and Intellectual Autonomy." To explore any longer-term impact of the assessment literacy teaching, a focus group was held with students from the same cohort two years after the original intervention. Themes from this part of the study included that the teaching had provided insights into the standards and expectations for the assessment, as well as the benefits of domain-specific knowledge.
A variety of aspects related to graduate attributes were also identified. Here, assessment literacy as a vehicle for graduate attribute development was an unexpected outcome. We propose that by explicitly engaging students with purpose, process, standards, and expectations, assessment literacy strategies may be used to successfully raise awareness of developmental progression, and enhance skills, aptitudes, and dispositions beneficial to Biomedical Sciences academic achievement and life after university.


Subject(s)
Curriculum , Educational Measurement , Humans , Educational Measurement/methods , Literacy , Male , Female , Students/psychology , Comprehension
13.
South Med J ; 117(6): 342-344, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830589

ABSTRACT

OBJECTIVES: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence. METHODS: Three question banks were searched for key words regarding out-of-hospital births. A thematic analysis was then utilized to analyze the results. RESULTS: Forty-seven questions were identified; of these, 55% indicated absent, inadequate, limited, or irregular prenatal care in the question stem. CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leading to the potential for bias and adverse outcomes. Adjustments to question stems that accurately portray current evidence are recommended.


Subject(s)
Licensure, Medical , Humans , United States , Licensure, Medical/standards , Female , Pregnancy , Prenatal Care/standards , Educational Measurement/methods , Education, Medical/methods , Education, Medical/standards
15.
BMC Med Inform Decis Mak ; 24(1): 157, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840136

ABSTRACT

BACKGROUND: Learning burn patient assessment is essential for nursing students but can be emotionally distressing. This study aimed to compare the effects of the feedback lecture method with a serious game (BAM Game) on nursing students' knowledge and skills in the assessment of burn patients. METHOD: In this randomized controlled trial, 42 nursing students in their 5th semester at Mashhad University of Medical Sciences School of Nursing and Midwifery were randomly assigned to intervention (BAM Game, available for two weeks) and control (feedback lecture method presented in two 90-minute sessions) groups. Two weeks after the intervention, all students were evaluated for their knowledge (using a knowledge assessment test) and skills (using an Objective Structured Clinical Examination). Statistical analysis involved the independent t-test, Fisher's exact test, analysis of covariance (ANCOVA), and univariable and multivariable ordinal logistic regression models. RESULTS: Following the intervention, the skill scores were 16.4 (SD 2.2) for the intervention group and 11.8 (SD 3.8) for the control group. Similarly, the knowledge scores were 17.4 (SD 2.2) for the intervention group and 14.7 (SD 2.6) for the control group. Both differences were statistically significant (P < .001) and remained significant even after adjusting for factors such as age, gender, marital status, residence, university entrance exam rank, and annual GPA (P < .05). Furthermore, the BAM Game group showed a significantly higher skills rank than the feedback lecture group across most stations (eight of ten) (P < .05) in the univariable analysis. Multivariable analysis also revealed significantly higher skills scores across most stations after adjusting for the same factors (P < .05). These results suggest that the BAM Game group achieved skills scores 1.5 to 3.9 points higher than the feedback lecture group.
CONCLUSIONS: This study demonstrated that nursing students who participated in the BAM game group exhibited superior performance in knowledge acquisition and skill development, compared to those in the control group. These results underscore a significant enhancement in educational outcomes for students involved with the BAM game, confirming its utility as a potent and effective pedagogical instrument within the realm of nursing education. TRIAL REGISTRATION: Iranian Registry of Clinical Trials: IRCT20220410054483N1, Registration date: 18/04/2022.


Subject(s)
Burns , Clinical Competence , Students, Nursing , Humans , Female , Male , Young Adult , Burns/therapy , Adult , Educational Measurement , Health Knowledge, Attitudes, Practice , Education, Nursing
16.
BMC Med Educ ; 24(1): 619, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840140

ABSTRACT

INTRODUCTION/AIM: Radiological imaging is crucial in modern clinical practice and requires thorough and early training. An understanding of cross-sectional imaging is essential for effective interpretation of such imaging. This study examines the extent to which completing an undergraduate ultrasound course has positive effects on the development of visual-spatial ability, knowledge of anatomical spatial relationships, understanding of radiological cross-sectional images, and theoretical ultrasound competencies. MATERIAL AND METHODS: This prospective observational study was conducted at a medical school with 3rd year medical students as part of a voluntary extracurricular ultrasound course. The participants completed evaluations (7-level Likert response formats and dichotomous questions "yes/no") and theoretical tests at two time points (T1 = pre course; T2 = post course) to measure their subjective and objective cross-sectional imaging skills competencies. A questionnaire on baseline values and previous experience identified potential influencing factors. RESULTS: A total of 141 participants were included in the study. Most participants had no previous general knowledge of ultrasound diagnostics (83%), had not yet performed a practical ultrasound examination (87%), and had not attended any courses on sonography (95%). Significant subjective and objective improvements in competencies were observed after the course, particularly in the subjective sub-area of "knowledge of anatomical spatial relationships" (p = 0.009). Similarly, participants showed improvements in the objective sub-areas of "theoretical ultrasound competencies" (p < 0.001), "radiological cross-section understanding and knowledge of anatomical spatial relationships in the abdomen" (p < 0.001), "visual-spatial ability in radiological cross-section images" (p < 0.001), and "visual-spatial ability" (p = 0.020). 
CONCLUSION: Ultrasound training courses can enhance the development of visual-spatial ability, knowledge of anatomical spatial relationships, radiological cross-sectional image understanding, and theoretical ultrasound competencies. Given these reciprocal positive effects, students should receive radiology training at an early stage of their studies so that they can benefit as early as possible from the improved skills, particularly in the disciplines of anatomy and radiology.


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Students, Medical , Ultrasonography , Humans , Prospective Studies , Male , Female , Educational Measurement , Young Adult , Adult , Curriculum
17.
BMC Med Educ ; 24(1): 620, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840190

ABSTRACT

BACKGROUND: Collective decision-making by grading committees has been proposed as a strategy to improve the fairness and consistency of grading and summative assessment compared to individual evaluations. In the 2020-2021 academic year, Washington University School of Medicine in St. Louis (WUSM) instituted grading committees in the assessment of third-year medical students on core clerkships, including the Internal Medicine clerkship. We explored how frontline assessors perceive the role of grading committees in the Internal Medicine core clerkship at WUSM and sought to identify challenges that could be addressed in assessor development initiatives. METHODS: We conducted four semi-structured focus group interviews with resident (n = 6) and faculty (n = 17) volunteers from inpatient and outpatient Internal Medicine clerkship rotations. Transcripts were analyzed using thematic analysis. RESULTS: Participants felt that the transition to a grading committee had benefits and drawbacks for both assessors and students. Grading committees were thought to improve grading fairness and reduce pressure on assessors. However, some participants perceived a loss of individual responsibility for students' grading. Furthermore, assessors recognized persistent challenges in communicating students' performance via assessment forms, as well as misunderstandings about the new grading process. Interviewees identified a need for more training in formal assessment; however, there was no universally preferred training modality. CONCLUSIONS: Frontline assessors view the switch from individual graders to a grading committee as beneficial due to a perceived reduction of bias and improvement in grading fairness; however, they report ongoing challenges in the use of assessment tools and an incomplete understanding of the grading and assessment process.


Subject(s)
Clinical Clerkship , Educational Measurement , Focus Groups , Students, Medical , Humans , Students, Medical/psychology , Internal Medicine/education , Clinical Competence/standards , Female , Male , Education, Medical, Undergraduate/standards , Faculty, Medical , Attitude of Health Personnel
18.
BMC Med Educ ; 24(1): 621, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840242

ABSTRACT

INTRODUCTION: The long case is used to assess medical students' proficiency in performing clinical tasks. As a formative assessment, its purpose is to offer feedback on performance, aiming to enhance and expedite clinical learning. The long case stands out as one of the primary formative assessment methods for clinical clerkships in low-resource settings but has received little attention in the literature. OBJECTIVE: To explore the experiences of medical students and faculty regarding the use of the long case as a formative assessment method at a tertiary care teaching hospital in a low-resource setting. METHODOLOGY: A qualitative study design was used. The study was conducted at Makerere University, a low-resource setting. The study participants were third- and fifth-year medical students as well as lecturers. Purposive sampling was utilized to recruit participants. Data collection comprised six Focus Group Discussions with students and five Key Informant Interviews with lecturers. The qualitative data were analyzed by inductive thematic analysis. RESULTS: Three themes emerged from the study: ward placement, case presentation, and case assessment and feedback. The findings revealed that students conduct their long cases at patients' bedsides within specific wards/units assigned for the entire clerkship. Effective supervision, feedback, and marks were highlighted as crucial practices that positively impact the learning process. However, challenges were identified, including insufficient orientation to the long case, the super-specialization of the hospital wards, pressure to hunt for marks, and inadequate feedback practices. CONCLUSION: The long case offers students exposure to real patients in a clinical setting. However, in tertiary care teaching hospitals, it is crucial to ensure proper design and implementation of this practice to expose students to a variety of cases. Adequate and effective supervision and feedback create valuable opportunities for each learner to present cases and receive corrections.


Subject(s)
Clinical Clerkship , Clinical Competence , Hospitals, Teaching , Qualitative Research , Students, Medical , Humans , Students, Medical/psychology , Faculty, Medical , Focus Groups , Male , Tertiary Care Centers , Educational Measurement , Formative Feedback , Female , Education, Medical, Undergraduate/methods , Resource-Limited Settings
19.
Med Educ Online ; 29(1): 2364990, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38848480

ABSTRACT

The COVID-19 pandemic triggered transformations in academic medicine, with the rapid adoption of remote teaching and online assessments. Whilst virtual environments show promise in evaluating medical knowledge, their impact on examiner workload is unclear. This study explores examiners' workload during different formats of the European Diploma in Anaesthesiology and Intensive Care Part 2 Structured Oral Examinations. We hypothesise that online exams result in a lower examiner workload than traditional face-to-face methods. We also investigate workload structure and its correlation with examiner characteristics and marking performance. In 2023, we prospectively evaluated examiner workload for three examination formats (face-to-face, hybrid, online) using the NASA TLX instrument, and analysed the impact of examiner demographics, candidate scoring agreement, and examination scores on workload. The overall NASA TLX score from 215 workload measurements in 142 examiners was high at 59.61 ± 14.13. The online examination carried a statistically significantly higher workload (61.65 ± 12.84) than the hybrid format, but not the face-to-face format. The primary contributors to workload were mental demand, temporal demand, and effort. Online exams were associated with elevated frustration. Male examiners and those spending more time on exam preparation experienced a higher workload. Holding multiple diploma specialties and familiarity with European Diploma in Anaesthesiology and Intensive Care exams were protective against high workload. Perceived workload did not affect marking agreement or examination scores across any format. Examiners experience a high workload. Online exams are not systematically associated with decreased workload, likely due to frustration. Despite workload differences, no impact on examiner performance or examination scores was found. The hybrid examination mode, combining face-to-face and online elements, was associated with a minor but statistically significant workload reduction. This hybrid approach may offer a more balanced and efficient examination process while maintaining integrity, cost savings, and increased accessibility for candidates.


Subject(s)
Anesthesiology , Critical Care , Educational Measurement , Workload , Humans , Anesthesiology/education , Male , Educational Measurement/methods , Europe , COVID-19/epidemiology , Female , Prospective Studies , Education, Distance/organization & administration , Clinical Competence
20.
BMC Med Educ ; 24(1): 636, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844972

ABSTRACT

BACKGROUND: General practitioner interns need to acquire the expected clinical, communication, personal and professional competencies. Internship evaluations use qualitative evaluation tables to assess competency acquisition. However, there is no standardised evaluation table used in France. Some faculties use the exhaustive, precise, and manageable Exceler evaluation tool. We aimed to evaluate the opinions of general practice interns in Brest about the acceptability and feasibility of using the Exceler evaluation tool to monitor competency acquisition during internships. METHODS: This qualitative study used intern focus groups. Six open-ended questions with optional follow-up questions were asked. Cards from the Dixit® game were used to guide and facilitate discussion. Open, axial, then integrative analysis of the verbatim transcripts was performed. RESULTS: This is the first study to evaluate intern opinions about GP internship evaluations using focus groups. Participants felt that the quality of existing evaluations was insufficient and that it was difficult to monitor their progress. Adapting evaluations to individual profiles and backgrounds seemed necessary. Exceler appeared to be a possible solution due to its content validity, flexibility of use and accessibility. However, there were comments about possible modifications. CONCLUSIONS: Analysing the opinions of tutors, supervisors and other practice centres could help identify potential barriers and reveal solutions to facilitate its implementation and use. TRIAL REGISTRATION: Not applicable.


Subject(s)
Clinical Competence , Feasibility Studies , Focus Groups , General Practice , Internship and Residency , Qualitative Research , Humans , Internship and Residency/standards , Clinical Competence/standards , General Practice/education , Educational Measurement/methods , Male , Female , Adult , France , Attitude of Health Personnel