Results 1 - 20 of 631
1.
Cureus ; 16(6): e61564, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962609

ABSTRACT

INTRODUCTION: Objective Structured Clinical Examinations (OSCEs) are essential assessments for evaluating the clinical competencies of medical students. The COVID-19 pandemic caused a significant disruption in medical education, prompting institutions to adopt virtual formats for academic activities. This study analyzes the feasibility, satisfaction, and experiences of pediatric board candidates and faculty during virtual or electronic OSCE (e-OSCE) training sessions using Zoom video communication (Zoom Video Communications, Inc., San Jose, USA). METHODS: A post-event survey assessed the perceptions of faculty and candidates and the perceived advantages and obstacles of e-OSCE. RESULTS: A total of 142 participants were invited to complete the survey, and 105 (73.9%) did so. There was equal gender representation. More than half of the participants were examiners. Overall satisfaction with the virtual e-OSCE was high, with a mean score of 4.7±0.67 out of 5. Most participants were likely to recommend e-OSCE to a friend or colleague (mean score 8.84±1.51/10). More faculty (66.1%) than candidates (40.8%) preferred e-OSCE (P=0.006). CONCLUSION: Transitioning to virtual OSCE training during the pandemic proved feasible, with high satisfaction rates. Further research on virtual OSCE training in medical education is recommended to optimize its implementation and outcomes.

2.
Med Teach ; : 1-6, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38943517

ABSTRACT

PURPOSE OF ARTICLE: This paper explores issues pertinent to the teaching and assessment of clinical skills at the early stages of medical training, aimed at preventing academic integrity breaches. The drivers for change, the changes themselves, and student perceptions of those changes are described. METHODS: Iterative changes to a summative high-stakes Objective Structured Clinical Examination (OSCE) in an undergraduate medical degree were undertaken in response to perceived or known breaches of assessment security. Initial strategies focused on implementing best-practice teaching and assessment design principles, in association with increased examination security. RESULTS: These changes failed to prevent alleged sharing of examination content between students. A subsequent iteration saw a more radical deviation from classic OSCE design, with students assessed on equivalent competencies rather than identical items (OSCE stations). This more recent approach was broadly acceptable to students and did not result in any detectable breaches of academic integrity. CONCLUSIONS: Ever-increasing degrees of assessment security need not be the response to breaches of academic integrity. Use of non-identical OSCE items across a cohort, underpinned by constructive alignment of teaching and assessment, may mitigate the incentives to breach academic integrity, though face validity is not universal.

3.
Am J Pharm Educ ; : 100734, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38944280

ABSTRACT

OBJECTIVE: To identify factors influencing patient responses in potentially sensitive situations that might lead to embarrassment (defined by politeness theory (PT) as positive face-threatening acts [FTAs]) or a sense of imposition (defined by PT as negative FTAs) during Objective Structured Clinical Examinations (OSCEs), and to assess participants' ability to mitigate such situations. METHODS: Nineteen OSCE video recordings of 10 pharmacy trainees interacting with mock patients were examined using the PT framework. All relevant participant speech acts were coded and quantified by type of FTA and the mitigation strategies used. Patient (assessor) responses were classified and then quantified as preferred (ie, quick) versus unpreferred (ie, delayed or hesitant) using conversation analysis. Chi-square tests were used to identify associations between relevant variables according to predefined hypotheses, using SPSS version 27. RESULTS: A total of 848 FTAs were analyzed. Participants failed to meet patient face needs in 32.4% of positive FTAs, 11.5% of negative FTAs, and 44.4% of combined positive and negative FTAs. Although patients disclosing information about inappropriate lifestyle behaviors (as per the OSCE scripts) expressed these via unpreferred mannerisms, participants were less likely to provide reassurance when patient face needs were challenged in this way (68.2% of these unpreferred responses received no reassuring feedback) than when they were maintained. CONCLUSION: Improving educational programs to include the context of patient face needs and conversational strategies for dealing with highly sensitive situations was suggested as a way to equip trainees with the skills to build rapport with patients effectively.
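The chi-square association analysis described in this abstract can be illustrated with a minimal stdlib sketch. The 2x2 counts below are hypothetical, not the study's data (the paper used SPSS version 27):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    e.g. FTA type (positive/negative) vs. patient response
    (preferred/unpreferred)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = FTA type, columns = preferred/unpreferred
table = [[120, 60], [90, 30]]
chi2 = chi_square_2x2(table)
# Compare chi2 against the critical value 3.841 (df = 1, alpha = 0.05)
associated = chi2 > 3.841
```

With these hypothetical counts the statistic is about 2.38, below the critical value, so no association would be concluded at the 5% level.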

4.
JMIR Med Educ ; 10: e47438, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38904482

ABSTRACT

Unlabelled: A significant component of Canadian medical education is the development of clinical skills. The medical curriculum assesses these skills through an objective structured clinical examination (OSCE), which assesses skills imperative to good clinical practice, such as patient communication, clinical decision-making, and medical knowledge. Despite the widespread implementation of this examination across all academic settings, few preparatory resources exist that cater specifically to Canadian medical students. MonkeyJacket is a novel, open-access, web-based application built with the goal of providing medical students with an accessible and representative tool for clinical skill development for the OSCE and clinical settings. This viewpoint paper presents the development of the MonkeyJacket application and its potential to assist medical students in preparing for clinical examinations and practical settings. Limited resources exist that are web-based; accessible in terms of cost; specific to the Medical Council of Canada (MCC); and, most importantly, scalable in nature. The goal of this study was to describe the potential utility of the application, particularly its capacity to provide practice and scalable formative feedback to medical students. MonkeyJacket was developed to give Canadian medical students the opportunity to practice their clinical examination skills and receive peer feedback on a centralized platform. The OSCE cases included in the application were developed using the MCC guidelines to ensure their applicability to a Canadian setting. There are currently 75 cases covering 5 specialties: cardiology, respirology, gastroenterology, neurology, and psychiatry. The MonkeyJacket application is a web-based platform that allows medical students to practice clinical decision-making skills in real time with their peers through a synchronous platform.
Through this application, students can practice patient interviewing, clinical reasoning, developing differential diagnoses, and formulating a management plan, and they can receive both qualitative and quantitative feedback. Each clinical case is associated with an assessment checklist that is accessible to students after practice sessions are complete; the checklist promotes personal improvement through peer feedback. This tool provides students with relevant case stems, follow-up questions that probe for differential diagnoses and management plans, assessment checklists, and the ability to review trends in their performance. The MonkeyJacket application provides medical students with a valuable tool that promotes clinical skill development for OSCEs and clinical settings. MonkeyJacket introduces a way for medical learners to receive feedback on patient interviewing and clinical reasoning skills that is both formative and scalable, in addition to promoting interinstitutional learning. Widespread use of this application can increase the practice of and feedback on clinical skills among medical learners. This will not only benefit the learner; more importantly, it can provide downstream benefits for the most valuable stakeholder in medicine: the patient.


Subject(s)
Clinical Competence , Internet , Humans , Canada , Educational Measurement/methods , Students, Medical , Education, Medical/methods , Curriculum
5.
Front Med (Lausanne) ; 11: 1395466, 2024.
Article in English | MEDLINE | ID: mdl-38903805

ABSTRACT

This study aimed to investigate the experience of medical students assessing their cohort peers in a formative clinical assessment. The exercise was designed to provide students with a formative experience prior to their summative assessment, and to determine what students could learn by being on the "other side of the mark sheet." Students were grateful for the experience, learning both from the assessment practice and from the individual written feedback provided immediately afterwards. They also described how much they learnt from seeing the assessment from the assessor's viewpoint, with many commenting that they learnt more from being the "assessor" than from being the "student" in the process. When asked how they felt about being assessed by their peers, some described the experience as more intimidating and stressful than assessment by clinicians. An interesting aspect of this study is that it also suggests that students' current learning context appears to affect their attitudes to their peers as assessors. It is possible that the competitive cultural milieu of the teaching hospital environment has a negative effect on medical student collegiality and peer support.

6.
Med Educ Online ; 29(1): 2370617, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38934534

ABSTRACT

While the objective structured clinical examination (OSCE) is a worldwide recognized and effective method to assess the clinical skills of undergraduate medical students, the latest Ottawa conference on the assessment of competences raised vigorous debates regarding the future and innovations of the OSCE. This study aimed to provide a comprehensive view of global research activity on the OSCE over the past decades and to identify clues for its improvement. We performed a bibliometric and scientometric analysis of OSCE papers published until March 2024, including a description of overall scientific productivity as well as an unsupervised analysis of the main topics and international scientific collaborations. A total of 3,224 items were identified from the Scopus database. There was a sudden spike in publications, especially related to virtual/remote OSCEs, from 2020 to 2024. We identified leading journals and countries in terms of the number of publications and citations. A co-occurrence term network identified three main clusters corresponding to different topics of OSCE research: two connected clusters related to OSCE performance and reliability, and a third cluster on students' experience, mental health (anxiety), and perception, with few connections to the previous two. Finally, the United States, the United Kingdom, and Canada were identified as leading countries in terms of scientific publications and collaborations in an international scientific network involving other European countries (the Netherlands, Belgium, Italy) as well as Saudi Arabia and Australia; the analysis also revealed the lack of important collaboration with Asian countries.
Various avenues for improving OSCE research have been identified: i) developing remote OSCE with comparative studies between live and remote OSCE and issuing international recommendations for sharing remote OSCE between universities and countries; ii) fostering international collaborative studies with the support of key collaborating countries; iii) investigating the relationships between student performance and anxiety.


Subject(s)
Bibliometrics , Clinical Competence , Education, Medical, Undergraduate , Educational Measurement , Humans , Educational Measurement/methods , Educational Measurement/standards , Education, Medical, Undergraduate/standards , Reproducibility of Results , Students, Medical/psychology , Students, Medical/statistics & numerical data , Biomedical Research/standards
7.
BMC Med Educ ; 24(1): 673, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886698

ABSTRACT

OBJECTIVE: To analyze satisfaction levels and perceptions of developing clinical competencies through the objective structured clinical examination (OSCE), and to explore the experiences, challenges, and suggestions of undergraduate dental students. METHODS: The study adopted a mixed-methods convergent design. Quantitative data were collected from 303 participants through surveys evaluating satisfaction levels with the OSCE. Additionally, qualitative insights were gathered through student focus group interviews, and fundamental themes were developed from diverse expressions on various aspects of OSCE assessments. Chi-square tests were performed to assess associations between variables. Data integration involved comparing and contrasting quantitative and qualitative findings to derive comprehensive conclusions. RESULTS: Satisfaction rates were 69.4% for the organization of OSCE stations and 57.4% for overall effectiveness. However, a crucial challenge was identified, with only 36.7% of students receiving adequate post-OSCE feedback. Furthermore, half of the students (50%) expressed concerns about the clinical relevance of OSCEs. The study showed significant associations (p < 0.05) between satisfaction levels and year of study as well as previous OSCE experience. Student focus group interviews revealed diverse perspectives on OSCE assessments: while students appreciated the helpfulness of OSCEs, concerns were raised regarding time constraints, stress, examiner training, and the perceived lack of clinical relevance. CONCLUSION: Students' concerns about the clinical relevance of OSCEs highlight the need for a more aligned assessment approach. Diverse perspectives on OSCE assessments reveal perceived helpfulness alongside challenges such as lack of feedback, examiner training, time constraints, and mental stress.


Subject(s)
Clinical Competence , Education, Dental , Educational Measurement , Focus Groups , Personal Satisfaction , Students, Dental , Humans , Students, Dental/psychology , Female , Male , Education, Dental/standards , Surveys and Questionnaires , Young Adult , Adult
8.
Rev Neurol (Paris) ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38705796

ABSTRACT

BACKGROUND: There is little consensus on how to announce a diagnosis of severe chronic disease in neurology. Other medical specialties, such as oncology, have developed assessment methods similar to the Objective Structured Clinical Examination (OSCE) to address this issue. Here we report the implementation of an OSCE focused on the announcement of a chronic disease diagnosis in neurology by residents. OBJECTIVE: We aimed to evaluate the acceptability, feasibility, and validity in routine practice of an OSCE combined with a theoretical course focused on diagnosis announcement in neurology. METHODS: Eighteen neurology residents were prospectively included between 2019 and 2022. First, they answered a questionnaire on their previous level of training in diagnosis announcement. Second, in a practical session with a simulated patient, they made a 15-minute diagnosis announcement followed by 5 minutes of immediate feedback from an expert observer present in the room. The OSCE consisted of four stations with standardized scenarios dedicated to the announcement of multiple sclerosis (MS), Parkinson's disease (PD), Alzheimer's disease (AD), and amyotrophic lateral sclerosis (ALS). Third, in a theory session, expert observers covered the essential theoretical points. All residents and expert observers completed an evaluation of the practical and theory sessions. RESULTS: Residents rated their previous level of diagnosis announcement training at 3.1/5. The most feared announcements were AD and ALS. The practical session was rated at a mean of 4.1/5 by residents and 4.8/5 by expert observers, and the theory session at a mean of 4.7/5 by residents and 5/5 by expert observers. After the OSCEs, 11 residents felt more confident about making an announcement. CONCLUSION: This study showed a benefit of using an OSCE to learn how to announce a diagnosis of severe chronic disease in neurology.
OSCEs could be used in many departments in routine practice and seem adapted to residents.

9.
J Dent Educ ; 2024 May 12.
Article in English | MEDLINE | ID: mdl-38736189

ABSTRACT

PURPOSE: This study aims to evaluate how student performance and perspectives changed when the Objective Structured Clinical Exam (OSCE) assessment system was changed from a composite score to discipline-specific grading at the Harvard School of Dental Medicine. METHODS: The retrospective study population consisted of all students (n = 349) who completed three OSCEs (OSCE 1, 2, and 3) as part of the predoctoral program during 2014-2023. Students' OSCE scores were obtained from the Office of Dental Education, and data on race/ethnicity and gender were obtained from admissions records. RESULTS: The likelihood of a student failing the OSCE significantly increased after the assessment system change, with an adjusted odds ratio of 20.12, while the number of failed subjects per student decreased, with an adjusted mean ratio of 0.48. Students perceived the OSCE as less useful after the change. Independent of the grading change, OSCEs 1 and 2 were seen as more useful than OSCE 3, which is administered in the last year of the Doctor of Dental Medicine program. CONCLUSION: The discipline-specific nature of the new assessment system helps identify specific areas for remediation, rather than the blanket remediation used previously, so that remediation efforts can target actual areas of deficiency and students can align their learning needs appropriately. Therefore, although the number of fails identified increased, the assessment change has provided more directed, actionable information from the OSCE to prepare students to work toward competency standards.

10.
JMIR Med Educ ; 10: e53997, 2024 04 30.
Article in English | MEDLINE | ID: mdl-38693686

ABSTRACT

SaNuRN is a five-year project by the University of Rouen Normandy (URN) and Côte d'Azur University (CAU) consortium to optimize digital health education for medical and paramedical students, professionals, and administrators. The project includes a skills framework, training modules, and teaching resources. By 2027, SaNuRN is expected to train a significant portion of the 400,000 health and paramedical students at the French national level. Our purpose is to give a synopsis of the SaNuRN initiative, emphasizing its novel educational methods and how they will enhance the delivery of digital health education. Our goals include showcasing SaNuRN as a comprehensive program consisting of a proficiency framework, instructional modules, and educational materials, and explaining how SaNuRN is implemented in the participating academic institutions. SaNuRN is aimed at educating and training health and paramedical students in digital health. The project results from a cooperative effort between URN and CAU, covering four French departments, and is based on the French National Referential on Digital Health (FNRDH), which defines the skills and competencies to be acquired and validated by every student in the health, paramedical, and social professions curricula. The SaNuRN team is currently adapting the existing URN and CAU syllabi to the FNRDH and developing short video capsules of 20 to 30 minutes to teach the relevant material. To ensure that the largest possible student population acquires the necessary skills, the project has developed a two-tier system involving facilitators who will expand the project's educational outreach and support students in learning the material efficiently.
With a focus on real-world scenarios and innovative teaching activities integrating telemedicine devices and virtual professionals, SaNuRN is committed to enabling continuous learning for healthcare professionals in clinical practice. The SaNuRN team introduced new ways of evaluating healthcare professionals by shifting from a knowledge-based to a competency-based evaluation, aligning with Miller's teaching pyramid and using the Objective Structured Clinical Examination and Script Concordance Test in digital health education. Drawing on the expertise of URN, CAU, and their public health and digital research laboratories and partners, the SaNuRN project represents a platform for continuous innovation, including telemedicine training and living labs with virtual and interactive professional activities. The project provides a comprehensive, personalized 30-hour training package for health and paramedical students, addressing all 70 FNRDH competencies. The program is enhanced with artificial intelligence (AI) and natural language processing (NLP) to create virtual patients and professionals for digital healthcare simulation. SaNuRN teaching materials are open access, and the project collaborates with academic institutions worldwide to develop educational material in digital health in English and multilingual formats. SaNuRN offers a practical and persuasive training approach that meets current digital health education requirements.


Subject(s)
Health Education , Education, Distance/methods , Education, Distance/trends , Forecasting , Health Education/trends , Health Education/methods
11.
GMS J Med Educ ; 41(2): Doc14, 2024.
Article in English | MEDLINE | ID: mdl-38779694

ABSTRACT

Modern medical moulages are becoming increasingly important in simulation-based health professions education. Their lifelikeness is important so that simulation engagement is not disrupted, while their standardization is crucial in high-stakes exams. This report describes in detail how three-dimensional transfers are developed and produced so that educators can develop their own. In addition, evaluation findings and lessons learnt from deploying transfers in summative assessments are shared. Step-by-step instructions are given for the creation and application of transfers, including materials and photographic visualizations. We also examined feedback on 10 exam stations (out of a total of 81) with self-developed three-dimensional transfers and complement this with additional lessons learnt. By the time of submission, the authors had successfully developed and deployed over 40 different three-dimensional transfers representing different clinical findings in high-stakes exams, using the techniques explained in this article or variations thereof. Feedback from students and examiners after completing the OSCE is predominantly positive, with lifelikeness being the quality most often commented upon. Caveats derived from feedback and our own experiences are included. The step-by-step approach can be adapted and replicated by healthcare educators to build their own three-dimensional transfers, widening the scope and lifelikeness of their simulations. At the same time, we propose that this level of lifelikeness should be expected by learners, so as not to disrupt simulation engagement. Our evaluation of their use in high-stakes assessments suggests they are both useful and accepted.


Subject(s)
Simulation Training , Humans , Simulation Training/methods , Educational Measurement/methods , Clinical Competence/standards , Skin Diseases , Models, Anatomic , Imaging, Three-Dimensional
12.
Cureus ; 16(4): e59008, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38800217

ABSTRACT

INTRODUCTION: Medical communication skills are a critical component of clinical medicine and patient satisfaction. Communication skills are difficult to teach and evaluate, necessitating tools that are effective and efficient. This study presents and validates the 7 Elements Communication Rating Form (7E-CRF), a streamlined, dual-purpose, evidence-based medical communication checklist that functions as both a teaching and an assessment tool. METHODS: A 14-item teaching and assessment tool is described and validated using face, concurrent, and predictive validity indices. The study was conducted with 661 medical students from the West Virginia School of Osteopathic Medicine (WVSOM). Student performance was assessed in year 1 labs, year 2 labs, and year 2 and year 3 objective structured clinical examinations (OSCEs). These internal indices were compared with student performance on the Humanistic Domain of the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 2-Performance Evaluation (PE), a licensure exam previously taken in year 3 or 4 of osteopathic medical school. RESULTS: The evidence of interrater reliability and predictive validity is strong. Compared against performance on the COMLEX Level 2-PE Humanistic Domain, the 7E-CRF can identify students at a 10-fold increased risk of failure. CONCLUSIONS: The 7E-CRF integrates instruction and assessment, based on a national and international model. Its simplicity, foundation in professional consensus, ease of use, and predictive efficacy make the 7E-CRF a highly valuable instrument for medical schools in teaching and evaluating competency in medical communication skills.
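The "10-fold increased risk of failure" reported above is a relative risk between two groups of students. A minimal sketch of how such a risk ratio is computed, using hypothetical counts rather than the study's data:

```python
def risk_ratio(events_a, total_a, events_b, total_b):
    """Relative risk: failure rate in group A divided by the
    failure rate in group B (e.g. students flagged by the
    checklist vs. those not flagged)."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical: 10 of 50 flagged students fail the licensure exam,
# versus 12 of 600 non-flagged students
rr = risk_ratio(10, 50, 12, 600)
# A ratio of 10 corresponds to a 10-fold increased risk
```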

13.
Adv Simul (Lond) ; 9(1): 13, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38581026

ABSTRACT

BACKGROUND: In adapting to COVID-19, many health professional training programs moved abruptly from in-person to online simulated patient interviews for teaching and evaluation, without the benefit of evidence regarding the efficacy of this mode of delivery. This paper reports on a multi-methods research project comparing in-person and online simulated patient interviews conducted by allied health professionals as part of an educational intervention offered at a large university teaching hospital. METHODS: Twenty-three participants conducted two 15-minute interviews with simulated patients using previously validated scenarios of patients presenting with suicide risk. To assess the equivalency of the two modalities, physiological and psychological stress were measured using heart rate variability parameters and the State-Trait Anxiety Inventory, respectively, and then compared across cohorts using t-tests. Reflective interviews elicited qualitative impressions of the simulations, which were subjected to thematic qualitative analysis. RESULTS: There were no statistical differences in measures of psychological stress or physiological arousal between participants who engaged in in-person versus online simulated interviews, suggesting the two modalities were equally effective in eliciting reactions commonly found in challenging clinical situations. In reflective interviews, participants commented on the realism of both modalities and noted that the simulated interviews provoked emotional and physiological responses consistent with actual patient encounters. CONCLUSIONS: These findings provide developing evidence that carefully designed online clinical simulations can be a useful tool for the education and assessment of healthcare professionals.

14.
J Surg Educ ; 81(6): 880-887, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38677896

ABSTRACT

OBJECTIVE: Remote OSCEs (Objective Structured Clinical Examinations) are an alternative evaluation method during pandemic periods, but they have never been evaluated in orthopedic surgery. We aimed to evaluate whether remote OSCEs would be feasible and effective for the assessment of undergraduate medical students. METHODS: A cross-sectional study was performed. Thirty-four students were randomly assigned to two equal groups, either the conventional OSCE group or the digital OSCE group. Three types of skills were assessed: technical procedure, clinical examination, and radiographic analysis. Students were graded and completed a satisfaction questionnaire for both types of OSCE. RESULTS: The mean score, out of 20, was 14.3 ± 2.5 (range 9.3-19) for the digital sessions versus 14.4 ± 2.3 (range 10-18.6) for the conventional sessions (p = 0.81). A Bland-Altman plot showed that 88% of students scored within the limits of agreement. Average global feedback differed for the items repeatability, relevance, and OSCE preference (p < 0.0001, p = 0.0001, and p < 0.0001, respectively), but not for the item concerning organization (p = 0.2). CONCLUSION: This comparison of digital and conventional OSCEs showed comparable scores between the two groups, whatever the skill assessed. However, the student evaluations showed some reluctance to take OSCEs remotely again.
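The Bland-Altman agreement analysis mentioned in this abstract compares paired scores via the mean difference (bias) and the 95% limits of agreement. A minimal stdlib sketch with hypothetical paired scores, not the study's data:

```python
from statistics import mean, stdev

def bland_altman_limits(scores_a, scores_b):
    """Mean difference (bias) and 95% limits of agreement
    (bias ± 1.96 SD of the paired differences)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired OSCE scores (digital vs. conventional), out of 20
digital = [14.0, 15.5, 12.0, 16.0, 13.5, 17.0]
conventional = [14.5, 15.0, 12.5, 15.5, 14.0, 16.5]
bias, lower, upper = bland_altman_limits(digital, conventional)

# Count students whose paired difference falls within the limits
within = sum(lower <= d - c <= upper for d, c in zip(digital, conventional))
```

With these hypothetical scores the bias is zero and all six paired differences fall within the limits of agreement.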


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Educational Measurement , Feasibility Studies , Orthopedics , Cross-Sectional Studies , Humans , Educational Measurement/methods , Education, Medical, Undergraduate/methods , Male , Female , Orthopedics/education , Orthopedic Procedures/education , COVID-19 , Surveys and Questionnaires
15.
Curr Pharm Teach Learn ; 16(6): 465-468, 2024 06.
Article in English | MEDLINE | ID: mdl-38582641

ABSTRACT

BACKGROUND AND PURPOSE: To describe one institution's approach to transforming high-stakes objective structured clinical examinations (OSCEs) from norm-referenced to criterion-referenced standard setting, and to evaluate the impact of these changes on OSCE performance and pass rates. EDUCATIONAL ACTIVITY AND SETTING: The OSCE writing team at the college selected a modified Angoff method appropriate for high-stakes assessments to replace the two-standard-deviation method previously used. Each member of the OSCE writing team independently reviewed the analytical checklist and calculated a passing score for active stations on the OSCEs; the group then met to determine a final pass score for each station. The team also determined critical cut points for each station, when indicated. After administration of the OSCEs, scores, pass rates, and the need for remediation were compared with the previous norm-referenced method. Descriptive statistics were used to summarize the data. FINDINGS: OSCE scores remained relatively unchanged after the switch to a criterion-referenced method, but the number of remediators increased up to 2.6-fold. In the first year, the average score increased from 86.8% to 91.7% while the remediation rate increased from 2.8% to 7.4%. In the third year, the average increased from 90.9% to 92% while the remediation rate increased from 6% to 15.6%. Likewise, the fourth-year average increased from 84.9% to 87.5% while the remediation rate increased from 4.4% to 9%. SUMMARY: Transition to a modified Angoff method did not impact the average OSCE score but did increase the number of remediations.
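The modified Angoff procedure described above derives a station pass score from judges' independent estimates of how a minimally competent student would perform on each checklist item. A minimal sketch with hypothetical estimates (the paper does not publish its checklists or judge data):

```python
from statistics import mean

def angoff_pass_score(judge_item_estimates):
    """Modified Angoff cut score: each judge estimates, per checklist
    item, the probability that a minimally competent student performs
    it correctly; the station pass score is the mean of the judges'
    summed estimates (expected number of items correct)."""
    per_judge_totals = [sum(items) for items in judge_item_estimates]
    return mean(per_judge_totals)

# Hypothetical estimates from three judges for a 4-item checklist
judges = [
    [0.9, 0.7, 0.8, 0.6],  # judge 1
    [0.8, 0.6, 0.9, 0.7],  # judge 2
    [1.0, 0.8, 0.7, 0.5],  # judge 3
]
cut = angoff_pass_score(judges)   # expected items correct
pass_pct = 100 * cut / 4          # cut score as a percentage
```

In practice the judges then meet to reconcile their independent estimates into a final cut score, as the abstract describes.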


Subject(s)
Educational Measurement , Humans , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Educational Measurement/standards , Clinical Competence/standards , Clinical Competence/statistics & numerical data , Education, Pharmacy/methods , Education, Pharmacy/standards , Education, Pharmacy/statistics & numerical data
16.
Pharmacy (Basel) ; 12(2)2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38668080

ABSTRACT

The Medical and Pharmacy Student Collaboration (MAPSC) student organization at the University of Southern California, Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, created an extracurricular, peer-led, virtual group mock objective structured clinical examination (MOSCE) to expose first-year pharmacy students (P1s) to the Pharmacists' Patient Care Process (PPCP). The purpose of this study is to evaluate the impact of a MAPSC MOSCE on P1s' self-reported confidence in applying the PPCP and on patient communication, medication knowledge, and clinical skills. An anonymous, optional, self-reported survey was administered to P1s before and after the event, in which they rated their confidence on a scale of 0-100 (0 = not confident, 100 = certainly confident). The statistical analysis was a paired two-tailed t-test with a significance level of p < 0.05. A total of 152 P1s and 30 facilitators attended the MOSCE. One hundred thirty-nine students met the inclusion criteria and were included in the data analysis. There was a statistically significant difference in the change in self-reported confidence for all PPCP components and learning outcomes. The results of our study strongly indicate that introducing P1 students to the PPCP through a MAPSC MOSCE format is a valuable experience.

17.
Med Teach ; : 1-9, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635469

ABSTRACT

INTRODUCTION: Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS: We used realist evaluation, collecting data through interviews and focus groups with participants across four UK medical schools who took part in an OSCE that aimed to enhance authenticity. RESULTS: Several features of OSCE stations (realistic, complex, complete cases; sufficient time; autonomy; props; guidelines; limited examiner interaction; etc.) combined to enable students to project into their future roles, judge and integrate information, consider their actions, and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not always work: focusing on unavoidable differences from practice, incongruous features, anxiety, and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS: The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst tentatively suggesting ways to promote authenticity, more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.

18.
Med Educ Online ; 29(1): 2339040, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38603644

ABSTRACT

To offset grade inflation, many clerkships combine faculty evaluations with objective assessments, including the National Board of Medical Examiners Subject Examination (NBME-SE) or Objective Structured Clinical Examination (OSCE); however, standardized methods are not established. Following a curriculum transition removing faculty clinical evaluations from summative grading, final clerkship designations of fail (F), pass (P), and pass-with-distinction (PD) were determined by combined NBME-SE and OSCE performance, with overall PD for the clerkship requiring meeting this threshold in both. At the time, 90% of students achieved PD on the Internal Medicine (IM) OSCE, resulting in overall clerkship grades primarily determined by the NBME-SE. The clerkship sought to enhance the OSCE to provide a more thorough objective clinical skills assessment, offset grade inflation, and reduce the NBME-SE's primary determination of the final clerkship grade. The single-station 43-point OSCE was enhanced to a three-station 75-point OSCE using the Reporter-Interpreter-Manager-Educator (RIME) framework to align patient encounters with targeted assessments of progressive skills and competencies related to the clerkship rotation. Student performances were evaluated pre- and post-OSCE enhancement. Student surveys provided feedback about the clinical realism and difficulty of the OSCE. Pre-intervention OSCE scores were tightly clustered (SD = 5.65%) around a high average performance, with scores being highly negatively skewed. Post-intervention OSCE scores were more dispersed (SD = 6.88%) around a lower average and far less skewed, resulting in an approximately normal distribution. This lowered the total number of students achieving PD on the OSCE and PD in the clerkship, thus reducing the relative weight of the NBME-SE in the overall clerkship grade. Student response was positive, indicating the examination was fair and reflective of their clinical experiences. Through structured development, OSCE assessment can provide a realistic and objective measurement of clinical performance as part of the summative evaluation of students.


Subject(s)
Clinical Clerkship , Students, Medical , Humans , Physical Examination , Curriculum , Internal Medicine/education , Clinical Competence , Educational Measurement/methods
19.
Nurs Rep ; 14(2): 683-694, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38525698

ABSTRACT

To determine the usefulness of combining two methodologies (OSCE and escape room) in a scenario simulation to evaluate a subject, and to determine how students rated this experience. An observational cross-sectional study was carried out with students enrolled in a sexual and reproductive health-care course as part of their nursing degree. The students had to solve four clinical cases based on the contents of the teaching practices of the subject by solving clues that led them to carry out procedures and techniques and provide care in scenario simulators. Students evaluated the experience using the GAMEX (Gameful Experience in Gamification) scale. Mean differences were estimated with their respective 95% confidence intervals. A total of 124 students participated. Of these, 63.7% (79) solved the clinical cases with their knowledge and skills. Most students (80.6%, 100) stated that they completely remembered and applied the knowledge of the topic during the game. Almost all (98.4%, 122) would recommend this experience. The dimensions with the best ratings on the GAMEX scale were "fun", with an average score of 4.7 points (0.49), followed by "critical thinking", with 4.2 (0.59). Women presented statistically better scores than men (mean difference: 1.58; 95% CI: 0.55, 2.61). The OSCE combined with an escape room using scenario simulations may be a useful tool to evaluate the subject. In addition, the students were satisfied, had fun, and recommended the experience. This study was not registered.
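The mean difference with a 95% confidence interval reported above (e.g. women vs. men: 1.58; 95% CI: 0.55, 2.61) follows the usual form, difference ± critical value × standard error. A sketch using a normal approximation and invented group scores (the data, function name, and use of z = 1.96 are assumptions for illustration, not the study's exact method):

```python
# Sketch of a mean difference between two independent groups with an
# approximate 95% CI: d ± 1.96 * SE, where SE combines each group's
# sample variance (normal approximation; a t critical value would be
# used for small samples).
import math
from statistics import mean, stdev

def mean_diff_ci(a, b, z=1.96):
    """Return (difference, (lower, upper)) for mean(a) - mean(b)."""
    d = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return d, (d - z * se, d + z * se)

# Hypothetical GAMEX dimension scores for two small groups:
group_a = [5, 4, 5, 4, 5]
group_b = [3, 3, 4, 3, 3]
d, (lo, hi) = mean_diff_ci(group_a, group_b)
```

An interval that excludes zero, as in the reported 0.55 to 2.61, indicates a statistically significant difference between the groups.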

20.
Cureus ; 16(2): e54662, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38529442

ABSTRACT

BACKGROUND: Medical students' transition to internship has a discernible gap in structured preparation, particularly in practical skill application. We introduced the internship preparatory clinical course (IPCC) to address this gap. METHODS: The course was conducted at the clinical skills and simulation center at King Saud University Medical City and included a total of eight skills distributed across four stations. It employed a timed-station methodology, inspired by the Objective Structured Clinical Examination but innovatively adapted as a teaching method. Participants were exposed to various stations, such as suturing techniques, interactive mannequins for anatomical structure demonstration, real ultrasound machines on simulated patients, IV cannulation, and urinary catheterization. To facilitate active learning, participants received course materials a day prior, equipped with QR codes for quick reference. Instructors emphasized on-the-spot review, fostering an environment where learners actively engage. After the internship, a follow-up survey was administered to obtain feedback, achieving a response rate of 83% (n=45/54). RESULTS: Feedback from the course was overwhelmingly positive, with 91.1% (n=41/45) rating the course 7 or above out of 10. Participants expressed a higher degree of confidence in providing wound care (median: 8, IQR: 2) and inserting or removing a Foley catheter (median: 8, IQR: 4). Lower confidence was observed in stoma examination and care (median: 5, IQR: 4). During their internships, participants reported that 100% (n=45/45) utilized suturing skills, 48.9% (n=22/45) performed focused assessment with sonography in trauma (FAST) examinations, and 62.2% (n=28/45) attempted nasogastric tube insertions. Additionally, 88.9% (n=40/45) performed wound examinations, 77.8% (n=35/45) provided wound care and dressing, 37.8% (n=17/45) performed abscess drainage, 51.1% (n=23/45) removed and 37.8% (n=17/45) inserted a Foley catheter, and 20% (n=9/45) provided stoma care. CONCLUSION: The IPCC effectively addresses the existing gap in medical education, bridging the theory-to-practice divide. The innovative use of the timed-station approach emphasizes the importance of active learning. Our results signify the importance of simulation training, as most interns acknowledged the positive impact of the course on their internship. We recommend preparing pre-interns for internships by giving special consideration to procedural skills, as these are most associated with medical errors. The timed-station approach can improve cost-effectiveness and enhance responsibility-driven learning.
