Results 1-20 of 11,659
2.
Can Med Educ J ; 15(2): 34-38, 2024 May.
Article in English | MEDLINE | ID: mdl-38827904

ABSTRACT

Purpose: Given the COVID-19 pandemic, many Objective Structured Clinical Examinations (OSCEs) have been adapted to virtual formats without addressing whether physical examination maneuvers can or should be assessed virtually. In response, we developed a novel touchless physical examination station for a virtual OSCE and gathered validity evidence for its use. Methods: We used a touchless physical examination OSCE station pilot-tested in a virtual OSCE in which Internal Medicine residents had to verbalize their approach to the physical examination, interpret images and videos of findings provided upon request, and make a diagnosis. We explored differences in performance by training year using ANOVA. In addition, we analyzed data using elements of Bloom's taxonomy of learning, i.e., knowledge, understanding, and synthesis. Results: Sixty-seven residents (PGY1-3) participated in the OSCE. Scores on the pilot station were significantly different between training levels (F = 3.936, p = 0.024, ηp² = 0.11). The pilot station-total correlation (STC) was r = 0.558, and the item-station correlations ranged from r = 0.115 to r = 0.571, with the most discriminating items being those that assessed application of knowledge (interpretation and synthesis) rather than recall. Conclusion: This touchless physical examination station was feasible, had acceptable psychometric characteristics, and discriminated between residents at different levels of training.
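
The station-level analysis described here, a one-way ANOVA across training years with partial eta squared as the effect size, is straightforward to reproduce. A minimal sketch with hypothetical scores (scipy; not the authors' code or data):

```python
import numpy as np
from scipy import stats

# Hypothetical station scores for residents in each training year
scores = {
    "PGY1": np.array([12.0, 14, 13, 11, 15]),
    "PGY2": np.array([15.0, 16, 14, 17, 15]),
    "PGY3": np.array([18.0, 17, 19, 16, 18]),
}

f_stat, p_val = stats.f_oneway(*scores.values())

# Partial eta squared for a one-way design: SS_between / (SS_between + SS_within)
all_scores = np.concatenate(list(scores.values()))
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in scores.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in scores.values())
eta_p2 = ss_between / (ss_between + ss_within)
print(f"F = {f_stat:.3f}, p = {p_val:.3f}, partial eta^2 = {eta_p2:.2f}")
```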




Subject(s)
COVID-19; Clinical Competence; Educational Measurement; Internship and Residency; Physical Examination; Humans; Physical Examination/methods; Educational Measurement/methods; Internal Medicine/education; SARS-CoV-2; Pandemics; Female; Male; Virtual Reality
3.
Can Med Educ J ; 15(2): 14-26, 2024 May.
Article in English | MEDLINE | ID: mdl-38827914

ABSTRACT

Purpose: Competency-based medical education relies on feedback from workplace-based assessment (WBA) to direct learning. Unfortunately, WBAs often lack rich narrative feedback and show bias towards Medical Expert aspects of care. Building on research examining interactive assessment approaches, the Queen's University Internal Medicine residency program introduced a facilitated, team-based assessment initiative ("Feedback Fridays") in July 2017, aimed at improving holistic assessment of resident performance on the inpatient medicine teaching units. In this study, we aim to explore how Feedback Fridays contributed to formative assessment of Internal Medicine residents within our current model of competency-based training. Method: A total of 53 residents participated in facilitated, biweekly group assessment sessions during the 2017-2018 academic year. Each session was a 30-minute facilitated assessment discussion with one inpatient team, which included medical students, residents, and their supervising attending. Feedback from the discussion was collected, summarized, and documented for the residents in narrative form in electronic WBA forms by the program's assessment officer. For research purposes, verbatim transcripts of feedback sessions were analyzed thematically. Results: The researchers identified four major themes in the feedback: communication, intra- and inter-personal awareness, leadership and teamwork, and learning opportunities. Although the feedback related to a broad range of activities, it showed strong emphasis on competencies within the intrinsic CanMEDS roles. A clear formative focus in the feedback was another important finding. Conclusions: The introduction of facilitated team-based assessment in the Queen's Internal Medicine program filled an important gap in WBA by providing learners with detailed feedback across all CanMEDS roles and by providing constructive recommendations for identified areas for improvement.




Subject(s)
Clinical Competence; Internal Medicine; Internship and Residency; Qualitative Research; Internal Medicine/education; Humans; Competency-Based Education/methods; Formative Feedback; Leadership; Feedback; Educational Measurement/methods; Communication
4.
South Med J ; 117(6): 342-344, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830589

ABSTRACT

OBJECTIVES: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence. METHODS: Three question banks were searched for keywords regarding out-of-hospital births. Thematic analysis was then used to analyze the results. RESULTS: Forty-seven questions were identified, and of these, 55% indicated absent, inadequate, limited, or irregular prenatal care in the question stem. CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leading to the potential for bias and adverse outcomes. Adjustments to question stems that accurately portray current evidence are recommended.


Subject(s)
Licensure, Medical; Humans; United States; Licensure, Medical/standards; Female; Pregnancy; Prenatal Care/standards; Educational Measurement/methods; Education, Medical/methods; Education, Medical/standards
6.
BMC Med Educ ; 24(1): 487, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698352

ABSTRACT

BACKGROUND: Workplace-based assessment (WBA) used in post-graduate medical education relies on physician supervisors' feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident's performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses' assessment of trainee performance, and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may impact a tool's performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa. METHODS: O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed. RESULTS: A total of 179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement response and frequency of concerns were correlated (r = 0.627, P < 0.001). CONCLUSIONS: Consistent with the original study, the findings demonstrated strong evidence for validity. However, the number of forms collected was less than expected. Exit interviews identified factors impacting form completion, which included clinical workloads and interprofessional dynamics.
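
The abstract does not say how the four-form reliability was derived; one standard way to relate single-form and multi-form reliability is the Spearman-Brown relation, sketched below together with a Pearson correlation like the reported r = 0.627 (all data synthetic, variable names hypothetical):

```python
import numpy as np

def spearman_brown(single_form_rel: float, k: int) -> float:
    """Reliability of the mean of k parallel forms."""
    return k * single_form_rel / (1 + (k - 1) * single_form_rel)

# If four forms per resident give 0.82, the implied single-form reliability
# (inverse Spearman-Brown with k = 4):
r1 = 0.82 / (4 - 3 * 0.82)
print(f"implied single-form reliability ~ {r1:.2f}")
print(f"projected reliability at k = 4: {spearman_brown(r1, 4):.2f}")

# Pearson correlation between hypothetical global judgement responses and
# frequency-of-concern ratings for 30 residents
rng = np.random.default_rng(0)
judgement = rng.normal(4.0, 0.5, 30)
concerns = 0.6 * judgement + rng.normal(0, 0.4, 30)
r = np.corrcoef(judgement, concerns)[0, 1]
print(f"r = {r:.3f}")
```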


Subject(s)
Clinical Competence; Internship and Residency; Psychometrics; Humans; Reproducibility of Results; Female; Male; Educational Measurement/methods; Ontario; Internal Medicine/education
8.
MedEdPORTAL ; 20: 11401, 2024.
Article in English | MEDLINE | ID: mdl-38716162

ABSTRACT

Introduction: Vascular anomalies are a spectrum of disorders, including vascular tumors and malformations, that often require multispecialty care. The rarity and variety of these lesions make diagnosis, treatment, and management challenging. Despite the recognition of the medical complexity and morbidity associated with vascular anomalies, there is a general lack of education on the subject for pediatric primary care and subspecialty providers. A needs assessment and the lack of an available standardized teaching tool presented an opportunity to create an educational workshop for pediatric trainees using the POGIL (process-oriented guided inquiry learning) framework. Methods: We developed a 2-hour workshop consisting of an introductory didactic followed by small- and large-group collaboration and case-based discussion. The resource included customizable content for learning assessment and evaluation. Residents completed pre- and posttest assessments of content and provided written evaluations of the teaching session. Results: Thirty-four learners in pediatrics participated in the workshop. Session evaluations were positive, with Likert responses of 4.6-4.8 out of 5 on all items. Pre- and posttest comparisons of four content questions showed no overall statistically significant changes in correct response rates. Learners indicated plans to use the clinical content in their practice and particularly appreciated the interactive teaching forum and the comprehensive overview of vascular anomalies. Discussion: Vascular anomalies are complex, potentially morbid, and often lifelong conditions; multispecialty collaboration is key to providing comprehensive care for affected patients. This customizable resource offers a framework for trainees in pediatrics to appropriately recognize, evaluate, and refer patients with vascular anomalies.


Subject(s)
Hemangioma; Internship and Residency; Pediatrics; Vascular Malformations; Humans; Pediatrics/education; Pediatrics/methods; Internship and Residency/methods; Vascular Malformations/diagnosis; Hemangioma/diagnosis; Teaching; Problem-Based Learning/methods; Educational Measurement/methods; Education, Medical, Graduate/methods; Curriculum
9.
S Afr Fam Pract (2004) ; 66(1): e1-e15, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38708750

ABSTRACT

BACKGROUND: Learning portfolios (LPs) provide evidence of workplace-based assessment (WPBA) in clinical settings. The educational impact of LPs has been explored in high-income countries, but the use of portfolios and the types of assessments used for and of learning have not been adequately researched in sub-Saharan Africa. This study investigated the evidence of learning in registrars' LPs and the influence of the training district and year of training on assessments. METHODS: A cross-sectional study evaluated 18 Family Medicine registrars' portfolios from study years 1-3 across five decentralised training sites affiliated with the University of the Witwatersrand. Descriptive statistics were calculated for the portfolio and quarterly assessment (QA) scores and self-reported clinical skills competence levels. The competence levels obtained from the portfolios and university records served as proxy measures for registrars' knowledge and skills. RESULTS: Across training years, total LP median scores ranged from 59.9 to 81.0 and QA median scores from 61.4 to 67.3. Across training districts, total LP median scores ranged from 62.1 to 83.5 and QA median scores from 62.0 to 67.5. Registrars' competence levels across skill sets did not meet the required standards. Higher skills competence levels were reported in the women's health, child health, emergency care, clinical administration, and teaching and learning domains. CONCLUSION: The training district and training year influence WPBA effectiveness. Ongoing faculty development and registrar support are essential for WPBA. Contribution: This study contributes to the ongoing discussion of how to utilise WPBA in resource-constrained sub-Saharan settings.


Subject(s)
Clinical Competence; Educational Measurement; Family Practice; Workplace; Humans; Cross-Sectional Studies; Family Practice/education; Educational Measurement/methods; Female; Male; South Africa; Learning; Adult
10.
JAMA Netw Open ; 7(5): e2410127, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38713464

ABSTRACT

Importance: Board certification can have broad implications for candidates' career trajectories, and prior research has found sociodemographic disparities in pass rates. Barriers in the format and administration of the oral board examinations may disproportionately affect certain candidates. Objective: To characterize oral certifying examination policies and practices of the 16 Accreditation Council for Graduate Medical Education (ACGME)-accredited specialties that require oral examinations. Design, Setting, and Participants: This cross-sectional study was conducted from March 1 to April 15, 2023, using data on oral examination practices and policies (examination format, dates, and setting; lactation accommodations; and accommodations for military deployment, family emergency, or medical leave) as well as the gender composition of the specialties' boards of directors, obtained from websites, telephone calls, and email correspondence with certifying specialists. The percentages of female residents and of residents from racial and ethnic backgrounds historically underrepresented in medicine (URM) in each specialty as of December 31, 2021, were obtained from the Graduate Medical Education 2021 to 2022 report. Main Outcomes and Measures: For each specialty, accommodation scores were measured by a modified objective scoring system (score range: 1-13, with higher scores indicating more accommodations). Poisson regression was used to assess the association between accommodation score and the diversity of residents in that specialty, as measured by the percentages of female and URM residents. Linear regression was used to assess whether the gender diversity of a specialty's board of directors was associated with accommodation scores. Results: Included in the analysis were 16 specialties with a total of 46 027 residents (26 533 males [57.6%]) and 233 members of boards of directors (152 males [65.2%]). The mean (SD) total accommodation score was 8.28 (3.79), and the median (IQR) score was 9.25 (5.00-12.00). No association was found between test accommodation score and the percentage of female or URM residents. However, for each 1-point increase in the test accommodation score, the relative risk that a resident was female was 1.05 (95% CI, 0.96-1.16), and the relative risk that a resident was URM was 1.04 (95% CI, 1.00-1.07). An association was found between the percentage of female board members and the accommodation score: for each 10% increase in the percentage of board members who were female, the accommodation score increased by 1.20 points (95% CI, 0.23-2.16 points; P = .03). Conclusions and Relevance: This cross-sectional study found considerable variability in oral board examination accommodations among ACGME-accredited specialties, highlighting opportunities for improvement and standardization. Promoting diversity in leadership bodies may lead to greater accommodations for examinees in extenuating circumstances.
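
Relative risks with confidence intervals, as reported here, are commonly estimated with modified Poisson regression (a Poisson GLM with robust standard errors) when the outcome is binary; whether the authors used exactly this variant is not stated. A sketch on synthetic data with hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"accommodation_score": rng.integers(1, 14, 200)})  # 1-13
# Simulate a binary outcome (e.g., resident is URM) with a mild score effect
p = 1 / (1 + np.exp(-(-2.5 + 0.04 * df["accommodation_score"])))
df["urm"] = rng.binomial(1, p)

# Poisson GLM with robust (HC0) errors: exponentiated coefficients are RRs
model = smf.glm("urm ~ accommodation_score", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(model.params["accommodation_score"])
ci = np.exp(model.conf_int().loc["accommodation_score"])
print(f"RR per 1-point increase = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```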


Subject(s)
Certification; Humans; Cross-Sectional Studies; Female; Male; Certification/statistics & numerical data; United States; Specialty Boards/statistics & numerical data; Educational Measurement/statistics & numerical data; Educational Measurement/methods; Education, Medical, Graduate/statistics & numerical data; Medicine/statistics & numerical data; Adult
11.
BMC Med Educ ; 24(1): 504, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714975

ABSTRACT

BACKGROUND: Evaluation of students' learning strategies can enhance academic support. Few studies have investigated differences in learning strategies between male and female students as well as their impact on United States Medical Licensing Examination® (USMLE) Step 1 and preclinical performance. METHODS: The Learning and Study Strategies Inventory (LASSI) was administered to the classes of 2019-2024 (350 female, 262 male). Students' performance on preclinical first-year (M1) courses, preclinical second-year (M2) courses, and USMLE Step 1 was recorded. An independent t-test evaluated differences between females and males on each LASSI scale. A Pearson product-moment correlation determined which LASSI scales correlated with preclinical performance and USMLE Step 1 examinations. RESULTS: Of the 10 LASSI scales, Anxiety, Attention, Information Processing, Selecting Main Idea, Test Strategies, and Using Academic Resources showed significant differences between genders. Females reported higher levels of Anxiety (p < 0.001), which significantly influenced their performance. While males and females scored similarly in Concentration, Motivation, and Time Management, these scales were significant predictors of performance variation in females. Test Strategies was the largest contributor to performance variation for all students, regardless of gender. CONCLUSION: Gender differences in learning influence performance on USMLE Step 1. Consideration of this study's results will allow for targeted interventions for academic success.
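
The two core analyses named in the methods, independent-samples t-tests per LASSI scale and Pearson correlations with performance, look like this in scipy. Synthetic data; group sizes borrowed from the abstract, everything else hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
anxiety_f = rng.normal(55, 10, 350)   # hypothetical Anxiety scores, females
anxiety_m = rng.normal(50, 10, 262)   # hypothetical Anxiety scores, males

# Independent-samples t-test for one LASSI scale
t, p = stats.ttest_ind(anxiety_f, anxiety_m)
print(f"t = {t:.2f}, p = {p:.4f}")

# Pearson correlation between a scale and simulated Step 1 scores
step1 = 230 - 0.3 * anxiety_f + rng.normal(0, 8, 350)
r, p_r = stats.pearsonr(anxiety_f, step1)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```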


Subject(s)
Education, Medical, Undergraduate; Educational Measurement; Licensure, Medical; Students, Medical; Humans; Female; Male; Educational Measurement/methods; Education, Medical, Undergraduate/standards; Sex Factors; Licensure, Medical/standards; Learning; United States; Academic Performance; Young Adult
12.
BMJ Open Qual ; 13(Suppl 2)2024 May 07.
Article in English | MEDLINE | ID: mdl-38719519

ABSTRACT

INTRODUCTION: Safe practice in medicine and dentistry has been a global priority area in which large knowledge gaps are present. Patient safety strategies aim to prevent unintended harm to patients caused by healthcare practitioners. One component of patient safety is safe clinical practice, and patient safety efforts help ensure safe dental practice through early detection and the limitation of non-preventable errors. A valid and reliable instrument is required to assess dental students' knowledge of patient safety. OBJECTIVE: To determine the psychometric properties of a written test assessing safe dental practice in undergraduate dental students. MATERIAL AND METHODS: A test comprising 42 one-best-answer multiple-choice questions was administered to 52 final-year students of a private dental college. Items were developed according to National Board of Medical Examiners item-writing guidelines. The content of the test was determined in consultation with dental experts (professors or associate professors), who rated each item for language clarity (A: clear; B: ambiguous) and relevance (1: essential; 2: useful but not necessary; 3: not essential). Ethical approval was obtained from the dental college concerned. Statistical analysis was performed in SPSS V.25, comprising descriptive analysis, item analysis, and Cronbach's alpha. RESULTS: The test scores had a reliability (Cronbach's alpha) of 0.722 before and 0.855 after removing 15 items. CONCLUSION: A reliable and valid test was developed that will help assess dental students' knowledge of safe dental practice. This can guide medical educationists in developing or improving patient safety curricula to ensure safe dental practice.
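
Cronbach's alpha, the reliability index used here, is simple to compute from an examinee-by-item score matrix. A minimal sketch (not the authors' SPSS workflow; data simulated):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: examinees x items matrix of 0/1 scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(0, 1, 52)        # 52 final-year students
difficulty = rng.normal(0, 1, 42)     # 42 one-best-answer MCQs
# Simple logistic response model to generate plausible 0/1 responses
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = rng.binomial(1, p_correct)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```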


Subject(s)
Educational Measurement; Patient Safety; Psychometrics; Humans; Psychometrics/instrumentation; Psychometrics/methods; Patient Safety/standards; Patient Safety/statistics & numerical data; Surveys and Questionnaires; Educational Measurement/methods; Educational Measurement/statistics & numerical data; Educational Measurement/standards; Reproducibility of Results; Students, Dental/statistics & numerical data; Students, Dental/psychology; Education, Dental/methods; Education, Dental/standards; Male; Female; Clinical Competence/statistics & numerical data; Clinical Competence/standards
13.
BMC Med Educ ; 24(1): 527, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734603

ABSTRACT

BACKGROUND: High stakes examinations used to credential trainees for independent specialist practice should be evaluated periodically to ensure defensible decisions are made. This study aims to quantify the College of Intensive Care Medicine of Australia and New Zealand (CICM) Hot Case reliability coefficient and evaluate contributions to variance from candidates, cases and examiners. METHODS: This retrospective, de-identified analysis of CICM examination data used descriptive statistics and generalisability theory to evaluate the reliability of the Hot Case examination component. Decision studies were used to project generalisability coefficients for alternate examination designs. RESULTS: Examination results from 2019 to 2022 included 592 Hot Cases, totalling 1184 individual examiner scores. The mean examiner Hot Case score was 5.17 (standard deviation 1.65). The correlation between candidates' two Hot Case scores was low (0.30). The overall reliability coefficient for the Hot Case component consisting of two cases observed by two separate pairs of examiners was 0.42. Sources of variance included candidate proficiency (25%), case difficulty and case specificity (63.4%), examiner stringency (3.5%) and other error (8.2%). To achieve a reliability coefficient of > 0.8 a candidate would need to perform 11 Hot Cases observed by two examiners. CONCLUSION: The reliability coefficient for the Hot Case component of the CICM second part examination is below the generally accepted value for a high stakes examination. Modifications to case selection and introduction of a clear scoring rubric to mitigate the effects of variation in case difficulty may be helpful. Increasing the number of cases and overall assessment time appears to be the best way to increase the overall reliability. Further research is required to assess the combined reliability of the Hot Case and viva components.
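
The decision-study projections mentioned here follow from the reported variance shares: the generalisability coefficient is candidate variance divided by candidate variance plus design-averaged error. A sketch of that arithmetic, assuming case and residual variance average over cases and examiner variance over all observations; the exact partitioning in the study may differ, so the outputs only approximate the published 0.42 and >0.8 figures:

```python
# Variance shares from the abstract
candidate, case, examiner, residual = 0.25, 0.634, 0.035, 0.082

def g_coefficient(n_cases: int, n_examiners_per_case: int = 2) -> float:
    """Projected generalisability coefficient for an alternate design."""
    error = ((case + residual) / n_cases
             + examiner / (n_cases * n_examiners_per_case))
    return candidate / (candidate + error)

for n in (2, 11):
    print(f"{n:>2} cases: E-rho^2 ~= {g_coefficient(n):.2f}")
```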


Subject(s)
Clinical Competence; Critical Care; Educational Measurement; Humans; New Zealand; Australia; Reproducibility of Results; Retrospective Studies; Critical Care/standards; Educational Measurement/methods; Education, Medical, Graduate/standards
14.
J Phys Ther Educ ; 38(2): 133-140, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38758177

ABSTRACT

INTRODUCTION: The Burley Readiness Examination (BRE) for Musculoskeletal (MSK) Imaging Competency assesses physical therapists' baseline MSK imaging competency. Establishing its reliability is essential to its value in determining MSK imaging competency. The purpose of this study was to test the reliability of the BRE for MSK Imaging Competency among physical therapists (PTs) with varying levels of training and education. REVIEW OF LITERATURE: Previous literature supports PTs' use of diagnostic imaging; however, no studies have directly measured their competency. With PTs expanding their scope of practice and professional PT education programs increasing their MSK imaging instruction, assessing competency becomes strategic in determining the future of MSK education and training. SUBJECTS: One hundred twenty-three United States licensed PTs completed the BRE. METHODS: Physical therapists completed the BRE through an online survey platform. A point-biserial correlation (rpb) was calculated for each examination question. Final analyses were based on 140 examination questions. Examination scores were compared using independent-samples t-tests and one-way analysis of variance. Chi-square tests and odds ratios (ORs) assessed the relationship between a passing examination score (≥75%) and the type of training. Reliability of the BRE was assessed using Cronbach's alpha (α). RESULTS: The mean overall examination score was 75.89 ± 8.56%. Seventy PTs (56.9%) obtained a passing score. Physical therapists with additional MSK imaging training, board certification, or residency or fellowship training scored significantly higher (P < .001) than those with only entry-level PT program education. Physical therapists with additional MSK imaging training scored significantly higher (x̄ = 81.07% ± 8.93%) and were almost five times as likely (OR = 4.74, 95% CI [1.95-11.50]) to achieve a passing score as those without. The BRE demonstrated strong internal consistency (Cronbach's α = 0.874). DISCUSSION AND CONCLUSIONS: The BRE was reliable, consistently identifying higher examination scores among those with more MSK imaging training. Training in MSK imaging influenced competency more than other factors. The BRE may be of analytical value to PT professional and postprofessional programs.
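
Two of the analyses named in the methods, a point-biserial correlation per question and an odds ratio for passing by training status, can be sketched as follows (synthetic data; the 2x2 table is invented, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# One question's 0/1 scores for 123 PTs and their simulated total scores
item_correct = rng.binomial(1, 0.7, 123)
total_score = 60 + 10 * item_correct + rng.normal(0, 8, 123)
r_pb, p = stats.pointbiserialr(item_correct, total_score)
print(f"r_pb = {r_pb:.2f}, p = {p:.4f}")

# 2x2 table: rows = extra MSK imaging training (yes/no), cols = pass/fail
table = np.array([[40, 10],
                  [30, 43]])
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```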


Subject(s)
Clinical Competence; Educational Measurement; Physical Therapists; Humans; Clinical Competence/standards; Reproducibility of Results; Physical Therapists/education; Educational Measurement/methods; United States; Female; Male; Musculoskeletal Diseases/diagnostic imaging; Surveys and Questionnaires; Adult; Diagnostic Imaging/standards
15.
J Pak Med Assoc ; 74(4): 730-735, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38751270

ABSTRACT

OBJECTIVE: To explore the reasons for unsuccessful examination attempts during postgraduate clinical training in Pakistan. METHODS: This qualitative, exploratory study was conducted at the Allied Hospital, Faisalabad, Pakistan, from December 1, 2022, to February 25, 2023, and comprised postgraduate trainees from different departments who had at least one unsuccessful examination attempt during their residency programme. Data were collected through recorded direct interviews and subjected to thematic narrative analysis. RESULTS: Of the 14 participants, 10 (71.4%) were male and 4 (28.5%) were female. The maximum number of unsuccessful attempts was 7 (7% of participants), followed by 6 (14%), 4 (7%), 3 (14%), 2 (42%), and 1 (14%). Three main themes emerged, each with subthemes: personal factors, training factors, and exam factors. CONCLUSION: At the start of the residency programme, postgraduate trainees must be provided with adequate guidance, and a support system must be in place during the programme to help them cope with stress during training.


Subject(s)
Education, Medical, Graduate; Internship and Residency; Humans; Female; Male; Pakistan; Education, Medical, Graduate/methods; Qualitative Research; Educational Measurement/methods; Adult; Clinical Competence
16.
Tunis Med ; 102(4): 194-199, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38746957

ABSTRACT

INTRODUCTION: In intensive care medicine (ICM), the use of Patient-Management Problems (PMPs) remains limited, and no student feedback on them is available. AIM: To compare PMPs with clinical cases (CC) assessed through grouped multiple-choice questions (MCQs-CC) as tools for appraising the knowledge and competencies of ICM students, and to gather the students' perceptions of this experience. METHODS: This was a cross-sectional randomized trial including external students in the third year of the second cycle of medical studies (3rd-SCMS) during their ICM externship. Each participant underwent two random draws: the first for the assessment tool to start with (PMP or CC) and the second for the order of passage for the PMPs. Two PMPs and two grouped MCQs-CC were prepared, and a satisfaction questionnaire was distributed. The primary endpoint was the effect of each assessment tool on the students' decision-making process, focusing on the relevance of the elements provided by each technique, the involvement experienced, and the difficulty felt. The secondary endpoint was the score obtained with each tool. RESULTS: Twenty students were included. All participants had previous experience with PMPs, and only nine were familiar with grouped MCQs-CC. PMP scores were 14.9 for the first theme and 15.8 for the second. The median grouped MCQs-CC score was 14 [12-16] for both themes. Scores did not differ between the two techniques; for the first theme, the two scores were negatively correlated (r = -0.58, p = 0.007). Students reported greater satisfaction with the PMP evaluation (p < 10⁻³): the elements provided by the PMPs were more relevant to the decision-making process (p < 10⁻³), involvement was felt more strongly with PMPs (p < 10⁻³), and difficulty was felt more with the CCs (p < 10⁻³). The effect of the PMPs was significant on clinical reasoning (n = 36), self-assessment (n = 38), problem solving (n = 40), and decision making (n = 39). Students strongly recommended the PMP as an evaluation tool in ICM (p < 10⁻³). CONCLUSION: Scores were comparable between the two techniques. The students' positive perception of PMPs encourages their wider adoption, and teacher training must be strengthened accordingly.


Subject(s)
Clinical Competence; Critical Care; Students, Medical; Humans; Cross-Sectional Studies; Students, Medical/psychology; Clinical Competence/standards; Critical Care/standards; Critical Care/methods; Male; Female; Educational Measurement/methods; Surveys and Questionnaires; Adult; Feasibility Studies; Young Adult
17.
Rev Col Bras Cir ; 51: e20243749, 2024.
Article in English, Portuguese | MEDLINE | ID: mdl-38747884

ABSTRACT

The article discusses the evolution of the Brazilian College of Surgeons (CBC) specialist title exam, highlighting the importance of evaluating not only candidates' theoretical knowledge but also their practical skills and ethical behavior. The test was instituted in 1971, initially with only a written phase; the oral practical test was added with the 13th edition in 1988. In 2022, the assessment process was improved by introducing simulated stations into the practical test, with the aim of assessing practical and communication skills as well as clinical reasoning, to guarantee excellence in the assessment of surgical training. This study aims to describe candidates' performance on the Specialist Title Test over the last five years and to compare results between the candidates' different surgical training backgrounds. The results obtained by candidates from the various categories enrolled in the 2018 to 2022 editions were analyzed. There was a clear and statistically significant difference between doctors who had completed three years of residency recognized by the Ministry of Education and the other categories of candidates for the Specialist Title.


Subject(s)
Educational Measurement; Brazil; Humans; Educational Measurement/methods; Clinical Competence; Surgeons; Time Factors; Societies, Medical; Specialties, Surgical/education
18.
BMC Med Educ ; 24(1): 502, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724925

ABSTRACT

INTRODUCTION: The Clinical Skill Training Center (CSTC) is the first environment in which third-year medical students learn clinical skills after passing the basic sciences. Consumer-based evaluation is one way to improve such a center in collaboration with its consumers. This study aimed to develop a consumer-oriented evaluation tool for the CSTC for use with medical students. METHOD: This was a mixed-methods study: a qualitative first phase to develop the evaluation tool and a second phase to appraise it. In the first phase, a literature review (the divergent step) yielded a comprehensive list of problems in the field of CSTCs in medical schools. In the convergent step, this list was compared against clinical education standards and Scriven's values. In the second phase, the tool was appraised by a scientific expert committee. Validity was measured by determining the content validity ratio (CVR) and content validity index (CVI); face and content validity were established through the approval of a group of specialists. RESULTS: The findings took the form of four questionnaires covering clinical instructors, pre-clinical medical students, and interns. All items were designed as 5-point Likert scales. The main evaluation domains were the objectives and content of training courses, implementation, facilities and equipment, and the environment and indoor space. To examine long-term effects, a dedicated evaluation form was designed for interns. CONCLUSION: The consumer evaluation tool showed good reliability and trustworthiness, is suitable for use in the CSTC, and its use can improve the effectiveness of clinical education activities.
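
The abstract names CVR and CVI without detailing the computation; Lawshe's formulas are the standard choice, so the sketch below assumes them. Expert counts are hypothetical:

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2), in [-1, 1]."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical counts of experts (out of 10) rating each item "essential"
essential_counts = [9, 8, 10, 6, 7]
cvrs = [content_validity_ratio(n, 10) for n in essential_counts]

# One common convention: CVI as the mean CVR of the retained items
cvi = sum(cvrs) / len(cvrs)
print([f"{c:.2f}" for c in cvrs], f"CVI = {cvi:.2f}")
```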


Subject(s)
Clinical Competence; Program Evaluation; Students, Medical; Humans; Clinical Competence/standards; Education, Medical, Undergraduate/standards; Surveys and Questionnaires; Educational Measurement/methods
19.
BMC Med Educ ; 24(1): 540, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750433

ABSTRACT

BACKGROUND: Situational Judgment Tests (SJTs) are commonly used in medical school admissions. However, it has been consistently found that native speakers tend to score higher on SJTs than non-native speakers, which is particularly problematic in the admission context because it risks limiting fairness. Besides the type of SJT, cognitive load theory suggests that awareness of a time limit may also play a role in subgroup differences. This study examined the influence of SJT type and awareness of time limit against the background of language proficiency in a quasi-high-stakes setting. METHODS: Participants (N = 875), applicants and students in healthcare-related study programs, completed an online study that involved two SJTs: one with a text-based stimulus and response format (HAM-SJT) and another with a video-animated stimulus and media-supported response format (Social Shapes Test, SST). They were randomly assigned to a test condition in which they either were or were not informed about a time limit. In a multilevel model analysis, we examined the main effects and interactions of the predictors (test type, language proficiency, and awareness of time limit) on test performance (overall response percentage). RESULTS: There were significant main effects on overall test performance for language proficiency in favor of native speakers and for awareness of time limit in favor of being aware of the time limit. Furthermore, an interaction between language proficiency and test type was found, indicating that subgroup differences are smaller for the animated SJT than for the text-based SJT. No interaction effects on overall test performance were found that included awareness of time limit. CONCLUSION: An SJT with video-animated stimuli and a media-supported response format can reduce subgroup differences in overall test performance between native and non-native speakers in a quasi-high-stakes setting. Awareness of time limit is equally important for high and low performance, regardless of language proficiency or test type.
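
A multilevel model of the kind described, with repeated test scores nested in participants and a language-by-test-type interaction, can be sketched with statsmodels MixedLM. All data and variable names below are synthetic assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),       # each takes two SJTs
    "test_type":   np.tile(["text", "animated"], n),
    "native":      np.repeat(rng.binomial(1, 0.6, n), 2),
    "time_aware":  np.repeat(rng.binomial(1, 0.5, n), 2),
})
# Simulate scores with a native-speaker advantage on the text-based SJT
df["score"] = (60 + 5 * df["native"] + 2 * df["time_aware"]
               + 3 * df["native"] * (df["test_type"] == "text")
               + rng.normal(0, 5, 2 * n))

# Random intercept per participant; fixed effects and interaction as predictors
model = smf.mixedlm("score ~ native * test_type + time_aware",
                    df, groups=df["participant"]).fit()
print(model.summary())
```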


Subject(s)
Judgment; Humans; Female; Male; Young Adult; Adult; Awareness; School Admission Criteria; Educational Measurement/methods; Language; Students, Medical/psychology; Schools, Medical
20.
GMS J Med Educ ; 41(2): Doc20, 2024.
Article in English | MEDLINE | ID: mdl-38779693

ABSTRACT

As medical educators grapple with the consistent demand for high-quality assessments, the integration of artificial intelligence presents a novel solution. This how-to article delves into the mechanics of employing ChatGPT for generating Multiple Choice Questions (MCQs) within the medical curriculum. Focusing on the intricacies of prompt engineering, we elucidate the steps and considerations imperative for achieving targeted, high-fidelity results. The article presents varying outcomes based on different prompt structures, highlighting the AI's adaptability in producing questions of distinct complexities. While emphasizing the transformative potential of ChatGPT, we also spotlight challenges, including the AI's occasional "hallucination", underscoring the importance of rigorous review. This guide aims to furnish educators with the know-how to integrate AI into their assessment creation process, heralding a new era in medical education tools.
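
For readers who want to try this, a hedged sketch of such a prompt via the OpenAI Python client (v1 interface); the model name and prompt wording are illustrative assumptions, not the article's materials:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An engineered prompt: role, topic, difficulty, format, and item-writing rules
prompt = (
    "You are an item writer for a medical school exam. Write one single-best-"
    "answer MCQ on beta-blocker pharmacology at USMLE Step 1 difficulty. "
    "Provide a clinical vignette stem, five options (A-E), the correct answer, "
    "and a one-sentence rationale per option. Follow NBME item-writing rules: "
    "no 'all of the above'; options must be homogeneous and plausible."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute as needed
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review output before use
```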


Subject(s)
Artificial Intelligence; Curriculum; Education, Medical; Educational Measurement; Humans; Education, Medical/methods; Educational Measurement/methods