Results 1 - 20 of 72
1.
Cureus ; 16(6): e62358, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39006591

ABSTRACT

Introduction: The American Board of Surgery (ABS) plays a pivotal role in certifying surgeons in the United States, with the American Board of Surgery In-Training Examination (ABSITE) serving as a critical assessment tool for general surgery residents aspiring to certification. The aim of this study is to compare the performance of international medical graduates (IMGs) to their domestic counterparts and to assess the impact of different medical degrees on ABSITE scores. Notably, ABSITE scores often dictate the trajectory of a surgical career, including opportunities for fellowship placements in specialized fields such as plastic surgery. Methods: This study focused on general surgery residents enrolled at Marshall University from 2014 to 2022. Data encompassing ABSITE scores, TrueLearn quiz percentages, and TrueLearn mock exam results were collected for analysis. Descriptive statistics summarized sample characteristics, and linear mixed models were employed to account for correlations. Statistical analyses were conducted using the Statistical Analysis System (SAS) (version 9.4; SAS Institute Inc., Cary, NC, USA), with significance defined by a two-sided test with p < 0.05. Results: Among the 48 participants, comprising 24 non-international medical graduates (nIMGs) and 24 IMGs, IMGs demonstrated superior performance across various metrics. They exhibited higher quiz percentages (67% vs. 61%; p = 0.0029), mock Exam 1 scores (64% vs. 58%; p = 0.0021), mock Exam 2 scores (66% vs. 58%; p = 0.0015), ABSITE scores (560 vs. 505; p = 0.010), and ABSITE percentages (74% vs. 68%; p = 0.0077) compared to nIMGs. A comparison of Doctor of Osteopathic Medicine (DO) and Doctor of Medicine (MD) participants revealed no statistically significant differences in performance metrics, highlighting the comparability of these medical degrees in the context of ABSITE scores and related assessments. Discussion/conclusion: This study underscores the superior performance of IMGs over nIMGs on the ABSITE, shedding light on the critical role of ABSITE scores in shaping surgical careers. Higher scores correlate with enhanced opportunities for coveted fellowship placements, particularly in specialized fields like plastic surgery. Understanding these dynamics is crucial for resident training and for navigating the competitive landscape of surgical sub-specialization. Future research can delve deeper into the factors influencing ABSITE performance, thereby facilitating the development of targeted interventions to support residents in achieving their career aspirations.

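The linear mixed-model comparison described in this entry can be illustrated with a minimal sketch in Python using statsmodels; the original analysis was done in SAS 9.4, and the input file and column names (resident_id, img_status, year, absite_score) below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a linear mixed model comparing ABSITE scores between
# IMG and nIMG residents, with a random intercept per resident to account
# for repeated annual measurements. Column names and the input file are
# hypothetical; the original analysis was performed in SAS 9.4.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("absite_scores.csv")  # columns: resident_id, img_status, year, absite_score

# A random intercept for each resident handles within-resident correlation
# across exam years; img_status is a binary indicator (IMG vs. nIMG).
model = smf.mixedlm("absite_score ~ C(img_status) + year",
                    data=df,
                    groups=df["resident_id"])
result = model.fit()
print(result.summary())  # fixed-effect estimate for IMG vs. nIMG and its p-value
```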
2.
J Hand Surg Glob Online ; 6(2): 164-168, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38903829

ABSTRACT

Purpose: Currently, there is a paucity of prior investigations examining applications for artificial intelligence (AI) in upper-extremity (UE) surgical education. The purpose of this investigation was to assess the performance of a novel AI tool (ChatGPT) on UE questions from the Orthopaedic In-Training Examination (OITE) and to compare its performance with the examination performance of hand surgery residents. Methods: We selected questions from the 2020-2022 OITEs that focused on the hand and UE as well as the shoulder and elbow content domains. These questions were divided into two categories: those with text-only prompts (text-only questions) and those that included supplementary images or videos (media questions). Two authors (B.K.F. and G.S.M.) converted the accompanying media into text-based descriptions. Included questions were entered into ChatGPT (version 3.5) to generate responses. Each OITE question was entered into ChatGPT three times: (1) as an open-ended prompt requesting a free-text response; (2) as a multiple-choice prompt without justification; and (3) as a multiple-choice prompt with justification. We referred to the OITE scoring guide for each year to compare the percentage of correct AI responses with the percentage of correct resident responses. Results: A total of 102 UE OITE questions were included; 59 were text-only questions, and 43 were media-based. ChatGPT correctly answered 46 (45%) of 102 questions using the multiple-choice-without-justification prompt (42% for text-only and 44% for media questions). By comparison, postgraduate year 1 orthopaedic residents achieved an average score of 51% correct, and postgraduate year 5 residents answered 76% of the same questions correctly. Conclusions: ChatGPT answered fewer UE OITE questions correctly than hand surgery residents at all training levels. Clinical relevance: Further development of novel AI tools may be necessary if this technology is to have a role in UE education.

3.
JMIR Med Educ ; 10: e52207, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38825848

ABSTRACT

Background: The relationship between educational outcomes and the use of web-based clinical knowledge support systems in teaching hospitals remains unknown in Japan. A previous study on this topic may have been affected by recall bias because it relied on a self-reported questionnaire. Objective: We aimed to explore the relationship between the use of the Wolters Kluwer UpToDate clinical knowledge support system in teaching hospitals and residents' General Medicine In-Training Examination (GM-ITE) scores, objectively evaluating the relationship between the total number of UpToDate hospital use logs and GM-ITE scores. Methods: This nationwide cross-sectional study included postgraduate year-1 and -2 residents who had taken the examination in the 2020 academic year. Hospital-level information was obtained from published web pages, and UpToDate hospital use logs were provided by Wolters Kluwer. We analyzed 215 teaching hospitals with at least 5 GM-ITE examinees and hospital use logs from 2017 to 2019. Results: The study population consisted of 3013 residents from the 215 eligible teaching hospitals. Residents at high-use hospitals had significantly higher GM-ITE scores than residents at low-use hospitals (mean 26.9, SD 2.0 vs mean 26.2, SD 2.3; P=.009; Cohen d=0.35, 95% CI 0.08-0.62). GM-ITE scores were significantly correlated with the total number of hospital use logs (Pearson r=0.28; P<.001). The multilevel analysis revealed a positive association between the total number of logs divided by the number of hospital physicians and the GM-ITE scores (estimated coefficient=0.36, 95% CI 0.14-0.59; P=.001). Conclusions: The findings suggest that the development of residents' clinical reasoning abilities through UpToDate use is associated with higher GM-ITE scores. Greater use of UpToDate may thus lead physicians and residents in high-use hospitals to increase the implementation of evidence-based medicine, leading to better educational outcomes.


Subject(s)
Hospitals, Teaching; Internet; Internship and Residency; Humans; Internship and Residency/statistics & numerical data; Japan; Cross-Sectional Studies; Clinical Competence/statistics & numerical data; Educational Measurement; Female; Male; Education, Medical, Graduate; Adult
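The hospital-level correlation and effect size reported in this entry can be illustrated with a minimal scipy sketch; the input file, column names, and the high-use/low-use indicator below are hypothetical assumptions, not the study's dataset.

```python
# Sketch of the hospital-level analysis: Pearson correlation between total
# UpToDate use logs and mean GM-ITE score, plus Cohen's d between high-use
# and low-use hospitals. Column names and the input file are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

hosp = pd.read_csv("hospital_level.csv")  # columns: total_logs, mean_gmite, high_use (0/1)

# Pearson correlation between total use logs and mean GM-ITE score
r, p = stats.pearsonr(hosp["total_logs"], hosp["mean_gmite"])
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# Cohen's d for high-use vs. low-use hospitals, using the pooled SD
high = hosp.loc[hosp["high_use"] == 1, "mean_gmite"]
low = hosp.loc[hosp["high_use"] == 0, "mean_gmite"]
pooled_sd = np.sqrt(((len(high) - 1) * high.var(ddof=1) +
                     (len(low) - 1) * low.var(ddof=1)) /
                    (len(high) + len(low) - 2))
d = (high.mean() - low.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```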
4.
J Med Educ Curric Dev ; 11: 23821205241250145, 2024.
Article in English | MEDLINE | ID: mdl-38706938

ABSTRACT

Objectives: This study aimed to assess the impact of a sports medicine (SM) track on the musculoskeletal (MSK) knowledge of family medicine (FM) residents. In-training examination (ITE) results were used to compare the MSK knowledge of FM residents with and without SM track participation. Methods: A single-center, retrospective study was completed on 85 FM residents from the 2018 to 2024 graduating classes who completed the ITE from 2017 to 2021. Residents were categorized by participation in the SM track, in which half a day of FM continuity clinic per week is replaced with an SM clinic supervised by a fellowship-trained SM physician. ITE scores throughout training were compared between the 2 groups using mixed-effects regression. Results: ITE MSK scores increased among both SM track participants (+77 points/year, p = .001) and nonparticipants (+39 points/year, p = .001) throughout their training. By postgraduate year 3, SM track participants performed significantly better on the MSK portion of the ITE (+87 points compared with nonparticipants, p = .045). No significant difference in total ITE scores was seen between groups. Conclusions: Our data demonstrate that participation in an SM track is associated with a greater increase in MSK knowledge as measured by the ITE, suggesting that an SM track may provide FM residents with a better understanding of MSK conditions.

5.
HCA Healthc J Med ; 5(1): 49-54, 2024.
Article in English | MEDLINE | ID: mdl-38560390

ABSTRACT

Background: We endeavored to create an evidence-based curriculum to improve general surgery residents' fund of knowledge. Global and resident-specific interventions were employed to this end and were monitored via weekly multiple-choice question results and American Board of Surgery In-Training Examination (ABSITE) performance. Methods: This study was performed prospectively over a 2-year period. A structured textbook review with testing was implemented for all residents. A focused textbook question-writing assignment and a Surgical Council on Resident Education (SCORE)-based individualized learning plan (ILP) were implemented for residents scoring below the 35th percentile on the ABSITE. Results: Curriculum implementation resulted in a statistically significant reduction in the number of residents scoring below the 35th percentile, from 50% to 30.8% (P = .023). All residents initially scoring below the 35th percentile were successfully remediated over the study period. Average overall program ABSITE percentile scores increased from 38.5% to 51.4% over the 2-year period. Conclusion: Structured textbook review and testing, combined with a question-writing assignment and a SCORE-focused ILP, successfully remediated residents scoring below the 35th percentile and improved the residency program's overall ABSITE performance.

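The before-and-after comparison of the proportion of residents below the 35th percentile (50% vs. 30.8%) could be checked with a two-proportion test, sketched below; the abstract reports only percentages and does not state which test was used, so the counts here are illustrative placeholders rather than the study's raw data.

```python
# Sketch of a two-proportion z-test comparing the share of residents scoring
# below the 35th ABSITE percentile before and after the curriculum change.
# The counts below are illustrative placeholders, not the study's raw data.
from statsmodels.stats.proportion import proportions_ztest

below_35th = [13, 8]    # hypothetical counts of residents below the 35th percentile
n_residents = [26, 26]  # hypothetical cohort sizes before and after the curriculum

stat, p_value = proportions_ztest(count=below_35th, nobs=n_residents)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```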
6.
JMIR Med Educ ; 10: e54401, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38421691

ABSTRACT

BACKGROUND: Medical students in Japan undergo a 2-year postgraduate residency program to acquire clinical knowledge and general medical skills. The General Medicine In-Training Examination (GM-ITE) assesses postgraduate residents' clinical knowledge. A clinical simulation video (CSV) may assess learners' interpersonal abilities. OBJECTIVE: This study aimed to evaluate the relationship between GM-ITE scores and resident physicians' diagnostic skills by having them watch a CSV, and to explore resident physicians' perceptions of the CSV's realism, educational value, and impact on their motivation to learn. METHODS: The participants included 56 postgraduate medical residents who took the GM-ITE between January 21 and January 28, 2021; watched the CSV; and then provided a diagnosis. The CSV and GM-ITE scores were compared, and the validity of the simulations was examined using discrimination indices, wherein ≥0.20 indicated high discriminatory power and >0.40 indicated a very good measure of the subject's qualifications. Additionally, we administered an anonymous questionnaire to ascertain participants' views on the realism and educational value of the CSV and its impact on their motivation to learn. RESULTS: Of the 56 participants, 6 (11%) provided the correct diagnosis, and all were from the second postgraduate year. All domains indicated high discriminatory power. In the anonymous follow-up survey, 12 (52%) respondents found the CSV format more suitable than the conventional GM-ITE for assessing clinical competence, 18 (78%) affirmed the realism of the video simulation, and 17 (74%) indicated that the experience increased their motivation to learn. CONCLUSIONS: The findings indicated that CSV modules simulating real-world clinical examinations were successful in assessing examinees' clinical competence across multiple domains. The study demonstrated that the CSV not only augmented the assessment of diagnostic skills but also positively impacted learners' motivation, suggesting a multifaceted role for simulation in medical education.


Subject(s)
Clinical Competence; Learning; Humans; Cross-Sectional Studies; Educational Status; Motivation
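The discrimination indices cited in this entry (≥0.20 high, >0.40 very good) are conventionally computed as the difference in proportion correct between upper and lower scoring groups; the sketch below uses that standard definition with simulated data, and is not the authors' actual scoring code.

```python
# Sketch of a classical item discrimination index: proportion correct in the
# top-scoring group minus proportion correct in the bottom-scoring group
# (commonly the top and bottom 27% by total score). Data here are simulated.
import numpy as np

def discrimination_index(item_correct, total_scores, fraction=0.27):
    """item_correct: 1/0 per examinee for one item; total_scores: overall exam scores."""
    order = np.argsort(total_scores)
    k = max(1, int(len(total_scores) * fraction))
    lower, upper = order[:k], order[-k:]
    return item_correct[upper].mean() - item_correct[lower].mean()

rng = np.random.default_rng(0)
total = rng.normal(50, 10, size=56)                           # hypothetical GM-ITE totals
item = (total + rng.normal(0, 8, size=56) > 50).astype(int)   # hypothetical item responses
d = discrimination_index(item, total)
print(f"Discrimination index = {d:.2f}")  # >= 0.20 suggests good discrimination
```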
7.
J Surg Educ ; 81(1): 56-63, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38036385

ABSTRACT

OBJECTIVE: The American Board of Surgery In-Training Examination (ABSITE) was designed as a low-stakes medical knowledge examination for US general surgery residency programs. In practice, however, the exam has been used for higher-stakes purposes, such as resident promotion or remediation and fellowship selection. Several studies have demonstrated the efficacy of ABSITE preparation resources, but best practices for ABSITE preparation and national preparatory habits are currently unknown. The aim of this work was to determine current residency programs' strategies for ABSITE preparation. DESIGN: We distributed an electronic survey to program directors or program coordinators of US general surgery programs asking them to anonymously report program ABSITE educational practices and ABSITE scores. We analyzed the proportion of responses using descriptive statistics and compared the effect of various strategies using Mann-Whitney tests for nonparametric data. An average ABSITE percentile score was calculated for each residency based on program self-reported scores. SETTING: Association of Program Directors in Surgery (APDS) listserv. PARTICIPANTS: US general surgery residency programs on the listserv at the time of distribution (n = 278). RESULTS: The response rate was 24% (66/278); 41 programs (62.1%) identified as university-affiliated, and 25 (37.9%) were community-based. Median intern class size was 8 (range: 3-14), including preliminary interns. The average ABSITE percentile score was 52.8% (range 36.9%-67.6%). There were no significant differences in ABSITE scores based on affiliation or program size. Educational resources utilized for ABSITE preparation included SCORE (89.3%), Q-banks (50%), and surgical textbooks (25.8%). The majority (56.1%) of programs reported using a year-long curriculum for ABSITE preparation, and 66.6% used a time-limited curriculum completed in the months immediately prior to the ABSITE. Most programs reported that ABSITE scores were a low priority (63.6%) or not a priority (13.6%). Programs with a year-long ABSITE curriculum had higher scores than programs without one (53.9% vs 48.5%, p < 0.01), whereas programs using a time-limited curriculum had lower scores than programs without one (51.3% vs 56.1%, p < 0.01). CONCLUSION: General surgery programs use a variety of strategies to prepare residents for the ABSITE. Despite reporting that they use ABSITE scores for a variety of high-stakes purposes, including evaluation for promotion and as a predictor of preparedness for the ABS qualifying examination (QE), many programs consider ABSITE scores a low priority. A year-long focused curriculum was the only strategy correlated with increased scores, which may reflect the value of encouraging consistent studying and spaced repetition. Additional work is needed to guide programs in the optimal use of ABSITE scores for remediation and resident evaluation, and in understanding how ABSITE preparatory strategies correlate with clinical performance.


Subject(s)
General Surgery; Internship and Residency; Humans; United States; Education, Medical, Graduate; Educational Measurement; Curriculum; Surveys and Questionnaires; General Surgery/education
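The Mann-Whitney comparison of program-level ABSITE percentile scores by curriculum strategy described in this entry might look like the sketch below; the survey file and column names are hypothetical, and this is an illustration of the test, not the authors' analysis code.

```python
# Sketch of a Mann-Whitney U comparison of mean program ABSITE percentile
# scores between programs with and without a year-long curriculum.
# Column names and the input file are hypothetical survey data.
import pandas as pd
from scipy.stats import mannwhitneyu

programs = pd.read_csv("program_survey.csv")  # columns: mean_absite_pct, year_long_curriculum (0/1)

with_yl = programs.loc[programs["year_long_curriculum"] == 1, "mean_absite_pct"]
without_yl = programs.loc[programs["year_long_curriculum"] == 0, "mean_absite_pct"]

u_stat, p_value = mannwhitneyu(with_yl, without_yl, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
print(f"median with year-long = {with_yl.median():.1f}, without = {without_yl.median():.1f}")
```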
8.
Postgrad Med J ; 99(1177): 1197-1204, 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37474744

ABSTRACT

PURPOSE: A regional quota program (RQP) was introduced in Japan to ameliorate the urban-rural imbalance of physicians. Despite concerns about the learning abilities of RQP graduates, the relationship between the RQP and practical clinical competency after the start of clinical residency has not been evaluated. METHODS: We conducted a nationwide cross-sectional study to assess the association between the RQP and practical clinical competency based on General Medicine In-Training Examination (GM-ITE) scores. We compared overall and category-level GM-ITE results between RQP graduates and other resident physicians. The relationship between the RQP and scores was examined using multilevel linear regression analysis. RESULTS: There were 4978 other resident physicians and 1119 RQP graduates among the 6097 participants from 593 training hospitals. Being younger; preferring internal, general, or emergency medicine; managing fewer inpatients; and having fewer ER shifts were all characteristics of RQP graduates. In multilevel multivariable linear regression analysis, there was no significant association between RQP graduation and total GM-ITE scores (coefficient: 0.26; 95% confidence interval: -0.09, 0.61; P = .15). The associations of RQP graduation with GM-ITE scores in each category and specialty were not clinically relevant. However, in the same multivariable model, total GM-ITE scores showed strong positive associations with younger age and GM preference, both of which were significantly more common among RQP graduates. CONCLUSION: Practical clinical competency, evaluated based on GM-ITE scores, showed no clinically relevant differences between RQP graduates and other resident physicians. Key messages: What is already known on this topic: Many countries offer unique admission processes to medical schools and special undergraduate programs to increase the supply of physicians in rural areas. Concerns have been raised about the motivation, learning capabilities, and academic performance of the program graduates. What this study adds: This nationwide cross-sectional study in Japan revealed that clinical competency, based on scores from the General Medicine In-Training Examination, showed no clinically relevant differences between graduates of regional quota programs and other resident physicians. How this study might affect research, practice, or policy: The study provides evidence to support the Japanese regional quota program from the perspective of clinical competency after the start of clinical practice.

9.
Postgrad Med J ; 99(1176): 1080-1087, 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37265446

ABSTRACT

PURPOSE: In 2024, the Japanese government will enforce a maximum 80-hour weekly duty hours (DHs) regulation for medical residents. Although this reduction in weekly DHs could increase the self-study time (SST) of these residents, the relationship between these two variables remains unclear. The aim of this study was to investigate the relationship between the SST and DHs of residents in Japan. METHODS: In this nationwide cross-sectional study, the subjects were examinees of the General Medicine In-Training Examination in the 2020 academic year. We administered questionnaires with categorical questions on daily SST and weekly DHs during the training period. To account for hospital variability, proportional odds regression models with generalized estimating equations were used to analyse the association between SST and DHs. RESULTS: Of the 6117 surveyed residents, 32.0% were female, 49.1% were postgraduate year-1 residents, 83.8% were affiliated with community hospitals, and 19.9% worked ≥80 hours/week. Multivariable analysis revealed that residents working ≥80 hours/week spent more time on self-study than those working 60-70 hours/week. Conversely, residents who worked <50 hours/week spent less time on self-study than those who worked 60-70 hours/week. The factors associated with longer SST were sex, postgraduate year, career aspiration for internal medicine, affiliation with community hospitals, academic involvement, and well-being. CONCLUSION: Residents with long DHs had longer SSTs than residents with short DHs. Future DH restrictions may not increase, but rather decrease, resident SST. Effective measures to encourage self-study are required, as DH restrictions may shorten SST.


Subject(s)
Internship and Residency; Personnel Staffing and Scheduling; Humans; Female; Male; Workload; Work Schedule Tolerance; Cross-Sectional Studies
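A simplified sketch of the proportional odds analysis in this entry is shown below. It fits an ordinal (proportional-odds) logit for the SST categories without the generalized estimating equations adjustment for hospital clustering that the authors used, and the data file, column names, and covariates are hypothetical assumptions.

```python
# Simplified sketch of a proportional-odds model relating categorical
# self-study time (SST) to weekly duty-hour (DH) category. The study used
# GEE to account for hospital clustering; that adjustment is omitted here.
# Column names and the input file are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("resident_survey.csv")  # columns: sst_category (ordered int), dh_category, pgy, female

# Dummy-code the categorical predictors; OrderedModel must not receive a constant.
exog = pd.get_dummies(df[["dh_category", "pgy", "female"]].astype("category"),
                      drop_first=True).astype(float)

model = OrderedModel(df["sst_category"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # odds of longer daily self-study time by DH category
```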
10.
J Gen Fam Med ; 24(2): 87-93, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36909787

ABSTRACT

Background: The effect of duty hour (DH) restrictions on postgraduate residents' acquisition of clinical competencies is unclear. We evaluated the relationship between DHs and competency-related knowledge acquisition using the General Medicine In-Training Examination (GM-ITE). Methods: We conducted a multicenter, cross-sectional study of community hospital residents among 2019 GM-ITE examinees. Self-reported average DHs per week were classified into five DH categories, and the competency domains were classified into four areas: symptomatology and clinical reasoning (CR), physical examination and clinical procedure (PP), medical interview and professionalism (MP), and disease knowledge (DK). The association between these scores and DHs was examined using random-intercept linear models with and without adjustment for confounding factors. Results: We included 4753 participants in the analyses. Of these, 31% were women, and 49.1% were in postgraduate year (PGY) 2. Mean CR and MP scores were lower among residents in Category 1 (<50 h) than among residents in Category 3 (≥60 and <70 h; reference group). Mean DK scores were lower among residents in Categories 1 and 2 (≥50 and <60 h) than in the reference group. PGY-2 residents in Categories 1 and 2 had lower CR scores than those in Category 3; however, PGY-1 residents in Category 5 showed higher scores. Conclusions: The relationship between DHs and each competency area is not strictly linear. In particular, the acquisition of physical examination and clinical procedure knowledge and skills may not be related to DHs.

11.
J Surg Educ ; 80(5): 714-719, 2023 05.
Article in English | MEDLINE | ID: mdl-36849323

ABSTRACT

INTRODUCTION: There is a bias in the medical community that allopathic training is superior to osteopathic training, despite the lack of substantiation. The Orthopaedic In-Training Examination (OITE) is a yearly exam evaluating educational advancement and orthopedic surgery residents' scope of knowledge. The purpose of this study was to compare OITE scores between doctor of osteopathic medicine (DO) and medical doctor (MD) orthopedic surgery residents to determine whether any appreciable differences exist in achievement between the 2 groups. METHODS: The American Academy of Orthopaedic Surgeons 2019 OITE technical report, which reports scores for MD and DO residents, was evaluated to determine OITE scores for both groups. The progression of scores across postgraduate years (PGY) was also analyzed, and MD and DO scores for PGY 1-5 were compared with independent t-tests. RESULTS: PGY-1 DO residents outperformed MD residents on the OITE (145.8 vs 138.8, p < 0.001). The mean scores achieved by DO and MD residents during PGY-2 (153.2 vs 153.2), 3 (176.2 vs 175.2), and 4 (182.0 vs 183.7) did not differ (p = 0.997, 0.440, and 0.149, respectively). However, for PGY-5, the mean score for MD residents (188.6) was higher than that of DO residents (183.5, p < 0.001). Both groups improved throughout PGY 1 to 5, with average scores increasing in each successive PGY. CONCLUSION: This study provides evidence that DO and MD orthopedic surgery residents perform similarly on the OITE within PGY 2 to 4, indicating equivalent orthopedic knowledge in the majority of PGYs. Program directors at allopathic and osteopathic orthopedic residency programs should take this into account when considering applicants for residency.


Subject(s)
Internship and Residency; Orthopedics; Osteopathic Medicine; Surgeons; Humans; United States; Education, Medical, Graduate; Osteopathic Medicine/education; Educational Measurement; Clinical Competence; Orthopedics/education
12.
Foot Ankle Orthop ; 7(3): 24730114221119754, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36051865

ABSTRACT

Background: The Orthopaedic In-Training Examination (OITE) is a standardized examination administered annually to orthopaedic surgery residents. The examination is designed to evaluate resident knowledge and the academic performance of residency programs. Methods: All OITE foot and ankle questions from 2009 through 2012 and 2017 through 2020 were analyzed. Subtopics, taxonomy, references, and use of imaging modalities were recorded. Results: There were a total of 167 foot and ankle (F&A)-related questions across the 8 years of OITE examinations. Trauma remained the most commonly tested F&A subtopic in both subsets, followed by rehabilitation, tendon disorders, and arthritis. We found an increase in questions related to arthritis (P = .05) and a decrease in questions related to the diabetic foot (P = .02). Taxonomy 3 questions constituted 49.5% of F&A questions from 2009 through 2012 compared with 44.7% of questions from 2017 through 2020 (P = .54). Radiography was the most commonly used imaging modality in both subsets. From 2009 to 2012, 63.6% of questions included a radiograph, compared with 76.5% in 2017 through 2020 (P = .13). FAI (Foot & Ankle International), JAAOS (Journal of the American Academy of Orthopaedic Surgeons), and JBJS (The Journal of Bone and Joint Surgery) were the most commonly cited journals, making up more than 50% of total citations. Citations per question increased from 2.20 in 2009-2012 to 2.42 in 2017-2020 (P = .01). The average lag time was 8.2 years in the early subset and 8.9 years in the later subset. Conclusion: This study provides a detailed analysis of the F&A section of the OITE and can serve as a guide for residents preparing for the examination. Level of Evidence: Level IV, cross-sectional review of Orthopaedic In-Training Examination questions.

13.
J Shoulder Elbow Surg ; 31(11): e562-e568, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35803548

ABSTRACT

BACKGROUND: The Orthopedic In-Training Examination (OITE) is an annual examination for orthopedic surgery residents used to assess orthopedic knowledge against a national standard. An updated understanding of currently tested topics and resources is useful for helping residents guide their education. PURPOSE: The purpose of this study was to analyze the shoulder and elbow domain of the OITE to identify current trends and commonly tested topics. METHODS: All OITE questions related to shoulder and elbow topics over the years 2009-2013 and 2017-2020 were analyzed. Subcategories, the number and types of references used, publication lag time, imaging modalities, taxonomic classification, and resident performance were recorded. RESULTS: Shoulder and elbow topics comprised 8.61% of all OITE questions from 2009-2013 and 2017-2020. The most commonly tested shoulder topics were rotator cuff arthropathy and reverse total shoulder arthroplasty (13.6%), followed by hemiarthroplasty and total shoulder arthroplasty (12.9%), rotator cuff-related pathology (12.9%), anterior shoulder instability and/or dislocation (10.2%), and general anatomy (10.2%). The most commonly tested elbow topics were trauma (21%), ulnar collateral ligament injuries (12.12%), general anatomy (10%), and arthroplasty (10%). Decisions regarding management or appropriate next steps (taxonomy T3) comprised 39% of all question types. Imaging modalities continue to be an important component of OITE questions. The Journal of Shoulder and Elbow Surgery (JSES), the Journal of the American Academy of Orthopedic Surgeons (JAAOS), the Journal of Bone and Joint Surgery (JBJS), and the American Journal of Sports Medicine (AJSM) comprised 65% of articles referenced in all questions over the analysis period. CONCLUSION: This study provides an updated analysis of trends within the shoulder and elbow domain of the OITE. Application of these data can aid residents in their preparation for the examination.


Subject(s)
Internship and Residency; Joint Instability; Orthopedics; Shoulder Joint; Humans; United States; Education, Medical, Graduate/methods; Educational Measurement; Elbow; Shoulder; Shoulder Joint/diagnostic imaging; Shoulder Joint/surgery; Orthopedics/education
14.
J Vasc Surg ; 76(6): 1721-1727, 2022 12.
Article in English | MEDLINE | ID: mdl-35863554

ABSTRACT

OBJECTIVE: Vascular surgery trainees participate in the Vascular Surgery In-Training Examination (VSITE) during each year of their training. Although the VSITE was developed as a low-stakes, formative examination, performance on it might correlate with pass rates for the Vascular Surgery Board written qualifying examination (VQE) and oral certifying examination (VCE) and might, therefore, guide both trainees and program directors. The present study was designed to examine the ability of the VSITE to predict trainees' performance on the VQE and VCE. METHODS: All first-time candidates for the Vascular Surgery Board VQE and VCE from 2016 to 2020 were analyzed, including those from both the integrated and independent training pathways. VSITE scores from the final year of training were compared with VQE scores and with the probability of passing the VQE and VCE. Linear and logistic regression models were used to determine the ability of VSITE results to predict VQE scores and the probability of passing each board examination. RESULTS: VSITE scores were available for the 559 candidates (69.3% male; 30.7% female) who had completed the VQE and the 369 candidates (66.7% male; 33.3% female) who had completed both the VQE and the VCE. The linear regression model for the final year of training showed that VSITE scores explained 34% of the variance in VQE scores (29% for integrated and 37% for independent trainees). Logistic regression demonstrated that final-year VSITE scores were a significant predictor of passing the VQE for both integrated and independent trainees (P < .001). A VSITE score of 500 during the final year of training predicted a VQE passing probability of >90% for each group of candidates. The probability of passing the VQE decreased to 73% for candidates from integrated programs, 61% for candidates from independent programs, and 64% for the whole cohort when the score was 400. VSITE scores were a significant predictor of passing the VCE only for candidates from independent programs (odds ratio, 1.01; 95% confidence interval, 1.00-1.02; P < .01), for whom a VSITE score of 400 correlated with an 82% probability of passing the VCE. CONCLUSIONS: VSITE performance is predictive of passing the VQE for trainees from both integrated and independent training paradigms. Vascular surgery trainees and training programs should optimize their preparation and educational efforts to maximize performance on the VSITE during the final year of training to improve the likelihood of passing the VQE. Further analysis of the predictive value of VSITE scores during earlier years of training might allow the board certification examinations to be administered earlier in the final year of training.


Subject(s)
General Surgery; Internship and Residency; Male; Female; Humans; United States; Educational Measurement/methods; Clinical Competence; Certification; Vascular Surgical Procedures/education; General Surgery/education
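The logistic model used in this entry to translate a final-year VSITE score into a predicted probability of passing the VQE could be sketched as follows; the data file and column names are hypothetical placeholders, and the sketch only mirrors the general approach described in the abstract.

```python
# Sketch of a logistic regression predicting first-attempt VQE pass/fail
# from the final-year VSITE score, then evaluating the predicted pass
# probability at VSITE scores of 400 and 500. Column names and the input
# file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

candidates = pd.read_csv("vsite_vqe.csv")  # columns: vsite_final_year, passed_vqe (0/1), pathway

fit = smf.logit("passed_vqe ~ vsite_final_year", data=candidates).fit(disp=False)
print(fit.summary())

new_scores = pd.DataFrame({"vsite_final_year": [400, 500]})
print(fit.predict(new_scores))  # predicted probability of passing the VQE at each score
```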
15.
Front Med (Lausanne) ; 9: 840721, 2022.
Article in English | MEDLINE | ID: mdl-35355591

ABSTRACT

Background: The in-training examination (ITE) has been widely adopted as an assessment tool to measure residents' competency. We incorporated different formats of assessment into our emergency medicine (EM) residency training program to form a multimodal, multistation ITE. This study was conducted to examine the cost and effectiveness of the different testing formats. Methods: We conducted a longitudinal study in a tertiary teaching hospital in Taiwan. Nine EM residents were enrolled and followed for 4 years, and the biannual ITE scores were recorded and analyzed. Each ITE consisted of 8-10 stations categorized into four formats: multiple-choice question (MCQ), question and answer (QA), oral examination (OE), and high-fidelity simulation (HFS). Learner satisfaction, validity, reliability, and costs were analyzed. Results: A total of 486 station scores were recorded over the 4 years. The numbers of MCQ, OE, QA, and HFS stations were 45 (9.26%), 90 (18.5%), 198 (40.7%), and 135 (27.8%), respectively. The overall Cronbach's alpha reached 0.968, indicating good overall internal consistency. The correlation with the EM board examination was highest for HFS (ρ = 0.657). The average costs of an MCQ station, an OE station, and an HFS station were approximately 3, 14, and 21 times that of a QA station, respectively. Conclusions: Multidimensional assessment contributes to good reliability. HFS correlates best with the final training examination score but is also the most expensive ITE format. Increasing the testing domains with various formats improves the ITE's overall reliability. Program directors must understand each test format's strengths and limitations to assemble the best combination of exams for their local context.

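Cronbach's alpha, the internal-consistency measure reported in this entry (0.968 overall), can be computed from an examinee-by-station score matrix as in the minimal sketch below; the scores here are simulated placeholders, not the program's ITE records.

```python
# Sketch of Cronbach's alpha computed over an examinee-by-station score
# matrix: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
# The score matrix here is simulated, not the residency program's data.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = examinees, columns = stations/items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(70, 10, size=(9, 1))             # 9 hypothetical residents
scores = ability + rng.normal(0, 5, size=(9, 10))     # 10 hypothetical stations per exam
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```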
16.
J Surg Educ ; 79(3): 775-782, 2022.
Article in English | MEDLINE | ID: mdl-35086789

ABSTRACT

OBJECTIVE: To examine the impact of access to and utilization of a commercially available question bank (TrueLearn) for in-training examination (ITE) preparation in obstetrics and gynecology (OBGYN). DESIGN: This was a retrospective cohort study examining the impact of TrueLearn usage on ITE performance outcomes. The Council on Resident Education in Obstetrics and Gynecology (CREOG) exam, produced by the educational arm of the American College of Obstetricians and Gynecologists, is a multiple-choice test given to all residents annually. Residency programs participating in this study provided residency program mean CREOG scores from the year prior (2015) and the first (2016) and second (2017) years of TrueLearn usage. Programs also contributed resident-specific CREOG scores for each resident for 2016 and 2017. These data were combined with each resident's TrueLearn usage data, which was provided by TrueLearn with residency program consent. The CREOG scores consisted of the CREOG score standardized to all program years, the CREOG score standardized to the same program year (PGY), and the total percent correct. TrueLearn usage data included the number of practice questions completed, the number of practice tests taken, the average number of days between successive tests, and the percent of answered practice questions that were correct. SETTING: OBGYN residency training programs. PARTICIPANTS: OBGYN residency programs that purchased and utilized TrueLearn for the 2016 CREOG examination were eligible for participation (n = 14). Ten residency programs participated, comprising 212 residents in 2016 and 218 residents in 2017. RESULTS: TrueLearn was used by 78.8% (167/212) of the residents in 2016 and 84.9% (185/218) of the residents in 2017. No significant difference was seen in the average per-program CREOG scores before versus after the first year of implementation, either using the CREOG score standardized to all PGYs (mean difference 1.0; p = 0.58) or standardized to the same PGY (mean difference 3.1; p = 0.25). Using resident-level data, there was no significant difference in mean CREOG score standardized to all PGYs between users and non-users of TrueLearn in 2016 (mean, 199.4 vs 196.7; p = 0.41) or 2017 (mean, 198.2 vs 203.4; p = 0.19). The percent of practice questions answered correctly on TrueLearn was positively correlated with the CREOG score standardized to all PGYs (r = 0.47 for 2016 and r = 0.60 for 2017), as well as with the CREOG total percent correct (r = 0.47 for 2016 and r = 0.61 for 2017). Based on a simple linear regression, for every 500 practice questions completed, the CREOG score significantly increased for PGY-2 residents by an average (±SE) of 7.3 ± 2.8 points (p = 0.013); the average increase was 0.7 ± 2.5 points (p = 0.79) for PGY-3 residents and 5.8 ± 3.3 points (p = 0.09) for PGY-4 residents. CONCLUSIONS: Adoption of an online question bank did not result in higher mean CREOG scores at participating institutions. However, performance on TrueLearn questions correlated with ITE performance, supporting predictive validity and the use of this question bank as a formative assessment for resident education and exam preparation.


Subject(s)
Gynecology; Internship and Residency; Obstetrics; Clinical Competence; Educational Measurement; Gynecology/education; Humans; Obstetrics/education; Retrospective Studies
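The simple linear regression relating practice-question volume to CREOG score, described in this entry, can be sketched as below; the data file and column names are hypothetical, and rescaling the predictor by 500 simply reproduces the "points per 500 questions" interpretation used in the abstract.

```python
# Sketch of a simple linear regression of CREOG score on the number of
# TrueLearn practice questions completed, with the predictor scaled so the
# coefficient reads as "points per 500 questions". Column names and the
# input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

residents = pd.read_csv("creog_truelearn.csv")  # columns: creog_std, questions_completed, pgy

residents["questions_per_500"] = residents["questions_completed"] / 500.0

# Fit within a single training year, mirroring the per-PGY estimates reported
pgy2 = residents[residents["pgy"] == 2]
fit = smf.ols("creog_std ~ questions_per_500", data=pgy2).fit()
print(fit.params["questions_per_500"], fit.bse["questions_per_500"])  # slope and SE per 500 questions
```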
17.
J Surg Educ ; 79(1): 266-273, 2022.
Article in English | MEDLINE | ID: mdl-34509414

ABSTRACT

OBJECTIVE: This study examines the role of electronic learning platforms in medical knowledge acquisition during orthopedic surgery residency training. We hypothesized that all methods of medical knowledge acquisition would achieve similar levels of improvement in medical knowledge, as measured by change in Orthopaedic In-Training Examination (OITE) percentile scores. Our secondary hypothesis was that residents would value all study resources equally for usefulness in acquiring medical knowledge, preparing for the OITE, and preparing for surgical practice. DESIGN: Nine ACGME-accredited orthopedic surgery programs participated, with a 95% survey completion rate. The survey ranked sources of medical knowledge acquisition and study habits for OITE preparation, and results were compared with OITE percentile rank scores. PARTICIPANTS: 386 orthopedic surgery residents. SETTING: 9 ACGME-accredited orthopedic surgery residency programs. RESULTS: 82% of participants were utilizing online learning resources (Orthobullets, ResStudy, or JBJS Clinical Classroom) as primary sources of learning. All primary resources were associated with a positive change in OITE score from 2018 to 2019, and no specific primary source improved performance more than any other. JBJS Clinical Classroom was rated highest for improving medical knowledge and becoming a better surgeon, while journal reading was rated highest for OITE preparation. Residents' expectation for their performance on the 2019 examination was a statistically significant predictor of the change (decrease, stay the same, or improve) in their OITE percentile scores (p < 0.001). CONCLUSIONS: No specific preferred study source outperformed the others. Notably, 82% of residents listed an online learning platform as their primary source, which represents a significant shift over the last decade. Further investigation into the effectiveness of electronic learning platforms for medical knowledge acquisition and for improving surgical competency is warranted.


Subject(s)
Internship and Residency; Orthopedics; Clinical Competence; Education, Medical, Graduate/methods; Educational Measurement/methods; Humans; Orthopedics/education
18.
JSES Rev Rep Tech ; 2(3): 340-344, 2022 Aug.
Article in English | MEDLINE | ID: mdl-37588876

ABSTRACT

Background: It is critical for orthopedic surgery residents and residency programs to have a current understanding of the content and resources utilized by the Orthopedic In-Training Examination (OITE) to continuously guide study and educational efforts. This study presents an updated analysis of the shoulder and elbow section of the OITE. Methods: All OITE questions, answers, and references from 2013 to 2019 were reviewed. The number of shoulder and elbow questions per year was recorded, and questions were analyzed for topic, imaging modalities, cognitive taxonomy, and references. We compared our data with the results of a previous study that analyzed shoulder and elbow OITE questions from 2002 to 2007 to examine trends and changes in this domain over time. Results: There were 177 shoulder and elbow questions (126 shoulder, 71.2%; 51 elbow, 28.8%) of 1863 OITE questions (9.5%) over the 7-year period. The most commonly tested topics included degenerative joint disease/stiffness/arthroplasty (31.6%), anatomy/biomechanics (16.9%), instability/athletic injury (15.3%), trauma (14.7%), and rotator cuff (13.6%). Half of all questions involved clinical management decisions (49.7%). A total of 417 references were cited from 56 different sources, the most common of which were the Journal of Shoulder and Elbow Surgery (23.3%), the Journal of the American Academy of Orthopaedic Surgeons (20.4%), and the Journal of Bone and Joint Surgery (American Volume) (16%). The average time lag from article publication to OITE reference was 7.7 years. Compared with the prior analysis from 2002 to 2007, there was a significant increase in the proportion of shoulder and elbow questions on the OITE (5.5% to 9.5%; P < .001). Recent exams incorporated more complex multistep treatment questions (4.4% vs. 49.7%; P < .001) and fewer recall questions (42.2% vs. 22%; P < .001). There was a significant increase in the use of imaging modalities (53.3% vs. 79.1%; P < .001). No significant differences in the distribution of question topics were found. Conclusions: The percentage of shoulder and elbow questions on the OITE has nearly doubled over the past decade, with greater emphasis on critical thinking (eg, clinical management decisions) over recall of facts. These findings should prompt educators to direct didactic efforts (eg, morning conferences and journal clubs) toward case-based learning to foster critical thinking and clinical reasoning skills.

19.
Med Teach ; 44(4): 433-440, 2022 04.
Article in English | MEDLINE | ID: mdl-34818129

ABSTRACT

PURPOSE: An understanding of the relationship between duty hours (DH) and the performance of postgraduate residents is needed to establish appropriate DH limits. This study explores that relationship using the General Medicine In-Training Examination (GM-ITE). MATERIALS AND METHODS: In this cross-sectional study, the participants were 2019 GM-ITE examinees. We analyzed data from the examination and an accompanying questionnaire, including DH per week (eight categories). We examined the association between DH and GM-ITE score using random-intercept linear models with and without adjustment. RESULTS: A total of 5593 participants (50.7% PGY-1, 31.6% female, 10.0% from university hospitals) were included. Mean GM-ITE scores were lower among residents in Category 2 (45-50 h; mean score difference, -1.05; p < 0.001) and Category 4 (55-60 h; -0.63; p = 0.008) compared with residents in Category 5 (60-65 h; reference). PGY-2 residents in Categories 2-4 had lower GM-ITE scores than those in Category 5. University hospital residents in Category 1 showed a large mean difference relative to those in Category 5 (-3.43; p = 0.01). CONCLUSIONS: DH below 60-65 h per week were independently associated with lower resident performance, but longer DH did not improve performance. DH of 60-65 h per week may be the optimal balance for residents' education and well-being.


Subject(s)
Internship and Residency; Clinical Competence; Cross-Sectional Studies; Educational Measurement; Female; Humans; Japan; Male
20.
J Clin Anesth ; 77: 110615, 2022 05.
Article in English | MEDLINE | ID: mdl-34923227

ABSTRACT

STUDY OBJECTIVE: This study aimed to assess the impact of data-driven didactic sessions on metrics including fund of knowledge, resident confidence in clinical topics, and stress, in addition to American Board of Anesthesiology In-Training Examination (ITE) percentiles. DESIGN: Observational mixed-methods study. SETTING: Classroom and video-recorded e-learning. SUBJECTS: Anesthesiology residents from two academic medical centers. INTERVENTIONS: Residents were offered data-driven didactic sessions focused on lifelong learning around frequently asked and frequently missed topics identified from publicly available data. MEASUREMENTS: Residents were surveyed regarding their confidence on exam topics, organization of their study plan, willingness to educate others, and stress levels. Residents at one institution were interviewed after the ITE. The level and trend in ITE percentiles were compared before and after the start of this initiative using segmented regression analysis. RESULTS: Ninety-four residents participated in the survey. A comparison of pre-post responses showed an increased mean level of confidence (4.5 ± 1.6 vs. 6.2 ± 1.4; difference in means 1.7, 95% CI 1.5 to 1.9), sense of study organization (3.8 ± 1.6 vs. 6.7 ± 1.3; difference 2.8, 95% CI 2.5 to 3.1), and willingness to educate colleagues (4.0 ± 1.7 vs. 5.7 ± 1.9; difference 1.7, 95% CI 1.4 to 2.0), as well as reduced stress levels (5.9 ± 1.9 vs. 5.2 ± 1.7; difference -0.7, 95% CI -1.0 to -0.4) (all p < 0.001). Thirty-one residents from one institution participated in the interviews, which revealed qualitative themes related to increased fund of knowledge, accessibility of high-yield resources, and domains of the Kirkpatrick classification of an educational intervention. In an assessment of 292 residents from 2012 to 2020 at one institution, there was a positive change in mean ITE percentile (adjusted intercept shift 11.0, 95% CI 3.6 to 18.5; p = 0.004) and in the trajectory over time after the introduction of data-driven didactics. CONCLUSION: Data-driven didactics were associated with improved resident confidence, reduced stress, and improvements in factors related to wellness. They were also associated with a change from a negative to a positive trend in ITE percentiles over time. Future assessment of data-driven didactics and their impact on resident outcomes is needed.


Subject(s)
Anesthesiology; Internship and Residency; Anesthesiology/education; Clinical Competence; Educational Measurement/methods; Educational Status; Humans; United States
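The segmented regression comparing ITE percentile level and trend before and after the didactics initiative can be sketched as an interrupted-time-series OLS model, as below; the data file, column names, and the intervention year are illustrative assumptions, not the study's actual model specification.

```python
# Sketch of a segmented (interrupted time series) regression of mean ITE
# percentile on time, an indicator for the post-intervention period, and
# time since the intervention, giving pre/post level and slope changes.
# The data file, column names, and start year are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ite = pd.read_csv("ite_percentiles.csv")  # columns: year, mean_percentile

start_year = 2017  # hypothetical year the data-driven didactics began
ite["time"] = ite["year"] - ite["year"].min()
ite["post"] = (ite["year"] >= start_year).astype(int)         # level shift after intervention
ite["time_since"] = (ite["year"] - start_year).clip(lower=0)  # slope change after intervention

fit = smf.ols("mean_percentile ~ time + post + time_since", data=ite).fit()
print(fit.params)  # 'post' = intercept shift, 'time_since' = change in trend
```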