Results 1 - 20 of 398
1.
South Med J ; 117(6): 342-344, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830589

ABSTRACT

OBJECTIVES: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence. METHODS: Three question banks were searched for keywords regarding out-of-hospital births, and a thematic analysis was then performed on the results. RESULTS: Forty-seven questions were identified, and of these, 55% indicated absent, inadequate, limited, or irregular prenatal care in the question stem. CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leaving the potential for bias and adverse outcomes. Adjustments to question stems so that they accurately portray current evidence are recommended.


Subject(s)
Licensure, Medical , Humans , United States , Licensure, Medical/standards , Female , Pregnancy , Prenatal Care/standards , Educational Measurement/methods , Education, Medical/methods , Education, Medical/standards
2.
BMC Med Educ ; 24(1): 504, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714975

ABSTRACT

BACKGROUND: Evaluation of students' learning strategies can enhance academic support. Few studies have investigated differences in learning strategies between male and female students or their impact on United States Medical Licensing Examination® (USMLE) Step 1 and preclinical performance. METHODS: The Learning and Study Strategies Inventory (LASSI) was administered to the classes of 2019-2024 (350 female and 262 male students). Students' performance in preclinical first-year (M1) courses, preclinical second-year (M2) courses, and on USMLE Step 1 was recorded. Independent t-tests evaluated differences between females and males on each LASSI scale, and Pearson product-moment correlations determined which LASSI scales correlated with preclinical performance and USMLE Step 1 scores. RESULTS: Of the 10 LASSI scales, Anxiety, Attention, Information Processing, Selecting Main Idea, Test Strategies, and Using Academic Resources showed significant differences between genders. Females reported higher levels of Anxiety (p < 0.001), which significantly influenced their performance. While males and females scored similarly in Concentration, Motivation, and Time Management, these scales were significant predictors of performance variation in females. Test Strategies was the largest contributor to performance variation for all students, regardless of gender. CONCLUSION: Gender differences in learning strategies influence performance on Step 1. Consideration of this study's results will allow targeted interventions to support academic success.
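As a rough illustration of the analysis this abstract describes (an independent-samples t-test per LASSI scale and Pearson correlations with Step 1 performance), a minimal Python sketch follows. The data file, column names, and scale list are assumptions for illustration, not the authors' materials.

import pandas as pd
from scipy import stats

# Hypothetical file: one row per student with LASSI scale scores, gender, and Step 1 score.
df = pd.read_csv("lassi_scores.csv")
scales = ["Anxiety", "Attention", "Concentration", "Information Processing",
          "Motivation", "Selecting Main Idea", "Test Strategies",
          "Time Management", "Using Academic Resources"]  # scales named in the abstract

for scale in scales:
    female = df.loc[df["gender"] == "F", scale]
    male = df.loc[df["gender"] == "M", scale]
    t, p_t = stats.ttest_ind(female, male)                 # independent-samples t-test
    r, p_r = stats.pearsonr(df[scale], df["step1_score"])  # correlation with Step 1
    print(f"{scale}: t = {t:.2f} (p = {p_t:.3f}), r = {r:.2f} (p = {p_r:.3f})")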


Subject(s)
Education, Medical, Undergraduate , Educational Measurement , Licensure, Medical , Students, Medical , Humans , Female , Male , Educational Measurement/methods , Education, Medical, Undergraduate/standards , Sex Factors , Licensure, Medical/standards , Learning , United States , Academic Performance , Young Adult
3.
PLoS One ; 19(4): e0302217, 2024.
Article in English | MEDLINE | ID: mdl-38687696

ABSTRACT

Efforts are being made to improve the time effectiveness of healthcare providers. Artificial intelligence tools can help transcribe and summarize physician-patient encounters and produce medical notes and recommendations. However, in addition to medical information, discussion between healthcare providers and patients includes small talk and other information irrelevant to medical concerns. Because large language models (LLMs) are predictive models that build their responses from the words in the prompt, there is a risk that small talk and irrelevant information may alter the response and the suggestions given. Therefore, this study aims to investigate the impact of medical data mixed with small talk on the accuracy of medical advice provided by ChatGPT. USMLE Step 3 questions were used as a model for relevant medical data, in both multiple-choice and open-ended formats. First, we gathered small-talk sentences from human participants using the Mechanical Turk platform. Second, both sets of USMLE questions were arranged in a pattern where each sentence from the original question was followed by a small-talk sentence. ChatGPT-3.5 and ChatGPT-4 were asked to answer both sets of questions with and without the small-talk sentences. Finally, a board-certified physician analyzed the answers given by ChatGPT and compared them with the official correct answers. The results demonstrate that the ability of ChatGPT-3.5 to answer correctly was impaired when small talk was added to the medical data (66.8% vs. 56.6%; p = 0.025); the effect was not significant for multiple-choice questions (72.1% vs. 68.9%; p = 0.67) but was significant for open-ended questions (61.5% vs. 44.3%; p = 0.01). In contrast, small-talk phrases did not impair ChatGPT-4's ability on either type of question (83.6% and 66.2%, respectively). According to these results, ChatGPT-4 appears more accurate than the earlier 3.5 version, and small talk does not appear to impair its capability to provide medical recommendations. Our results are an important first step in understanding the potential and limitations of using ChatGPT and other LLMs for physician-patient interactions that include casual conversation.
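To make the prompt-construction and comparison steps concrete, here is a minimal Python sketch of the procedure the abstract describes: interleaving one small-talk sentence after each question sentence, then testing whether the share of correct answers differs between conditions. The example sentences and counts are placeholders; this is illustrative only, not the study's code.

from statsmodels.stats.proportion import proportions_ztest

def interleave(question_sentences, small_talk_sentences):
    """Follow each sentence of the question with one small-talk sentence."""
    mixed = []
    for q, s in zip(question_sentences, small_talk_sentences):
        mixed.extend([q, s])
    return " ".join(mixed)

# Placeholder example, not an actual study item.
prompt = interleave(
    ["A 54-year-old man presents with chest pain.", "What is the most appropriate next step?"],
    ["By the way, the weather has been lovely this week.", "My dog just turned three."],
)

# Compare the number of correctly answered questions with vs. without small talk
# (placeholder counts; the study reports proportions such as 66.8% vs. 56.6%).
correct = [167, 141]   # correct answers: plain questions, questions with small talk
totals = [250, 250]    # questions asked in each condition
z, p = proportions_ztest(correct, totals)
print(f"z = {z:.2f}, p = {p:.3f}")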


Subject(s)
Physician-Patient Relations , Humans , Female , Male , Adult , Communication , Health Personnel , Licensure, Medical/standards , Artificial Intelligence , Counseling , Middle Aged
4.
J Osteopath Med ; 124(6): 257-265, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38498662

ABSTRACT

CONTEXT: The National Board of Osteopathic Medical Examiners (NBOME) administers the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), a three-level examination designed for licensure for the practice of osteopathic medicine. The design of COMLEX-USA Level 3 (L3) was changed in September 2018 to a two-day computer-based examination with two components: a multiple-choice question (MCQ) component with single-best-answer items and a clinical decision-making (CDM) case component with extended multiple-choice (EMC) and short-answer (SA) questions. Continued validation of the L3 examination, especially under the new design, is essential for the appropriate interpretation and use of the test scores. OBJECTIVES: The purpose of this study is to gather evidence supporting the validity of L3 examination scores under the new design, drawing on sources of evidence from Kane's validity framework. METHODS: Kane's validity framework contains four components of evidence to support the validity argument: Scoring, Generalization, Extrapolation, and Implication/Decision. In this study, we gathered data from various sources and conducted analyses to provide evidence that the L3 examination validly measures what it is intended to measure. These analyses include reviewing content coverage of the L3 examination, documenting scoring and reporting processes, estimating the reliability and decision accuracy/consistency of the scores, quantifying associations between the scores from the MCQ and CDM components and between scores from different competency domains of the L3 examination, exploring the relationships between L3 scores and scores from a performance-based assessment that measures related constructs, performing subgroup comparisons, and describing and justifying the criterion-referenced standard-setting process. The analysis data set contains first-attempt test scores for 8,366 candidates who took the L3 examination between September 2018 and December 2019. The performance-based assessment used as a criterion measure in this study is the COMLEX-USA Level 2 Performance Evaluation (L2-PE). RESULTS: All assessment forms were built through an automated test assembly (ATA) procedure to maximize parallelism in content coverage and statistical properties across forms. Scoring and reporting follow industry-standard quality-control procedures. The inter-rater reliability of SA rating, decision accuracy, and decision consistency for pass/fail classifications are all very high. There is a statistically significant positive association between the MCQ and CDM components of the L3 examination. The patterns of associations, both within the L3 subscores and with L2-PE domain scores, are consistent with what the examination is intended to measure. Subgroup comparisons by gender, race, and first language showed expected small differences in mean scores between the subgroups within each category and yielded findings consistent with those described in the literature. The L3 pass/fail standard was established through a defensible criterion-referenced procedure. CONCLUSIONS: This study provides additional validity evidence for the L3 examination based on Kane's validity framework. The validity of any measurement must be established through ongoing evaluation of the related evidence, and the NBOME will continue to collect evidence to support validity arguments for the COMLEX-USA examination series.
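Two of the listed analyses lend themselves to a brief sketch: estimating inter-rater agreement on short-answer (SA) ratings and quantifying the association between MCQ and CDM component scores. The Python below is illustrative only; the file names, column names, and the choice of Cohen's kappa as the agreement statistic are assumptions, not NBOME's documented procedure.

import pandas as pd
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical file: one row per SA response with ratings from two raters.
ratings = pd.read_csv("sa_ratings.csv")
kappa = cohen_kappa_score(ratings["rater_1"], ratings["rater_2"])  # inter-rater agreement

# Hypothetical file: one row per candidate with component scores.
scores = pd.read_csv("l3_scores.csv")
r, p = stats.pearsonr(scores["mcq_score"], scores["cdm_score"])    # MCQ-CDM association
print(f"kappa = {kappa:.2f}, MCQ-CDM correlation r = {r:.2f} (p = {p:.3g})")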


Subject(s)
Educational Measurement , Licensure, Medical , Osteopathic Medicine , United States , Humans , Educational Measurement/methods , Educational Measurement/standards , Licensure, Medical/standards , Osteopathic Medicine/education , Osteopathic Medicine/standards , Reproducibility of Results , Clinical Competence/standards
8.
Acad Med ; 96(9): 1236-1238, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34166234

ABSTRACT

The COVID-19 pandemic interrupted administration of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam in March 2020 due to public health concerns. As the scope and magnitude of the pandemic became clearer, the initial plans by the USMLE program's sponsoring organizations (NBME and Federation of State Medical Boards) to resume Step 2 CS in the short term shifted to long-range plans to relaunch an exam that could harness technology and reduce infection risk. Insights about ongoing changes in undergraduate and graduate medical education and practice environments, coupled with challenges in delivering a transformed examination during a pandemic, led to the January 2021 decision to permanently discontinue Step 2 CS. Despite this, the USMLE program considers assessment of clinical skills to be critically important. The authors believe this decision will facilitate important advances in assessing clinical skills. Factors contributing to the decision included concerns about achieving desired goals within desired time frames; a review of enhancements to clinical skills training and assessment that have occurred since the launch of Step 2 CS in 2004; an opportunity to address safety and health concerns, including those related to examinee stress and wellness during a pandemic; a review of advances in the education, training, practice, and delivery of medicine; and a commitment to pursuing innovative assessments of clinical skills. USMLE program staff continue to seek input from varied stakeholders to shape and prioritize technological and methodological enhancements to guide development of clinical skills assessment. The USMLE program's continued exploration of constructs and methods by which communication skills, clinical reasoning, and physical examination may be better assessed within the remaining components of the exam offers opportunities for examinees, educators, regulators, the public, and other stakeholders to provide input.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Licensure, Medical/standards , COVID-19/prevention & control , Educational Measurement/standards , Humans , Licensure, Medical/trends , United States
10.
Plast Reconstr Surg ; 148(1): 219-223, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34076626

ABSTRACT

SUMMARY: The United States Medical Licensing Examination program announced that Step 1 score reporting will change from a three-digit number to pass/fail beginning January 1, 2022. Plastic surgery residency programs have traditionally used Step 1 scores to compare applicants. Without a numerical score, the plastic surgery residency application review process will likely change. This article discusses the advantages and disadvantages of the upcoming change and steps forward for residency programs. The authors encourage programs to continue to seek innovative methods of objectively and holistically evaluating applications.


Subject(s)
Educational Measurement/standards , Internship and Residency/organization & administration , Licensure, Medical/standards , Personnel Selection/organization & administration , Surgery, Plastic/education , Humans , Internship and Residency/standards , Personnel Selection/standards , Surgery, Plastic/standards , United States
11.
Acad Med ; 96(9): 1239-1241, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34074900

ABSTRACT

The discontinuation of the United States Medical Licensing Examination Step 2 Clinical Skills (CS) in 2020 in response to the COVID-19 pandemic marked the end of a decades-long debate about the utility and value of the exam. For all its controversy, the implementation of Step 2 CS in 2004 brought about profound changes to the landscape of medical education, altering the curriculum and assessment practices of medical schools to ensure students were prepared to take and pass this licensing exam. Its elimination, while celebrated by some, is not without potential negative consequences. As the responsibility for assessing students' clinical skills shifts back to medical schools, educators must take care not to lose the ground they have gained in advancing clinical skills education. Instead, they need to innovate, collaborate, and share resources; hold themselves accountable; and ultimately rise to the challenge of ensuring that physicians have the necessary clinical skills to safely and effectively practice medicine.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Licensure, Medical/standards , COVID-19/prevention & control , Education, Medical, Undergraduate/standards , Education, Medical, Undergraduate/trends , Educational Measurement/standards , Humans , Licensure, Medical/trends , United States
12.
Acad Med ; 96(9): 1319-1323, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34133346

ABSTRACT

PURPOSE: The United States Medical Licensing Examination (USMLE) recently announced two policy changes: shifting from numeric score reporting to pass/fail reporting on the Step 1 examination and limiting examinees to four attempts on each Step component. In light of these policies, exam measures other than scores, such as the number of examination attempts, are of interest. Attempt limit policies are intended to ensure minimum standards of physician competency, yet little research has explored how Step attempts relate to physician practice outcomes. This study examined the relationship between USMLE attempts and the likelihood of receiving disciplinary actions from state medical boards. METHOD: The sample comprised 219,018 graduates of U.S. and Canadian MD-granting medical schools who passed all USMLE Step examinations by 2011 and obtained a medical license in the United States; data came from the NBME and the Federation of State Medical Boards. Logistic regressions estimated how attempts on the Step 1, Step 2 Clinical Knowledge (CK), and Step 3 examinations influenced the likelihood of receiving disciplinary actions by 2018, while accounting for physician characteristics. RESULTS: A total of 3,399 physicians (2%) received at least one disciplinary action. Additional attempts needed to pass Steps 1, 2 CK, and 3 were associated with an increased likelihood of receiving disciplinary actions (odds ratio [OR]: 1.07, 95% confidence interval [CI]: 1.01, 1.13; OR: 1.09, 95% CI: 1.03, 1.16; OR: 1.11, 95% CI: 1.04, 1.17, respectively), after accounting for other factors. CONCLUSIONS: Taking multiple attempts to pass Steps 1, 2 CK, and 3 was associated with a higher estimated likelihood of receiving disciplinary actions. This study offers support for licensure and practice standards that account for physicians' USMLE attempts. The relatively small effect sizes, however, caution policy makers against placing sole emphasis on this relationship.
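A minimal Python sketch of the kind of model the abstract describes follows: a logistic regression relating the number of attempts on each Step to the odds of a disciplinary action while adjusting for physician characteristics. The data file, column names, and covariates are assumptions for illustration, not the study's analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("physicians.csv")   # hypothetical analysis file, one row per physician

model = smf.logit(
    "disciplined ~ step1_attempts + step2ck_attempts + step3_attempts "
    "+ C(gender) + C(school_country) + graduation_year",
    data=df,
).fit()

odds_ratios = np.exp(model.params)    # OR per additional attempt, holding covariates fixed
conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))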


Subject(s)
Educational Measurement/statistics & numerical data , Employee Discipline/statistics & numerical data , Licensure, Medical/statistics & numerical data , Physicians/statistics & numerical data , Professional Misconduct/statistics & numerical data , Adult , Canada , Clinical Competence , Educational Measurement/standards , Female , Humans , Licensure, Medical/standards , Logistic Models , Male , Odds Ratio , Physicians/standards , Schools, Medical/standards , United States
18.
Teach Learn Med ; 33(1): 21-27, 2021.
Article in English | MEDLINE | ID: mdl-32928000

ABSTRACT

Phenomenon: Internal medicine physicians in the United States must pass the American Board of Internal Medicine Internal Medicine Maintenance of Certification (ABIM IM-MOC) examination as part of their ABIM IM-MOC requirements. Many of these physicians prepare with e-learning products such as the ACP's MKSAP, UpToDate, and NEJM Knowledge+, yet the effectiveness of these products remains largely unstudied. Approach: We compared ABIM IM-MOC examination performance between 177 physicians who attempted an ABIM IM-MOC examination between 2014 and 2017 and completed at least 75% of the NEJM Knowledge+ product before the examination and 177 closely matched control physicians who did not use NEJM Knowledge+. For NEJM Knowledge+ users, examination performance was measured on the first attempt immediately following use of the product; for non-users, it was measured on the corresponding matched examination. The three dichotomous performance outcomes measured on the first attempt at the ABIM IM-MOC examination were passing, scoring in the upper quartile, and scoring in the lower quartile. Findings: Use of NEJM Knowledge+ was associated with a regression-adjusted 10.6% (5.37% to 15.8%) greater likelihood of passing the MOC examination (p < .001), a 10.7% (2.61% to 18.7%) greater likelihood of scoring in the top quartile (p = .009), and a 10.8% (4.86% to 16.8%) lower likelihood of scoring in the bottom quartile (p < .001), compared with similar physicians who did not use NEJM Knowledge+. Insight: Physicians who used NEJM Knowledge+ had better ABIM IM-MOC examination performance. Further research is needed to determine which aspects of e-learning products best prepare physicians for MOC examinations.
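For readers who want to see how a regression-adjusted difference in pass rates between matched users and non-users might be computed, a short Python sketch follows; the data file, column names, and covariates are assumptions, not the study's code.

import pandas as pd
import statsmodels.formula.api as smf

pairs = pd.read_csv("matched_physicians.csv")   # hypothetical matched sample, one row per physician

fit = smf.logit(
    "passed ~ used_product + years_since_certification + prior_exam_score",
    data=pairs,
).fit()

# Average marginal effect of product use, i.e., the adjusted difference in pass
# probability (multiply by 100 for percentage points).
print(fit.get_margeff(at="overall").summary())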


Subject(s)
Certification/standards , Clinical Competence/standards , Educational Measurement/statistics & numerical data , Internal Medicine/education , Licensure, Medical/standards , Specialty Boards/standards , Academic Performance , Attitude of Health Personnel , Humans , United States