1.
Med Sci Educ ; 31(2): 607-613, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34457915

ABSTRACT

INTRODUCTION: Medical students use self-assessments to gauge their knowledge and identify areas for additional study before taking a summative examination at the end of their clinical education segment. This study extended previous research on the NBME Clinical Science Mastery Series self-assessments to investigate the utility of recently released self-assessments for students completing Family Medicine clerkships and Emergency Medicine sub-internships and preparing for summative assessments. MATERIALS: The dataset included 12,200 Family Medicine and 3919 Emergency Medicine students who took the self-assessment and corresponding subject examination from the implementation of the self-assessments in 2017 through January 2020. RESULTS: As with other self-assessments, students typically took the self-assessment within a week of their Family Medicine or Emergency Medicine subject examination using the standard-paced testing mode. The proportion of variance in subject examination scores explained by self-assessment scores was slightly higher for the standard-paced group than for the self-paced group for Family Medicine (R² = .26 and .23, respectively); however, the pattern was reversed for Emergency Medicine (R² = .29 and .32). Further, the two pacing groups had significantly different sets of regression parameter estimates. CONCLUSION: The Family Medicine and Emergency Medicine self-assessments allow students to prepare for their summative subject examinations using formative assessments that mirror the content and pacing of the subject examinations. Students can also opt to use the self-paced mode to leverage the self-assessment as an educational tool. Although the standard-paced mode often provides better prediction of subsequent subject examination scores, the self-paced mode is also consistent with an assessment for learning framework.
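The pacing-group comparison above rests on fitting a separate single-predictor regression per group and comparing the proportion of variance explained (R²). A minimal sketch of that computation, using entirely hypothetical score pairs rather than study data:

```python
# Sketch of the R^2 comparison described above: regress subject-exam
# scores on self-assessment scores separately for each pacing group.
# All score pairs below are hypothetical, not from the study.

def linreg_r2(x, y):
    """OLS slope, intercept, and R^2 for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical (self-assessment, subject exam) score pairs per pacing group
standard_paced = ([55, 60, 65, 70, 75, 80], [58, 61, 66, 69, 77, 79])
self_paced     = ([55, 60, 65, 70, 75, 80], [52, 64, 60, 75, 70, 82])

_, _, r2_standard = linreg_r2(*standard_paced)
_, _, r2_self = linreg_r2(*self_paced)
print(f"standard-paced R^2 = {r2_standard:.2f}, self-paced R^2 = {r2_self:.2f}")
```

In this toy data the standard-paced group's scores track the self-assessment more closely, so its R² is higher, mirroring the Family Medicine pattern reported above.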

2.
Acad Med ; 95(9): 1404-1410, 2020 09.
Article in English | MEDLINE | ID: mdl-32195693

ABSTRACT

PURPOSE: To identify which internal medicine clerkship characteristics may relate to NBME Medicine Subject Examination scores, given the growing trend toward earlier clerkship start dates. METHOD: The authors used linear mixed effects models (univariable and multivariable) to determine associations between medicine exam performance and clerkship characteristics (longitudinal status, clerkship length, academic start month, ambulatory clinical experience, presence of a study day, involvement in a combined clerkship, preclinical curriculum type, medicine exam timing). Additional covariates included number of NBME clinical subject exams used, number of didactic hours, use of a criterion score for passing the medicine exam, whether medicine exam performance was used to designate clerkship honors, and United States Medical Licensing Examination Step 1 performance. The sample included 24,542 examinees from 62 medical schools spanning 3 academic years (2011-2014). RESULTS: The multivariable analysis found no significant association between clerkship length and medicine exam performance (all pairwise P > .05). However, a small number of examinees beginning their academic term in January scored marginally lower than those starting in July (P < .001). Conversely, examinees scored higher on the medicine exam later in the academic year (all pairwise P < .001). Examinees from schools that used a criterion score for passing the medicine exam also scored higher than those at schools that did not (P < .05). Step 1 performance remained positively associated with medicine exam performance even after controlling for all other variables in the model (P < .001). CONCLUSIONS: In this sample, the authors found no association between many clerkship variables and medicine exam performance. Instead, Step 1 performance was the most powerful predictor of medicine exam performance. 
These findings suggest that medicine exam performance reflects the overall medical knowledge students accrue during their education rather than any specific internal medicine clerkship characteristics.


Subject(s)
Clinical Clerkship , Educational Measurement/methods , Internal Medicine/education , Licensure, Medical , Specialty Boards , Clinical Competence , Humans , Linear Models , Multivariate Analysis , Time Factors , United States
3.
Med Sci Educ ; 30(1): 263-269, 2020 Mar.
Article in English | MEDLINE | ID: mdl-34457666

ABSTRACT

Previous research has found a moderate relationship between performance on individual clinical science subject examinations and USMLE performance. Given the widespread use of the clinical science subject examinations and the need for measures of clinical knowledge that help predict performance on Steps 2 CK and 3 and performance in residency training, this study explores the use of composite scores based on clinical science subject examinations to predict clinical knowledge outcome measures. The data set included students who took all of the five most widely used clinical science subject examinations (medicine, obstetrics and gynecology, pediatrics, psychiatry, and surgery) between January 1, 2013, and December 31, 2017 (N = 65,516). Composite scores were calculated based on average equated percent correct scores across various combinations of clinical science subject examinations. Stepwise linear regression analyses were performed with composite score and Step 1 score as predictor variables and Step 2 CK score or Step 3 score as the dependent variable. In all cases, the proportion of variance explained (R²) by the composite score (0.62-0.65 for Step 2 CK score and 0.45-0.48 for Step 3 score) was greater than R² for Step 1 by itself (0.52 for Step 2 CK score and 0.37 for Step 3 score). Logistic regression analyses found that higher composite scores were associated with a greater probability of passing Steps 2 CK and 3. Composite scores can be used alone or in conjunction with Step 1 to identify students at risk of failing Step 2 CK and/or Step 3 to facilitate remediation.
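The composite-score approach described above averages equated percent correct (EPC) scores across subject examinations and then compares how much Step 2 CK variance each predictor explains. A minimal sketch with hypothetical scores (all values invented for illustration):

```python
# Form a composite as the mean EPC across five subject exams, then compare
# the variance in Step 2 CK explained by the composite vs. Step 1 alone.
# All scores below are hypothetical.

def r_squared(x, y):
    """Proportion of variance in y explained by x (single-predictor OLS)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy ** 2 / (sxx * syy)

# Hypothetical EPC scores on the five subject exams, one row per student
subject_epc = [
    [70, 72, 68, 75, 71],
    [60, 58, 62, 61, 59],
    [80, 83, 79, 82, 81],
    [65, 64, 67, 63, 66],
    [75, 74, 77, 73, 76],
]
composite = [sum(row) / len(row) for row in subject_epc]
step1   = [225, 205, 240, 211, 238]   # hypothetical Step 1 scores
step2ck = [240, 215, 255, 222, 246]   # hypothetical Step 2 CK scores

print(f"R^2 composite: {r_squared(composite, step2ck):.2f}")
print(f"R^2 Step 1:    {r_squared(step1, step2ck):.2f}")
```

With this toy data the composite explains more Step 2 CK variance than Step 1 alone, echoing the pattern the abstract reports at scale.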

4.
Am J Med Qual ; 35(1): 63-69, 2020.
Article in English | MEDLINE | ID: mdl-31177823

ABSTRACT

The health systems science (HSS) framework articulates systems-relevant topics that medical trainees must learn to be prepared for physician practice. As new HSS-related curricula are developed, measures demonstrating appropriate levels of reliability and validity are needed. The authors describe a collaborative effort between a consortium of medical schools and the National Board of Medical Examiners to create a multiple-choice HSS examination in the areas of evidence-based medicine/population health, patient safety, quality improvement, and teamwork. Fifteen schools administered the 100-question examination through 2 academic years a total of 1887 times to 1837 first-time takers. Total test score mean was 67% (SD 11%). Total test reliability as measured by coefficient α was .83. This examination differentiated between medical students who completed the examination before, during, and after relevant training/instruction. This new HSS examination can support and inform the efforts of institutions as they integrate HSS-related content into their curricula.
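The reliability figure quoted above (coefficient α = .83) is computed from a persons-by-items matrix of scored responses. A minimal sketch of the standard coefficient-alpha formula on a tiny hypothetical response set:

```python
# Coefficient (Cronbach's) alpha: k/(k-1) * (1 - sum(item variances) / total variance).
# The 6 x 4 response matrix below is hypothetical, not exam data.

def cronbach_alpha(responses):
    """Coefficient alpha for a persons-by-items matrix of item scores."""
    n_items = len(responses[0])

    def variance(values):  # population variance
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [variance([row[i] for row in responses]) for i in range(n_items)]
    total_var = variance([sum(row) for row in responses])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scored (0/1) responses: 6 examinees x 4 items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # -> alpha = 0.83
```

Alpha rises as items covary more strongly relative to their individual variances, which is why a long, internally consistent 100-item test can reach the .83 reported above.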


Subject(s)
Curriculum/standards , Education, Medical/standards , Educational Measurement/standards , Patient Safety/standards , Clinical Competence , Humans , Quality Improvement/standards , Students, Medical
5.
Eval Health Prof ; 43(3): 149-158, 2020 09.
Article in English | MEDLINE | ID: mdl-31462073

ABSTRACT

Learners and educators in the health professions have called for more fine-grained information (subscores) from assessments, beyond a single overall test score. However, due to concerns over reliability, there have been limited uses of subscores in practice. Recent advances in latent class analysis have made contributions in subscore reporting by using diagnostic classification models (DCMs), which allow reliable classification of examinees into fine-grained proficiency levels (subscore profiles). This study examines the innovative and practical application of DCM framework to health professions educational assessments using retrospective large-scale assessment data from the basic and clinical sciences: National Board of Medical Examiners Subject Examinations in pathology (n = 2,006) and medicine (n = 2,351). DCMs were fit and analyzed to generate subscores and subscore profiles of examinees. Model fit indices, classification (reliability), and parameter estimates indicated that DCMs had good psychometric properties including consistent classification of examinees into subscore profiles. Results showed a range of useful information including varying levels of subscore distributions. The DCM framework can be a promising approach to report subscores in health professions education. Consistency of classification was high, demonstrating reliable results at fine-grained subscore levels, allowing for targeted and specific feedback to learners.
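The core of the DCM approach described above is classifying each examinee into a fine-grained attribute profile. A minimal sketch of that classification step using a DINA-form model; the Q-matrix, slip/guess values, and responses are all hypothetical (real DCM estimation also fits these parameters from data):

```python
# Assign each examinee the attribute profile that maximizes the likelihood
# of their item responses under a DINA model. All parameters hypothetical.
from itertools import product

SLIP, GUESS = 0.1, 0.2

# Q-matrix: which of 2 skills each item requires
q_matrix = [
    [1, 0],   # item 1 requires skill A only
    [0, 1],   # item 2 requires skill B only
    [1, 1],   # item 3 requires both
]

def p_correct(profile, q_row):
    """P(correct) under DINA: 1-slip if all required skills are mastered, else guess."""
    mastered = all(p >= q for p, q in zip(profile, q_row))
    return 1 - SLIP if mastered else GUESS

def classify(responses):
    """Return the attribute profile with maximum likelihood for the responses."""
    best_profile, best_like = None, -1.0
    for profile in product([0, 1], repeat=2):
        like = 1.0
        for resp, q_row in zip(responses, q_matrix):
            p = p_correct(profile, q_row)
            like *= p if resp == 1 else 1 - p
        if like > best_like:
            best_profile, best_like = profile, like
    return best_profile

print(classify([1, 1, 1]))   # all items correct -> (1, 1)
print(classify([1, 0, 0]))   # only item 1 correct -> (1, 0)
```

The resulting profiles are the "subscore profiles" the abstract refers to: instead of a single number, each examinee receives a mastery/non-mastery classification per attribute.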


Subject(s)
Education, Medical/organization & administration , Educational Measurement/methods , Education, Medical/standards , Humans , Latent Class Analysis , Psychometrics , Reproducibility of Results , Retrospective Studies
6.
Med Sci Educ ; 29(3): 841-847, 2019 Sep.
Article in English | MEDLINE | ID: mdl-34457549

ABSTRACT

Prior to August 2015, the National Board of Medical Examiners' (NBME) clinical science subject examination scores were reported as a scaled score. However, the scaled scores had some undesirable properties that threatened the validity of the inferences that score users were making based on the scores. The NBME changed the score scale to equated percent correct scores to address score validity concerns and to better meet the needs of medical school faculty and students. This paper describes the validity and practical considerations associated with the implementation of equated percent correct scores for the clinical science subject examination program.

8.
Acad Med ; 90(5): 684-90, 2015 May.
Article in English | MEDLINE | ID: mdl-25629950

ABSTRACT

PURPOSE: Accreditation standards require medical schools to use comparable assessment methods to ensure students in rotation-based clerkships and longitudinal integrated clerkships (LICs) achieve the same learning objectives. The National Board of Medical Examiners (NBME) Clinical Science Subject Examinations (subject exams) are commonly used, but an integrated examination like the NBME Comprehensive Clinical Science Examination (CCSE) may be better suited for LICs. This study examined the comparability of the CCSE and five commonly required subject exams. METHOD: In 2009-2010, third-year medical students in rotation-based clerkships at the University of British Columbia Faculty of Medicine completed subject exams in medicine, obstetrics-gynecology, pediatrics, psychiatry, and surgery for summative purposes following each rotation and a year-end CCSE for formative purposes. Data for 205 students were analyzed to determine the relationship between scores on the CCSE (and its five discipline subscales) and the five subject exams and the impact of clerkship rotation order. RESULTS: The correlation between the CCSE score and the average score on the five subject exams was high (0.80-0.93). Four subject exam scores were significant predictors of the CCSE score, and scores on the subject exams explained 65%-87% of CCSE score variance. Scores on each subject exam-but not rotation order-were statistically significant in predicting corresponding CCSE discipline subscale scores. CONCLUSIONS: The results provide evidence that these five subject exams and the CCSE measure similar constructs. This suggests that assessment of clerkship-year students' knowledge using the CCSE is comparable to assessment using this set of subject exams.


Subject(s)
Clinical Clerkship/methods , Clinical Competence , Clinical Medicine/education , Education, Medical/methods , Educational Measurement/methods , Schools, Medical , Students, Medical , British Columbia , Educational Measurement/standards , Humans , Learning , Retrospective Studies
9.
Teach Learn Med ; 26(4): 373-8, 2014.
Article in English | MEDLINE | ID: mdl-25318033

ABSTRACT

BACKGROUND: The Comprehensive Clinical Science Self-Assessment (CCSSA) is a web-administered multiple-choice examination that includes content that is typically covered during the core clinical clerkships in medical school. Because the content of CCSSA items resembles the content of the items on Step 2 Clinical Knowledge (CK), CCSSA is intended to be a tool for students to help assess whether they are prepared for Step 2 CK and to become familiar with its content, format, and pacing. PURPOSES: This study examined the relationship between performance on the National Board of Medical Examiners® CCSSA and performance on the United States Medical Licensing Examination® Step 2 CK for U.S./Canadian (USMGs) and international medical school students/graduates (IMGs). METHODS: The study included 9,789 participants who took CCSSA prior to their first Step 2 CK attempt. Linear and logistic regression analyses investigated the relationship between CCSSA performance and performance on Step 2 CK for both USMGs and IMGs. RESULTS: CCSSA scores explained 58% of the variation in first Step 2 CK scores for USMGs and 60% of the variation for IMGs; the relationship was somewhat different for the two groups as indicated by statistically different intercepts and slopes for the regression lines based on each group. Logistic regression results showed that examinees in both groups with low scores on CCSSA were at a higher risk of failing their first Step 2 CK attempt. CONCLUSIONS: Results suggest that CCSSA can provide students with a valuable practice tool and a realistic self-assessment of their readiness to take Step 2 CK.
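The logistic-regression step described above models the probability of passing Step 2 CK as a function of the CCSSA score, so that low scorers can be flagged as at risk. A minimal from-scratch sketch; the scores, pass/fail labels, and fitted probabilities are all hypothetical:

```python
# One-predictor logistic regression fit by gradient descent, then used to
# estimate pass probability at two CCSSA score levels. Data hypothetical.
import math

def fit_logistic(xs, ys, lr=0.5, steps=5000):
    """Fit P(y=1|x) = sigmoid(b0 + b1*z) on standardized x; return predictor."""
    n = len(xs)
    mx = sum(xs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    zs = [(x - mx) / sx for x in xs]
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for z, y in zip(zs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * z)))
            g0 += p - y
            g1 += (p - y) * z
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return lambda x: 1 / (1 + math.exp(-(b0 + b1 * (x - mx) / sx)))

# Hypothetical CCSSA scores and first-attempt Step 2 CK outcomes (1 = pass)
ccssa  = [45, 50, 55, 60, 65, 70, 75, 80]
passed = [0,  0,  1,  0,  1,  1,  1,  1]

p_pass = fit_logistic(ccssa, passed)
print(f"P(pass | CCSSA 50) = {p_pass(50):.2f}")
print(f"P(pass | CCSSA 75) = {p_pass(75):.2f}")
```

Because the fitted slope is positive, the estimated pass probability rises with the CCSSA score; a school could flag examinees below a chosen probability threshold for additional preparation.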


Subject(s)
Clinical Competence , Education, Medical, Undergraduate/standards , Educational Measurement/methods , Self-Assessment , Clinical Clerkship , Female , Humans , Internet , Male , United States , Young Adult
10.
Acad Med ; 85(10 Suppl): S98-101, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20881715

ABSTRACT

BACKGROUND: This study examined the relationship between performance on the National Board of Medical Examiners Comprehensive Basic Science Self-Assessment (CBSSA) and performance on United States Medical Licensing Examination Step 1. METHOD: The study included 12,224 U.S. and Canadian medical school students who took CBSSA prior to their first Step 1 attempt. Linear and logistic regression analyses investigated the relationship between CBSSA performance and performance on Step 1, and how that relationship was related to interval between exams. RESULTS: CBSSA scores explained 67% of the variation in first Step 1 scores as the sole predictor variable and 69% of the variation when time between CBSSA attempt and first Step 1 attempt was also included as a predictor. Logistic regression results showed that examinees with low scores on CBSSA were at higher risk of failing their first Step 1 attempt. CONCLUSIONS: Results suggest that CBSSA can provide students with a realistic self-assessment of their readiness to take Step 1.


Subject(s)
Educational Measurement/methods , Educational Status , Licensure, Medical , Science/education , Self-Evaluation Programs , Canada , Clinical Competence , Education, Medical, Undergraduate , Humans , Regression Analysis , United States