1.
Article in English | MEDLINE | ID: mdl-38753202

ABSTRACT

Medical sciences education emphasizes basic science learning as a prerequisite to clinical learning. Studies exploring relationships between achievement in the basic sciences and subsequent achievement in the clinical sciences generally suggest a significant positive relationship. Basic science knowledge and clinical experience are theorized to combine into encapsulated knowledge: a dynamic mix of information that is useful for solving clinical problems. This study explores the relationship between basic science knowledge (BSK), clinical science knowledge (CSK), and clinical problem-solving ability, measured at four veterinary colleges using both college-specific measures and professionally validated, standardized measures of basic and clinical science knowledge and problem-solving ability. Significant correlations existed among all variables. Structural equation modeling and confirmatory factor analysis produced models showing that newly acquired BSK directly and significantly predicted BSK retained over time and newly acquired CSK, and indirectly predicted clinical problem-solving ability (mediated by newly acquired CSK and BSK retained over time). These findings suggest a gradual development of schema (encapsulated knowledge) rather than an isolated development of biomedical versus clinical knowledge over time. A broader implication of these results is that explicitly teaching basic science knowledge positively and durably affects subsequent clinical knowledge and problem-solving ability, independent of instructional strategy or curricular approach. Furthermore, for veterinary colleges specifically, student performance as measured by both course-level and standardized tests is likely to prove useful for predicting subsequent academic achievement in classroom and clinical settings and licensing examination performance, and for identifying students likely in need of remediation in clinical knowledge.
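
The mediation structure described above (newly acquired BSK predicting retained BSK and newly acquired CSK, which in turn predict clinical problem-solving ability) could be specified as a path model. The following is a minimal sketch only, not the authors' code; it uses the Python semopy library, and all column names (bsk_new, bsk_retained, csk_new, problem_solving) and the input file are hypothetical placeholders for the study's measures.

    # Minimal path-model sketch of the mediation described in the abstract above.
    # Not the authors' analysis; column names and the input file are hypothetical.
    import pandas as pd
    from semopy import Model, calc_stats

    data = pd.read_csv("college_scores.csv")  # hypothetical student-level scores

    model_desc = """
    bsk_retained ~ bsk_new
    csk_new ~ bsk_new
    problem_solving ~ csk_new + bsk_retained
    """

    model = Model(model_desc)
    model.fit(data)
    print(model.inspect())    # path coefficients and significance tests
    print(calc_stats(model))  # fit indices (CFI, RMSEA, etc.)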

2.
J Vet Med Educ ; 45(3): 381-387, 2018.
Article in English | MEDLINE | ID: mdl-29393767

ABSTRACT

Individuals who want to become licensed veterinarians in North America must complete several qualifying steps, including obtaining a passing score on the North American Veterinary Licensing Examination (NAVLE). Given the high-stakes nature of the NAVLE, it is essential to provide evidence supporting the validity of the reported test scores. One important way to assess validity is to evaluate the degree to which scores are affected by the allotted testing time, which, if inadequate, can hinder examinees from demonstrating their true level of proficiency. We used item response data from the November-December 2014 and April 2015 NAVLE administrations (n = 5,292) to conduct timing analyses comparing performance across several examinee subgroups. Our results provide evidence that timing conditions were sufficient for most examinees, thereby supporting the current time limits. For the relatively few examinees who may have been affected, results suggest the cause is not a bias in the test but rather the effect of poor pacing behavior combined with knowledge deficits.
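
As a hedged illustration (not taken from the paper), a timing analysis of this kind might flag examinees who finish with very little time remaining and compare their performance with that of other examinees within each subgroup. The column names below (subgroup, seconds_remaining, score) and the input file are assumptions.

    # Sketch of a simple pacing analysis; hypothetical data layout, not NAVLE data.
    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("navle_timing.csv")  # one row per examinee

    # Flag examinees finishing with under a minute of test time remaining
    responses["ran_short"] = responses["seconds_remaining"] < 60

    # Proportion of each subgroup that ran short of time
    print(responses.groupby("subgroup")["ran_short"].mean())

    # Compare scores of examinees who ran short of time with those who did not
    short = responses.loc[responses["ran_short"], "score"]
    not_short = responses.loc[~responses["ran_short"], "score"]
    print(stats.ttest_ind(short, not_short, equal_var=False))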


Subject(s)
Educational Measurement; Licensure; Animals; Canada; Education, Veterinary; Humans; Reproducibility of Results; Time Factors; United States
3.
Acad Med ; 93(4): 636-641, 2018 04.
Article in English | MEDLINE | ID: mdl-29028636

ABSTRACT

PURPOSE: Increasing criticism of maintenance of certification (MOC) examinations has prompted certifying boards to explore alternative assessment formats. The purpose of this study was to examine the effect of allowing test takers to access reference material while completing their MOC Part III standardized examination. METHOD: Item response data were obtained from 546 physicians who completed a medical subspecialty MOC examination between 2013 and 2016. To investigate whether accessing references was related to better performance, an analysis of covariance was conducted on the MOC examination scores with reference access (access or no access) as the between-groups factor and scores from the physicians' initial certification examination as a covariate. Descriptive analyses were conducted to investigate how the new option of accessing references influenced time management within the test day. RESULTS: Physicians scored significantly higher when references were allowed (mean = 534.44, standard error = 6.83) than when they were not (mean = 472.75, standard error = 4.87), F(1, 543) = 60.18, P < .001, ω² = 0.09. However, accessing references affected pacing behavior; physicians were 13.47 times more likely to finish with less than a minute of test time remaining per section when reference material was accessible. CONCLUSIONS: Permitting references increased performance but also decreased the perception that the test's time limits were sufficient. Implications of allowing references are discussed, including physician time management, the impact on the construct assessed by the test, and the importance of providing validity evidence for all test design decisions.
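
The analysis of covariance described above can be approximated in a few lines. The sketch below is an illustration using Python's statsmodels, not the study's code, and the column names (moc_score, reference_access, initial_cert_score) and input file are hypothetical.

    # ANCOVA sketch: MOC score by reference access, adjusting for initial certification score.
    # Hypothetical column names and input file; illustrative only.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    scores = pd.read_csv("moc_scores.csv")

    fit = smf.ols("moc_score ~ C(reference_access) + initial_cert_score", data=scores).fit()
    print(sm.stats.anova_lm(fit, typ=2))  # ANCOVA table for the access effect
    print(fit.params)                     # adjusted difference between access groups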


Subject(s)
Attitude of Health Personnel; Physicians; Specialty Boards; Analysis of Variance; Certification; Clinical Competence; Education, Medical, Continuing; Humans; Time Factors; United States
4.
J Grad Med Educ ; 8(4): 541-545, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27777664

ABSTRACT

BACKGROUND: In graduate medical education, assessment results can effectively guide professional development when both assessment and feedback support a formative model. When individuals cannot directly access the test questions and responses, one way to use assessment results formatively is to provide item keyword feedback. OBJECTIVE: The purpose of this study was to investigate whether exposure to item keyword feedback aids learner remediation. METHODS: Participants included 319 trainees who completed a medical subspecialty in-training examination (ITE) in 2012 as first-year fellows and again in 2013 as second-year fellows. Performance on 2013 ITE items whose keywords were, or were not, exposed as part of the 2012 ITE score feedback was compared across groups defined by the amount of time spent studying (preparation). For items common to both the 2012 and 2013 ITEs, response patterns were analyzed to investigate changes in answer selection. RESULTS: Test takers who reported greater preparation for the 2013 ITE did not perform better on keyword-exposed items than on items whose keywords were not exposed. The response pattern analysis substantiated overall growth in performance from the 2012 ITE. For items answered incorrectly on both attempts, examinees selected the same option 58% of the time. CONCLUSIONS: The results did not support the use of item keyword feedback as an aid to remediation. They did, however, provide evidence of examinees retaining misinformation.
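
Both comparisons described above (keyword-exposed versus unexposed items by preparation level, and repeated answer choices on common items) reduce to straightforward tabulations. The sketch below is a hypothetical illustration in Python/pandas; the column names and file are assumptions, not the authors' data layout.

    # Sketch of the two ITE comparisons; one row per examinee-by-item, hypothetical columns.
    import pandas as pd

    items = pd.read_csv("ite_item_responses.csv")

    # Mean 2013 accuracy by self-reported preparation level and keyword exposure
    print(items.pivot_table(index="prep_level", columns="keyword_exposed",
                            values="correct_2013", aggfunc="mean"))

    # Among items answered incorrectly in both years, how often was the same option chosen?
    wrong_both = items[(items["correct_2012"] == 0) & (items["correct_2013"] == 0)]
    same_option = (wrong_both["option_2012"] == wrong_both["option_2013"]).mean()
    print(f"Same incorrect option repeated on {same_option:.0%} of twice-missed items")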


Subject(s)
Education, Medical, Graduate/methods; Educational Measurement/methods; Feedback; Fellowships and Scholarships; Humans; Internship and Residency
5.
J Gen Intern Med ; 27(1): 65-70, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21879372

ABSTRACT

BACKGROUND: The United States Medical Licensing Examination® (USMLE®) Step 3® examination is a computer-based examination composed of multiple-choice questions (MCQ) and computer-based case simulations (CCS). The CCS portion of Step 3 is unique in that examinees work through interactive patient-care simulations. OBJECTIVE: The purpose of this study is to investigate whether the type and length of examinees' postgraduate training affect performance on the CCS component of Step 3, consistent with previous research on overall Step 3 performance. DESIGN: Retrospective cohort study. PARTICIPANTS: Medical school graduates from U.S. and Canadian institutions completing Step 3 for the first time between March 2007 and December 2009 (n = 40,588). METHODS: Postgraduate training was classified as either broadly focused, for general areas of medicine (e.g., pediatrics), or narrowly focused, for specific areas of medicine (e.g., radiology). A three-way between-subjects MANOVA was used to test for main and interaction effects of the sample's demographic characteristics and residency type on Step 3 and CCS scores. Additionally, to examine the impact of postgraduate training, CCS scores were regressed on Step 1 and Step 2 Clinical Knowledge (CK) scores, and residuals from the resulting regressions were plotted. RESULTS: There was a significant difference in CCS scores between broadly focused (µ = 216, σ = 17) and narrowly focused (µ = 211, σ = 16) residencies (p < 0.001). Examinees in broadly focused residencies performed better overall and as length of training increased, compared with examinees in narrowly focused residencies. Step 1 and Step 2 CK scores explained 55% of the variability in overall Step 3 scores and 9% of the variability in CCS scores. CONCLUSIONS: Factors influencing performance on the CCS component may be similar to those affecting Step 3 overall. Findings support the validity of the Step 3 program and may be useful to program directors and residents in considering readiness to take this examination.
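
The regression-and-residuals step described above can be outlined as follows. This is an illustrative sketch only, with hypothetical column names (ccs_score, step1_score, step2ck_score, residency_focus, training_years) and input file, not the study's code.

    # Sketch: regress CCS scores on Step 1 and Step 2 CK scores, then examine residuals
    # by residency focus and length of training. Hypothetical data layout.
    import pandas as pd
    import matplotlib.pyplot as plt
    import statsmodels.formula.api as smf

    step3 = pd.read_csv("step3_ccs.csv")

    fit = smf.ols("ccs_score ~ step1_score + step2ck_score", data=step3).fit()
    step3["ccs_residual"] = fit.resid
    print(fit.rsquared)  # share of CCS score variability explained by prior Step scores

    # Mean residual CCS performance by residency focus and years of training
    summary = (step3.groupby(["training_years", "residency_focus"])["ccs_residual"]
                     .mean()
                     .unstack("residency_focus"))
    summary.plot(marker="o")
    plt.axhline(0, color="gray")
    plt.ylabel("Mean CCS residual")
    plt.show()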


Subject(s)
Clinical Competence/standards; Decision Making, Computer-Assisted; Education, Medical, Graduate/standards; Educational Measurement/standards; Internship and Residency/standards; Licensure, Medical/standards; Canada; Education, Medical, Graduate/methods; Educational Measurement/methods; Female; Humans; Internship and Residency/methods; Male; Retrospective Studies; United States