Results 1 - 3 of 3
1.
Acad Med ; 86(9): 1148-54, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21785314

ABSTRACT

PURPOSE: Little is known about the acquisition of clinical reasoning skills in medical school, the development of clinical reasoning over the medical curriculum as a whole, and the impact of various curricular methodologies on these skills. This study investigated (1) whether there are differences in clinical reasoning skills between learners at different years of medical school, and (2) whether there are differences in performance between students at schools with various curricular methodologies.

METHOD: Students (n = 2,394) who had completed zero to three years of medical school at five U.S. medical schools participated in a cross-sectional study in 2008. Students took the same diagnostic pattern recognition (DPR) and clinical data interpretation (CDI) tests. Percent correct scores were used to determine performance differences. Data from all schools and students at all levels were aggregated for further analysis.

RESULTS: Student performance increased substantially as a result of each year of training. Gains in DPR and CDI performance during the third year of medical school were not as great as in previous years across the five schools. CDI performance and performance gains were lower than DPR performance and gains. Performance gains attributable to training at each of the participating medical schools were more similar than different.

CONCLUSIONS: Years of training accounted for most of the variation in DPR and CDI performance. As a rule, students at higher training levels performed better on both tests, though the expected larger gains during the third year of medical school did not materialize.
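The method above reduces to aggregating percent-correct scores by completed years of training and comparing year-over-year gains. A minimal sketch of that aggregation, with an assumed data layout and illustrative numbers rather than the study's dataset:

```python
# Minimal sketch of the cross-sectional comparison described above:
# aggregate percent-correct DPR/CDI scores by completed years of training.
# Column names and values are assumptions, not the authors' data.
import pandas as pd

scores = pd.DataFrame({
    "years_completed": [0, 0, 1, 1, 2, 2, 3, 3],   # 0-3 years of medical school
    "dpr_pct_correct": [42.0, 45.5, 55.0, 58.0, 66.0, 69.5, 71.0, 73.0],
    "cdi_pct_correct": [35.0, 37.0, 44.0, 46.5, 53.0, 55.0, 57.5, 58.0],
})

# Mean performance at each training level ...
by_year = scores.groupby("years_completed")[["dpr_pct_correct", "cdi_pct_correct"]].mean()

# ... and the year-over-year gain, which the study found shrinks in year 3.
gains = by_year.diff()
print(by_year)
print(gains)
```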


Subject(s)
Clinical Competence , Diagnostic Techniques and Procedures , Problem-Based Learning , Cross-Sectional Studies , Education, Medical, Undergraduate , Educational Measurement , Humans , Schools, Medical , Students, Medical , United States
2.
Teach Learn Med ; 14(4): 211-7, 2002.
Article in English | MEDLINE | ID: mdl-12395481

ABSTRACT

BACKGROUND: Problem-based learning (PBL) is being incorporated into more medical curricula, but its influence on subsequent clinical performance remains unclear.

PURPOSE: To determine whether PBL leads to better scores for fund of knowledge or clinical problem-solving skills in required clerkships taken early in the 3rd year at Penn State College of Medicine.

METHODS: Clinical clerkship subscores from the first 4 months of the 3rd year were collected for 6 class years of students who had completed 1 or 2 years in a PBL or traditional track. Clerkship scores were analyzed as individual clerkships and as the average across clerkships for each student. Statistical analysis included a comparison of clerkship scores between the 2 tracks using a 2-sample t test, and calculation of effect sizes. A multiple regression model was also employed to adjust for age, gender, race, preadmission grade point average, and Medical College Admission Test (MCAT) scores.

RESULTS: Mean scores of individual clerkships taken by problem-based or lecture-based students differed significantly in some clerkships, but the effect size was small. The effect sizes for fund of knowledge for the 6 clerkships ranged from 0.20 to 0.41; for clinical problem-solving skills, they ranged from 0.26 to 0.39. These differences between the problem-based and lecture-based students were of the same magnitude as the difference at the start of medical school on the MCAT, namely d = 0.31. There was a trend toward higher effect sizes in students having 2 rather than 1 year of PBL, and in later iterations of the track.

CONCLUSION: The PBL effect size on students' scores for fund of knowledge and clinical problem-solving skills was small to moderate in various years.
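The analysis described above pairs a 2-sample t test with an effect size (Cohen's d). A minimal sketch of both computations, using placeholder score arrays rather than the Penn State data:

```python
# Sketch of the track comparison: two-sample t test plus Cohen's d.
# The score arrays are illustrative placeholders, not the study's data.
import numpy as np
from scipy import stats

pbl     = np.array([78.0, 81.5, 74.0, 80.0, 83.0, 77.5])  # clerkship scores, PBL track
lecture = np.array([75.0, 79.0, 72.5, 77.0, 80.5, 74.0])  # clerkship scores, lecture track

# Classic equal-variance 2-sample t test
t_stat, p_value = stats.ttest_ind(pbl, lecture)

# Cohen's d: mean difference divided by the pooled standard deviation
n1, n2 = len(pbl), len(lecture)
pooled_sd = np.sqrt(((n1 - 1) * pbl.var(ddof=1) + (n2 - 1) * lecture.var(ddof=1)) / (n1 + n2 - 2))
d = (pbl.mean() - lecture.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")
```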


Subject(s)
Clinical Clerkship/standards , Curriculum , Problem-Based Learning , Professional Competence/statistics & numerical data , Clinical Clerkship/statistics & numerical data , Humans , Pennsylvania , Schools, Medical
3.
Article in English | MEDLINE | ID: mdl-11912333

ABSTRACT

Problem-based learning (PBL) is widely used in medical education. In some cases, facilitators assign a grade to reflect a student's performance in small-group sessions. In our PBL track, facilitators were asked to assess student knowledge base independent of their group participatory skills. To determine whether facilitators' grades were correlated with student performance on written exams, a retrospective study of data from our PBL track was undertaken. Data from 156 students and 107 facilitators in six years of a PBL track at Penn State College of Medicine were analyzed by Pearson correlation after pairing facilitator grades with written exam grades for each of the eight blocks of the curriculum. Exam reliability and validity were assessed by Cronbach's alpha and correlation with USMLE Step 1 board scores. The mean alpha was 0.549 ± 0.221. The mean correlation with USMLE scores was 0.558 ± 0.151. Facilitators' scores for knowledge were positively associated with students' exam grades, with significant Pearson correlation coefficients ranging from 0.342 to 0.622. However, the coefficients of determination showed that the knowledge scores explained only 12% to 39% of the variance in exam scores. Overestimation by facilitators was significantly (p < 0.0001) greater for students in the bottom 25% of the class by exam score than for students in the top 25%. On the basis of this study, we concluded that facilitator assessment of student knowledge base is not useful.
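The measures named in this abstract are standard: Cronbach's alpha for exam reliability, Pearson's r for the facilitator-exam association, and r squared as the coefficient of determination. A minimal sketch of all three on hypothetical data, not the study's:

```python
# Sketch of the three measures named above, on hypothetical data:
# Cronbach's alpha (exam reliability), Pearson r (facilitator vs. exam),
# and r**2 (coefficient of determination).
import numpy as np
from scipy import stats

# Item-response matrix: students in rows, exam items in columns (0/1 scoring assumed)
items = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
], dtype=float)
k = items.shape[1]
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Pearson correlation between facilitator knowledge grades and exam grades
facilitator = np.array([85.0, 90.0, 78.0, 88.0, 92.0])
exam        = np.array([80.0, 86.0, 70.0, 84.0, 89.0])
r, p = stats.pearsonr(facilitator, exam)
r_squared = r ** 2  # share of exam-score variance explained by facilitator grades

print(f"alpha = {alpha:.3f}, r = {r:.3f} (p = {p:.4f}), r^2 = {r_squared:.2f}")
```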


Subject(s)
Education, Medical, Undergraduate , Educational Measurement/methods , Problem-Based Learning , Curriculum , Humans , Reproducibility of Results , Retrospective Studies