Results 1 - 20 of 61
1.
Perspect Med Educ ; 8(5): 298-304, 2019 10.
Article in English | MEDLINE | ID: mdl-31562635

ABSTRACT

INTRODUCTION: A perennial difficulty for remediation programmes in medical school is the early identification of struggling learners so that resources and assistance can be applied as quickly as is practical. Our study investigated whether early academic performance has predictive validity above and beyond pre-matriculation variables. METHODS: Using three cohorts of medical students, we used logistic regression modelling and negative binomial regression modelling to assess the strength of the relationships between measures of early academic performance and two outcomes: later referral to the academic review and performance committee, and total module score. RESULTS: We found that performance on National Board of Medical Examiners (NBME) exams at approximately 5 months into the pre-clerkship curriculum was predictive of any referral, as well as the total number of referrals, to an academic review and performance committee during the medical school (MS)1, MS2, MS3 and/or MS4 years. DISCUSSION: NBME exams early in the curriculum may be an additional tool for the early identification of struggling learners.


Subject(s)
Learning Disabilities/therapy , Students, Medical/statistics & numerical data , Adult , Cohort Studies , Education, Medical, Undergraduate , Educational Measurement/methods , Female , Humans , Learning Disabilities/psychology , Logistic Models , Male , Risk Assessment/methods , Risk Assessment/standards
2.
Mil Med ; 183(11-12): e680-e684, 2018 11 01.
Article in English | MEDLINE | ID: mdl-29718290

ABSTRACT

Introduction: This is an empirical study to better understand commonly used medical school admission measures and disenrollment decisions during undergraduate medical education, as well as graduate medical education (GME) probation or termination decisions. Materials and Methods: Based on data from USUHS medical students matriculating between 1998 and 2011 (N = 2,460), we compared medical school graduates and those disenrolled from medical school on MCAT scores, undergraduate BCPM (Biology, Chemistry, Physics, Math) GPA, and undergraduate overall GPA. We also reported more specific reasons for disenrollment decisions. Next, we compared the students who were referred to the student promotion committee (SPC) with other students on these measures. Moving on to GME, we compared trainees who were put on probation or terminated from training with those who were not, on MCAT and undergraduate GPA measures. In addition, we examined the association between being referred to the SPC and GME probation or termination. Results: There were 2,347 graduates and 113 disenrolled students from medical school (4.8%). Of the disenrolled students, 43 (38.7%) were disenrolled for exclusively (or primarily) non-academic reasons, and 68 (61.3%) were disenrolled for exclusively (or primarily) academic reasons. The t-tests showed statistically significant differences on the MCAT score of the first attempt (t(2,449) = 7.22, P < 0.01, Cohen's d = 0.70), average MCAT score (t(2,449) = 4.22, P < 0.01, Cohen's d = 0.41), and highest MCAT score (t(2,449) = 3.51, P < 0.01, Cohen's d = 0.34). Logistic regression model selection also revealed that the best predictor of disenrollment was the first MCAT score (exp(b) = 0.83, 95% CI = (0.78, 0.88)). No significant differences on these measures were found in the group comparisons on SPC referral and GME probation or termination. There was no significant association between SPC appearance and GME probation or termination.
Conclusions: Academic difficulties, especially in the basic sciences, appear to be the most common factor in disenrollment from medical school. These students also had lower MCAT scores, particularly on the first attempt. The MCAT performance indicators and undergraduate GPA were consistently lower, although not statistically significantly so, for those who appeared before the SPC or were put on probation or terminated from training during GME.
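The abstract above reports group differences as Cohen's d effect sizes. As a minimal sketch of how that statistic is computed, the following pure-Python function uses the pooled-standard-deviation form; the score lists are hypothetical illustrations, not data from the study:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled
    standard deviation (equal-variance form)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (ddof = 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical first-attempt MCAT scores for graduates vs. disenrolled
# students (illustrative values only)
graduates = [30, 32, 29, 31, 33, 30]
disenrolled = [27, 28, 26, 29, 27, 28]
print(round(cohens_d(graduates, disenrolled), 2))
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is how the study's d = 0.70 for first-attempt MCAT reads as a moderate-to-large difference.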


Subject(s)
Education, Medical, Graduate/methods , Education, Medical, Undergraduate/methods , Personnel Selection/methods , Students, Medical/statistics & numerical data , Adult , Cohort Studies , College Admission Test/statistics & numerical data , Education, Medical, Graduate/statistics & numerical data , Education, Medical, Undergraduate/statistics & numerical data , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Female , Humans , Logistic Models , Longitudinal Studies , Male , Personnel Selection/statistics & numerical data , School Admission Criteria/statistics & numerical data , Schools, Medical/organization & administration , Schools, Medical/statistics & numerical data
3.
Mil Med ; 180(4 Suppl): 1-3, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850119

ABSTRACT

In 1980, the Uniformed Services University of the Health Sciences (USU) graduated its first class of medical students. As a national university intended to produce "career-committed" military officers and future leaders of the Military Health System, USU functions as the service academy for military medicine and public health. More than 40 years after the school's charter, and with more than 5,000 graduates since that first class, we describe the original purpose of USU and provide an update on its achievements. In particular, we address the question of the "staying power" of the University's alumni: the degree to which graduation from the nation's military medical school is associated with long years of devoted service to military medicine. At a time when the MHS is confronting the challenges of extended deployments, rising health care costs, and a growing array of threats to our nation's health, we suggest that America needs USU now more than ever.


Subject(s)
Military Medicine/history , Schools, Medical/history , History, 20th Century , History, 21st Century , Humans , Military Medicine/education , Military Medicine/statistics & numerical data , Schools, Medical/statistics & numerical data , United States
4.
Mil Med ; 180(4 Suppl): 4-11, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850120

ABSTRACT

BACKGROUND: The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice-based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study looked at associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. METHODS: This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 clinical knowledge and clinical skill, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. RESULTS: MCAT data were available for all students, and the PGY PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 clinical knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 clinical skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. DISCUSSION: MCAT scores were weakly to moderately associated with assessments that rely on multiple-choice testing. The association is somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT was not able to predict assessments relying on direct clinical observation, nor was it able to predict PD assessment of PGY-1 performance.


Subject(s)
Clinical Competence/statistics & numerical data , College Admission Test/statistics & numerical data , Forecasting , Schools, Medical , Students, Medical/statistics & numerical data , Achievement , Adult , Female , Humans , Male , United States
5.
Mil Med ; 180(4 Suppl): 18-23, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850122

ABSTRACT

PURPOSE: To determine if there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school, and with scores on standardized tests, whether before or during medical school. METHODS: Achieving board certification in an American Board of Medical Specialties specialty was used as our outcome measure for a 7-year cohort of graduates (1995-2002). Age at matriculation, Medical College Admissions Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, and United States Medical Licensing Examination (USMLE) Step 1 score and USMLE Step 2 clinical knowledge score were associated with this outcome. RESULTS: Ninety-three of 1,155 graduates were not certified, resulting in an average rate of board certification of 91.9% for the study cohort. Significant small correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = -0.061). 
In comparing the two groups (board-certified and not board-certified cohorts), significant differences were seen for all included variables with the exception of MCAT and USMLE Step 2 clinical knowledge scores. Taken together, the variables explained 4.1% of the variance in board certification by logistic regression. CONCLUSIONS: This investigation provides some additional validity evidence supporting the measures collected for purposes of student evaluation before and during medical school.


Subject(s)
Achievement , Educational Measurement/statistics & numerical data , Licensure, Medical/statistics & numerical data , Schools, Medical , Students, Medical/statistics & numerical data , Adult , Clinical Clerkship/statistics & numerical data , Clinical Competence , Cohort Studies , Educational Measurement/methods , Female , Humans , Male , Specialty Boards , United States
6.
Mil Med ; 180(4 Suppl): 104-8, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850136

ABSTRACT

BACKGROUND: Using a previously developed postgraduate year (PGY)-1 program director's evaluation survey, we developed a parallel form to assess more senior residents (PGY-3). The PGY-3 survey, which aligns with the core competencies established by the Accreditation Council for Graduate Medical Education, also includes items that reflect our institution's military-unique context. PURPOSE: To collect feasibility, reliability, and validity evidence for the new PGY-3 evaluation. METHODS: We collected PGY-3 data from program directors who oversee the education of military residents. The current study's cohort consisted of Uniformed Services University of the Health Sciences students graduating in 2008, 2009, and 2010. We performed exploratory factor analysis (EFA) to examine the internal structure of the survey and subjected each of the factors identified in the EFA to an internal consistency reliability analysis. We then performed correlation analysis to examine the relationships between PGY-3 ratings and several outcomes: PGY-1 ratings, cumulative medical school grade point average (GPA), and performance on U.S. Medical Licensing Examinations (USMLE) Step 1, Step 2 Clinical Knowledge, and Step 3. RESULTS: Of the 510 surveys we distributed, 388 (76%) were returned. Results from the EFA suggested four factors: "Medical Expertise," "Professionalism," "Military-unique Practice," and "Systems-based Practice." Scores on these four factors showed good internal consistency reliability, as measured by Cronbach's α (α ranged from 0.92 to 0.98). Further, as expected, "Medical Expertise" and "Professionalism" had small to moderate correlations with cumulative medical school GPA and performance on the USMLE Step examinations. CONCLUSIONS: The new program director's evaluation survey instrument developed in this study appears to be feasible, and the scores that emerged have reasonable evidence of reliability and validity in a sample of third-year residents.


Subject(s)
Clinical Competence , Education, Medical, Graduate/standards , Educational Measurement/methods , Internship and Residency/statistics & numerical data , Surveys and Questionnaires/standards , Adult , Education, Medical, Graduate/methods , Factor Analysis, Statistical , Faculty, Medical , Feasibility Studies , Female , Humans , Licensure, Medical/statistics & numerical data , Male , Military Medicine/education , Reproducibility of Results , Schools, Medical , United States
7.
Mil Med ; 180(4 Suppl): 109-12, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850137

ABSTRACT

PURPOSE: To report accomplishments of graduates of the F. Edward Hébert School of Medicine who have left, retired, or are near the end of their uniformed career in several professional domains: military career milestones, medical professional education, academic landmarks, and leadership. METHODS: This study utilized an earlier questionnaire that was modified to capture additional career landmarks and improve the clarity of several items. The modified survey was sent electronically in March 2012 to alumni who graduated between 1980 and 2001. RESULTS: The questionnaire was sent to 2,825 alumni for whom we had e-mail addresses. We estimate that we reached 2,400 alumni. A total of 1,189 alumni returned the questionnaire, yielding an estimated response rate of 50%. For this cohort, the board certification rate was 95%, over 20% obtained additional degrees, 92.8% had worked as a full-time physician, nearly two-thirds had deployed for combat, 13.9% had received the Legion of Merit, and 68.6% had published at least one peer-reviewed manuscript. CONCLUSION: Many accomplishments including board certification rates, deployment experience, academic and military leadership positions, military awards, promotion rates, and academic medicine contributions are indicators that USU is continuing to meet its unique mission.


Subject(s)
Achievement , Clinical Competence , Employment/statistics & numerical data , Military Medicine/education , Students, Medical/psychology , Adult , Education, Medical/statistics & numerical data , Employment/psychology , Female , Humans , Leadership , Male , Schools, Medical/statistics & numerical data , Surveys and Questionnaires , United States
8.
Mil Med ; 180(4 Suppl): 113-28, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850138

ABSTRACT

PURPOSE: This study assessed alumni perceptions of their preparedness for clinical practice using the Accreditation Council for Graduate Medical Education (ACGME) competencies. We hypothesized that our alumni's perception of preparedness would be highest for military-unique practice and professionalism and lowest for system-based practice and practice-based learning and improvement. METHOD: 1,189 alumni who graduated from the Uniformed Services University (USU) between 1980 and 2001 completed a survey designed to assess the ACGME competencies on a 5-point, Likert-type scale. Specifically, self-reports of competencies related to patient care, communication and interpersonal skills, medical knowledge, professionalism, systems-based practice, practice-based learning and improvement, and military-unique practice were evaluated. RESULTS: Consistent with our expectations as the nation's military medical school, our graduates were most confident in their preparedness for military-unique practice, which included items assessing military leadership (M = 4.30, SD = 0.65). USU graduates also indicated being well prepared for the challenges of residency education in the domain of professionalism (M = 4.02, SD = 0.72). Self-reports were also high for competencies related to patient care (M = 3.86, SD = 0.68), communication and interpersonal skills (M = 3.88, SD = 0.66), and medical knowledge (M = 3.78, SD = 0.73). Consistent with expectations, systems-based practice (M = 3.50, SD = 0.70) and practice-based learning and improvement (M = 3.57, SD = 0.62) were the lowest rated competencies, although self-reported preparedness was still quite high. DISCUSSION: Our findings suggest that, from the perspective of our graduates, USU is providing both an effective military-unique curriculum and is preparing trainees for residency training. Further, these results support the notion that graduates are prepared to lead and to practice medicine in austere environments.
Compared to other competencies that were assessed, self-ratings for systems-based practice and practice-based learning and improvement were the lowest, which suggests the need to continue to improve USU education in these areas.


Subject(s)
Clinical Competence , Curriculum/standards , Education, Medical, Graduate/standards , Military Medicine/education , Students, Medical/psychology , Adult , Communication , Female , Humans , Internship and Residency/standards , Leadership , Male , Patient Care/standards , Problem-Based Learning/standards , Professionalism/education , Schools, Medical , Surveys and Questionnaires , United States
9.
Mil Med ; 180(4 Suppl): 164-70, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850148

ABSTRACT

The work of the Long-Term Career Outcome Study has been a program of scholarship spanning 10 years. Borrowing from established quality assurance literature, the Long-Term Career Outcome Study team has organized its scholarship into three phases: before medical school, during medical school, and after medical school. The purpose of this commentary is to address two fundamental questions: (1) what has been learned? and (2) how does this knowledge translate to educational practice and policy now and into the future? We believe that answers to these questions are relevant not only to our institution but also to other educational institutions seeking to provide high-quality health professions education.


Subject(s)
Achievement , Education, Medical , Educational Measurement , Employment/trends , Clinical Clerkship/trends , Humans , Internship and Residency/trends , Models, Educational , School Admission Criteria/trends , United States
10.
Teach Learn Med ; 26(4): 379-86, 2014.
Article in English | MEDLINE | ID: mdl-25318034

ABSTRACT

BACKGROUND: Recently, there has been a surge in the use of objective structured clinical examinations (OSCEs) at medical schools around the world, and with this growth has come the concomitant need to validate such assessments. PURPOSES: The current study examined the associations between student performance on several school-level clinical skills and knowledge assessments, including two OSCEs, the National Board of Medical Examiners® (NBME) Subject Examinations, and the United States Medical Licensing Examination® (USMLE) Step 2 Clinical Skills (CS) and Step 3 assessments. METHODS: The sample consisted of 806 medical students from the Uniformed Services University of the Health Sciences. We conducted Pearson correlation analysis as well as stepwise multiple linear regression modeling to examine the strength of associations between students' performance on 2nd- and 3rd-year OSCEs and their two Step 2 CS component scores and Step 3 scores. RESULTS: Positive associations were found between the OSCE variables and the USMLE scores; in particular, student performance on both the 2nd- and 3rd-year OSCEs was more strongly associated with the two Step 2 CS component scores than with Step 3 scores. CONCLUSIONS: These findings, although preliminary, provide some predictive validity evidence for the use of OSCEs in determining readiness of medical students for clinical practice and licensure.
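The abstract above rests on Pearson correlation analysis between paired assessment scores. As a minimal sketch of the statistic itself, the following pure-Python function computes r from two equal-length lists; the paired scores are hypothetical illustrations, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Sum of cross-products of deviations, and the two deviation norms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores: OSCE percentage vs. a Step 2 CS component
osce = [72, 80, 65, 90, 78]
step2cs = [75, 82, 70, 88, 80]
print(round(pearson_r(osce, step2cs), 3))
```

The same quantity underlies the stepwise regression modeling the study describes; r near ±1 indicates a strong linear association, r near 0 a weak one.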


Subject(s)
Clinical Competence , Education, Medical, Undergraduate/standards , Educational Measurement/methods , Female , Humans , Male , United States , Young Adult
11.
Acad Med ; 89(10): 1408-15, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25054420

ABSTRACT

PURPOSE: To study medical students' letters of recommendation (LORs) from their applications to medical school to determine whether these predicted medical school performance, because many researchers have questioned LORs' predictive validity. METHOD: A retrospective cohort study of three consecutive graduating classes (2007-2009) at the Uniformed Services University of the Health Sciences was performed. In each class, the 27 students who had been elected into the Alpha Omega Alpha (AOA) Honor Medical Society were defined as top graduates, and the 27 students with the lowest cumulative grade point average (GPA) were designated as "bottom of the class" graduates. For each student, the first three LORs (if available) in the application packet were independently coded by two blinded investigators using a comprehensive list of 76 characteristics. Each characteristic was compared with graduation status (top or bottom of the class), and those with statistical significance related to graduation status were inserted into a logistic regression model, with undergraduate GPA and Medical College Admission Test score included as control variables. RESULTS: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as "the best" among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students. CONCLUSIONS: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.


Subject(s)
Correspondence as Topic , Educational Measurement , School Admission Criteria , Schools, Medical , Cohort Studies , Forecasting , Humans , Logistic Models , Maryland , Retrospective Studies , Societies
12.
Acad Med ; 89(5): 762-6, 2014 May.
Article in English | MEDLINE | ID: mdl-24667514

ABSTRACT

PURPOSE: To investigate the association between poor performance on National Board of Medical Examiners clinical subject examinations across six core clerkships and performance on the United States Medical Licensing Examination Step 3 examination. METHOD: In 2012, the authors studied matriculants from the Uniformed Services University of the Health Sciences with available Step 3 scores and subject exam scores on all six clerkships (Classes of 2007-2011, N = 654). Poor performance on subject exams was defined as scoring one standard deviation (SD) or more below the mean using the national norms of the corresponding test year. The association between poor performance on the subject exams and the probability of passing or failing Step 3 was tested using contingency table analyses and logistic regression modeling. RESULTS: Students performing poorly on one subject exam were significantly more likely to fail Step 3 (OR 14.23 [95% CI 1.7-119.3]) compared with students with no subject exam scores that were 1 SD below the mean. Poor performance on more than one subject exam further increased the chances of failing (OR 33.41 [95% CI 4.4-254.2]). This latter group represented 27% of the entire cohort, yet contained 70% of the students who failed Step 3. CONCLUSIONS: These findings suggest that individual schools could benefit from a review of subject exam performance to develop and validate their own criteria for identifying students at risk for failing Step 3.
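The abstract above reports odds ratios with 95% confidence intervals from contingency table analyses. A minimal sketch of how an odds ratio and its Wald confidence interval are derived from a 2x2 table follows; the counts here are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & event, b = exposed & no event,
    c = unexposed & event, d = unexposed & no event."""
    or_value = (a * d) / (b * c)
    # Standard error of log(OR)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_value) - z * se)
    hi = math.exp(math.log(or_value) + z * se)
    return or_value, lo, hi

# Hypothetical counts: students scoring >= 1 SD below the mean on a
# subject exam (exposed) vs. not, cross-tabulated with Step 3 failure
or_value, lo, hi = odds_ratio_ci(a=10, b=90, c=5, d=95)
print(round(or_value, 2), round(lo, 2), round(hi, 2))
```

An interval whose lower bound stays above 1 (as in the study's OR 14.23, 95% CI 1.7-119.3) indicates a statistically significant elevation in the odds of failing.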


Subject(s)
Clinical Clerkship , Education, Medical, Undergraduate/standards , Educational Measurement , Licensure, Medical , Confidence Intervals , Female , Humans , Logistic Models , Male , Needs Assessment , Odds Ratio , United States , Young Adult
13.
Teach Learn Med ; 25(1): 55-8, 2013.
Article in English | MEDLINE | ID: mdl-23330895

ABSTRACT

BACKGROUND: There is a paucity of research on whether application essays are a valid indicator of medical students' future performance. PURPOSE: To score medical school application essays systematically and examine the correlations between these essay scores and several indicators of student performance during medical school and internship. METHODS: A journalist created a scoring rubric based on the journalism literature and scored 2 required essays of students admitted to our university in 1 year (N = 145). We picked 7 indicators of medical school and internship performance and correlated these measures with overall essay scores: preclinical medical school grade point average (GPA), clinical medical school GPA, cumulative medical school GPA, U.S. Medical Licensing Exam (USMLE) Step 1 and 2 scores, and scores on a program director's evaluation measuring intern professionalism and expertise. We then examined the Pearson and Spearman correlations between essay scores and the outcomes. RESULTS: Essay scores did not vary widely. American Medical College Application Service essay scores ranged from 3.3 to 4.5 (M = 4.11, SD = 0.15), and Uniformed Services University of the Health Sciences essay scores ranged from 2.9 to 4.5 (M = 4.09, SD = 0.17). None of the medical school or internship performance indicators was significantly correlated with the essay scores. CONCLUSIONS: These findings raise questions about the utility of matriculation essays, a resource-intensive admission requirement.


Subject(s)
Aptitude , Educational Measurement/methods , School Admission Criteria , Schools, Medical , Students, Medical , Writing , Education, Medical, Undergraduate , Forecasting , Humans , Statistics, Nonparametric
14.
Mil Med ; 177(9 Suppl): 7-10, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029853

ABSTRACT

BACKGROUND: Medical schools are increasing class size to meet future health care needs for our nation. This may lead to more students being accepted from an alternate list (vs. primary acceptances). Given these trends, performance outcomes were compared for alternate list matriculants and primary acceptances. Our hypothesis was that those students accepted from an alternate list would perform equally to the primary acceptances on these outcomes. METHOD: Over a 10-year period, we compared the medical school performance of students who received a primary recommendation of "accept" with that of students who received a recommendation of "alternate." Given the small sample size of this alternate list group (N = 23), descriptive statistics are reported. RESULTS: No consistent differences between alternate and primary acceptance matriculants in terms of cumulative medical school grade point average, United States Medical Licensing Examination (USMLE) Step 1 scores and USMLE Step 2 Clinical Knowledge scores were found. Only three alternates (13.0%) were presented to the student promotion committee compared to 17.2% of matriculants who were primary acceptances. Three alternates were required to repeat a year (average percentage of 8.7%) compared to 5.6% of matriculants who were primary acceptances. CONCLUSIONS: This observational study provides some reassurance that as long as the qualifications of the applicant pool remain adequate, admissions policies that provide for alternate list acceptances may not produce poorer performing students, at least by our current outcome measures.


Subject(s)
Educational Measurement , School Admission Criteria , Students, Medical , Adult , Educational Measurement/statistics & numerical data , Humans , Military Medicine , Personnel Selection , School Admission Criteria/statistics & numerical data , Schools, Medical , Students, Medical/statistics & numerical data , United States , Young Adult
15.
Mil Med ; 177(9 Suppl): 11-5, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029854

ABSTRACT

PURPOSE: To investigate the relationship between self-reported research experience and medical students' performance in medical school and internship. METHODS: We collected data from seven year-groups (1993-1999; N = 1,112) and examined 7 performance outcomes: medical school preclinical grade point average (GPA), medical school clinical GPA, cumulative medical school GPA, U.S. Medical Licensing Examination Step 1 and 2 scores, and scores on a previously validated program director's survey of intern professionalism and expertise. We then conducted a series of multiple linear regressions to determine the relations between self-reported research experience and our seven outcomes. RESULTS: When compared to those who reported no prior research experience, students who reported research experience performed significantly better on U.S. Medical Licensing Examination Step 1 and had a higher medical school preclinical GPA. However, these same students scored significantly lower on intern professionalism and expertise ratings. Self-reported research experience did not show statistically significant correlations with the other outcome variables. CONCLUSIONS: The results from our large, multiyear, cohort study suggest that prior research experience may account for some variance in outcomes in the early stages of medical school education, but that variance explained diminishes considerably as trainees progress into the more senior phases of education. On the other hand, prior research experience may be negatively related to students' performance in internship. In all cases, however, effect sizes are small.


Subject(s)
Biomedical Research , Internship and Residency , School Admission Criteria , Students, Medical , Adult , Female , Humans , Male , Schools, Medical , Self Report , Young Adult
16.
Mil Med ; 177(9 Suppl): 3-6, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029852

ABSTRACT

In 2005, the Long-Term Career Outcome Study (LTCOS) was established by the Dean, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences (USU). The original charge to the LTCOS team was to establish an electronic database of current and past students at USU. Since its inception, however, the LTCOS team has broadened its mission and started collecting and analyzing data on a continuous basis for the purposes of program evaluation and, in some cases, research. The purpose of this commentary is to review the history of the LTCOS, including details about USU, a brief review of prior LTCOS work, and progress made since our last essay on LTCOS efforts. This commentary also provides an introduction to the special issue, which is arranged as a series of articles that span the medical education continuum (i.e., before, during, and after medical school). The relative balance of articles in each phase of training represents the LTCOS team's efforts to address the entire continuum of medical education.


Subject(s)
Career Choice , Education, Medical , Military Medicine , Schools, Medical , Adult , Education, Medical/organization & administration , Education, Medical/standards , Humans , Military Personnel , Organizational Objectives , Program Development , Program Evaluation , Schools, Medical/organization & administration , Students, Medical , United States , Young Adult
17.
Mil Med ; 177(9 Suppl): 16-20, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029855

ABSTRACT

BACKGROUND: Admissions committees attempt to select the most qualified applicants based on many cognitive and "noncognitive" factors. PURPOSE: To identify common themes cited in the admissions committee member summaries of medical school matriculants and to determine the relative frequency and importance of these themes. METHODS: After reviewing a convenience sample of 150 reviewer comments, 14 qualitative themes were identified. Utterances (thematic word strings) from each of the three reviewer comments for each matriculant for 7 academic years (1989-1996) were then categorized and coded as being positive, negative, or neutral. Intra-rater and inter-rater reliabilities were calculated. RESULTS: Utterances (n = 9,299) about 981 matriculants were categorized by theme and sorted as being positive, neutral, or negative. Intra-rater reliabilities were excellent (mean K = 0.98, range 0.90-1.00). Inter-rater reliabilities were similarly excellent (mean K = 0.94, range 0.55-1.00 and mean K = 0.90, range 0.08-1.00). Four themes (overall summarizing comments, academic, test scores, and motivation) accounted for more than half (56%) of the utterances. CONCLUSIONS: We were able to qualitatively identify themes and provide information about how one committee weighs both cognitive and "noncognitive" factors. Admission committees should consider reexamining their process and potentially expanding, eliminating, or modifying application components.


Subject(s)
School Admission Criteria , Schools, Medical , Adult , Decision Making , Education, Medical, Undergraduate , Humans , School Admission Criteria/statistics & numerical data , Students, Medical , Young Adult
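The intra- and inter-rater reliabilities reported above (mean K of roughly 0.90-0.98) are chance-corrected agreement statistics of the Cohen's kappa family. A minimal illustrative sketch, assuming two raters assigning nominal theme codes to the same utterances (the function name and toy codes are our own, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters coding the same items."""
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields kappa = 1.0, while agreement no better than chance yields 0.0, which is why values near 0.9 are conventionally read as excellent.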
18.
Mil Med ; 177(9 Suppl): 21-5, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029856

ABSTRACT

PURPOSE: To investigate the association between tertiary reviewer (admissions committee member) comments and medical students' performance during medical school and into internship. METHODS: We collected data from seven year-groups (1993-1999) and coded tertiary reviewer comments into 14 themes. We then conducted an exploratory factor analysis to reduce the dimensions of the themes (excluding the Overall impression theme). Subsequently, we performed Pearson correlation analyses and multiple linear regression analysis to examine the relationship between the factors and seven outcome measures: medical school preclinical grade point average (GPA), medical school clinical GPA, cumulative medical school GPA, U.S. Medical Licensing Examination Step 1 and 2 scores, and scores on a program director's evaluation measuring intern professionalism and expertise. RESULTS: We extracted seven factors from the 13 themes and found small-to-moderate, significant correlations between the factors, the Overall impression theme, and the outcome measures. In particular, positive comments on Test and Maturity were associated with higher U.S. Medical Licensing Examination Step 1 and 2 scores. Negative comments on Interview and Recommendations were associated with lower ratings of professionalism during internship. Comments on Overall impression were significantly associated with all the outcome measures. CONCLUSIONS: Tertiary reviewer comments were weakly associated with performance in medical school and internship. Compared with positive comments, negative comments had stronger associations with medical school and internship performance measures.


Subject(s)
School Admission Criteria , Students, Medical , Adult , Clinical Competence , Humans , Internship and Residency , Principal Component Analysis , Schools, Medical , Students, Medical/statistics & numerical data , Young Adult
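The Pearson correlation analyses described above follow the standard product-moment formula relating the covariance of two variables to their standard deviations. A self-contained sketch on toy data (no values from the study):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values range from -1.0 (perfect negative association) to 1.0 (perfect positive association); the "small-to-moderate" correlations reported above would fall well inside that interval.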
19.
Mil Med ; 177(9 Suppl): 31-7, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029858

ABSTRACT

Self-efficacy is a personal belief in one's capability to successfully execute the behaviors necessary to attain designated types of performances. Sometimes described as task-specific self-confidence, self-efficacy is a key component in many contemporary theories of motivation and learning. The purpose of this study was to develop a survey for measuring students' medical skills self-efficacy and to collect reliability and validity evidence for the instrument. A secondary purpose was to explore differences in students' self-efficacy from year 1 of medical school to year 4. We created 19 survey items based on the 6 core competencies of the Accreditation Council for Graduate Medical Education, and we collected data from 304 medical students. Results from an exploratory factor analysis suggested three interpretable factors: patient care self-efficacy (eight items, Cronbach's alpha = 0.92), interpersonal skills self-efficacy (three items, Cronbach's alpha = 0.76), and evidence-based medicine self-efficacy (three items, Cronbach's alpha = 0.79). We then compared students' self-efficacy at different stages of training using a one-way multivariate analysis of variance. Consistent with our expectations, we found several statistically significant differences, suggesting students' self-efficacy increased considerably from year 1 of medical school to year 4, F(9, 725) = 30.58, p < 0.001, Wilks' lambda = 0.46. Using this survey, medical educators and researchers have a psychometrically sound tool for measuring students' medical skills self-efficacy during undergraduate medical education. Practical implications and future directions are discussed.


Subject(s)
Self Efficacy , Students, Medical/psychology , Adult , Factor Analysis, Statistical , Female , Humans , Male , Multivariate Analysis , Psychometrics , Surveys and Questionnaires , Young Adult
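The internal-consistency figures above (Cronbach's alpha of 0.76-0.92 for the three factors) come from the standard formula relating the sum of item variances to the variance of respondents' total scores. A minimal sketch with toy data (the function name and inputs are illustrative, not from the survey):

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list per survey item, each holding all respondents' scores."""
    k = len(item_scores)
    item_vars = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent sums
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))
```

When items move together the total-score variance dominates the summed item variances and alpha approaches 1.0; values around 0.8-0.9, as reported above, indicate that the items on each factor measure a common construct.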
20.
Mil Med ; 177(9 Suppl): 61-7, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23029864

ABSTRACT

The Uniformed Services University of the Health Sciences (USU) houses the nation's only federal medical school, the F. Edward Hébert School of Medicine. A key aspect of the curriculum at USU is leadership education, as graduates go on to serve the Department of Defense through a variety of senior positions in the military. We surveyed a specific group of USU graduates who have achieved the rank of General or Admiral ("flag officers") to enhance our understanding of successful leadership for military physicians and to gain an understanding of how USU might shape its curriculum in the future. METHODS: We sent an Internet-based survey to 13 flag officer graduates. The first section of the survey contained items from the multifactor leadership questionnaire-6S, a questionnaire with evidence of reliability and validity for evaluating leadership styles. The second section of the survey contained open-ended questions addressing key characteristics of an effective leader in the Military Health System, experiences that prepared them for leadership, USU's role in leadership positions, and advice for USU for better educating future leaders. The second section of the survey was coded using the constant comparative method. RESULTS: Eight flag officers (63%) responded to the survey. They all scored highly on transformational leadership style. Qualitative themes reached saturation for each open-ended question. The flag officers identified characteristics consistent with published literature from other fields regarding effective leadership. They endorsed USU's role in achieving their leadership positions and suggested areas for improvement. CONCLUSIONS: Characteristics of effective leadership (transformational leadership style) identified by the flag officers surveyed in this study are consistent with the literature from other fields. These findings have important implications for leadership education at USU and potentially other institutions.
The results also provide additional data to support the notion that USU is meeting its societal obligation to educate future leaders in military medicine.


Subject(s)
Leadership , Military Personnel/psychology , Schools, Medical , Adult , Curriculum , Humans , Military Medicine/organization & administration , United States