1.
BMC Med Educ ; 24(1): 339, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532412

ABSTRACT

BACKGROUND: Computer-based assessment for sampling personal characteristics (Casper), an online situational judgement test, is a broad measure of personal and professional qualities. We examined the impact of Casper in the residency selection process on professionalism concerns, learning interventions and resource utilization at an institution. METHODS: In 2022, admissions data and information in the files of residents in difficulty (over three years pre- and post-Casper implementation) were used to determine the number of residents in difficulty, CanMEDS roles requiring a learning intervention, types of learning interventions (informal learning plans vs. formal remediation or probation), and impact on the utilization of institutional resources (costs and time). Professionalism concerns were mapped to the 4I domains of a professionalism framework, and their severity was categorized as mild, moderate, or major. Descriptive statistics and between-group comparisons were used for quantitative data. RESULTS: In the pre- and post-Casper cohorts, the number of residents in difficulty (16 vs. 15) and the number of learning interventions (18 vs. 16) were similar. Professionalism concerns as an outcome measure decreased by 35%, from 12/16 to 6/15 (p < 0.05), and were reduced in all 4I domains (involvement, integrity, interaction, introspection) and in their severity. Formal learning interventions (15 vs. 5) and informal learning plans (3 vs. 11) differed significantly between the pre- and post-Casper cohorts (p < 0.05). This reduction in formal learning interventions was associated with a 96% reduction in costs (from hundreds of thousands to tens of thousands of dollars) and a reduction in time for learning interventions (from years to months). CONCLUSIONS: Justifiable from multiple stakeholder perspectives, use of an SJT (Casper) improves a clinical performance measure (professionalism concerns) and permits the institution to redirect its limited resources (cost savings and time) to enhance institutional endeavors and improve learner well-being and the quality of programs.
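The between-group comparisons mentioned in the METHODS are, for outcomes like professionalism concerns (12/16 pre-Casper vs. 6/15 post-Casper), comparisons of counts in a 2 x 2 table. The abstract does not name the specific test used, so the sketch below only illustrates one conventional choice, a chi-square comparison of proportions without continuity correction.

```python
# Illustrative only: the abstract does not state which test the authors used.
from scipy.stats import chi2_contingency

# Rows: pre-Casper, post-Casper cohorts; columns: professionalism concerns, none.
table = [[12, 4],   # 12 of 16 residents in difficulty had professionalism concerns
         [6, 9]]    # 6 of 15 residents in difficulty had professionalism concerns

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```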


Subject(s)
Internship and Residency , Humans , Judgment , Learning , Professionalism , Outcome Assessment, Health Care
3.
Int J Radiat Oncol Biol Phys ; 107(5): 943-948, 2020 08 01.
Article in English | MEDLINE | ID: mdl-32334033

ABSTRACT

PURPOSE: To assess the acute toxicity and quality of life (QOL) of hypofractionation compared with conventional fractionation for whole breast irradiation (WBI) after breast-conserving surgery. METHODS AND MATERIALS: Women with node-negative breast cancer who had undergone breast-conserving surgery with clear margins were randomly assigned to conventional WBI of 5000 cGy in 25 fractions over 35 days or hypofractionated WBI of 4256 cGy in 16 fractions over 22 days. Acute skin toxicity and QOL were assessed at baseline and 2, 4, 6, and 8 weeks from the start of treatment for a subgroup of patients. QOL was assessed at baseline and 4 weeks posttreatment for all patients. In the acute toxicity substudy, repeated measures modeling was used to investigate treatment by time interactions over the 8-week period for acute toxicity and QOL mean change score. QOL mean change score from baseline to 4 weeks posttreatment was compared for all patients. RESULTS: In the acute toxicity substudy, 161 patients participated. In the main trial, 1152 patients participated. Acute skin toxicity was initially similar between groups but was less with hypofractionation compared with conventional fractionation toward the end of the 8-week period (P < .001). QOL at 6 weeks from the start of treatment was improved with hypofractionation for the skin side effects, breast side effects, fatigue, attractiveness, and convenience domains (all P < .05). In the main trial, hypofractionation resulted in improved overall QOL and QOL attributed to skin side effects, breast side effects, and attractiveness (all P < .01). CONCLUSIONS: Hypofractionated WBI compared with conventional WBI resulted in less acute toxicity and improved QOL. This further supports the benefits of hypofractionation.


Subject(s)
Breast Neoplasms/radiotherapy , Quality of Life , Radiation Dose Hypofractionation , Radiotherapy/adverse effects , Female , Follow-Up Studies , Humans , Middle Aged , Skin/radiation effects , Treatment Outcome
4.
Acad Med ; 94(8): 1197-1203, 2019 08.
Article in English | MEDLINE | ID: mdl-31033603

ABSTRACT

PURPOSE: To examine the magnitudes of score differences across different demographic groups for three academic (grade point average [GPA], old Medical College Admission Test [MCAT], and MCAT 2015) and one nonacademic (situational judgment test [SJT]) screening measures and one nonacademic (multiple mini-interview [MMI]) interview measure (analysis 1), and the demographic implications of including an SJT in the screening stage for the pool of applicants who are invited to interview (analysis 2). METHOD: The authors ran the analyses using data from New York Medical College School of Medicine applicants from the 2015-2016 admissions cycle. For analysis 1, effect sizes (Cohen d) were calculated for GPA, old MCAT, MCAT 2015, CASPer (an online SJT), and MMI. Comparisons were made across gender, race, ethnicity (African American, Hispanic/Latino), and socioeconomic status (SES). For analysis 2, a series of simulations were conducted to estimate the number of underrepresented in medicine (UIM) applicants who would have been invited to interview with different weightings of GPA, MCAT, and CASPer scores. RESULTS: A total of 9,096 applicants were included in analysis 1. Group differences were significantly smaller or reversed for CASPer and MMI compared with the academic assessments (MCAT, GPA) across nearly all demographic variables/indicators. The simulations suggested that a higher weighting of CASPer may help increase gender, racial, and ethnic diversity in the interview pool; results for low-SES applicants were mixed. CONCLUSIONS: The inclusion of an SJT in the admissions process has the potential to widen access to medical education for a number of UIM groups.
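The effect sizes reported here are Cohen d values, i.e., standardized mean differences between two applicant groups. A minimal sketch of the calculation, using invented score vectors rather than any data from the study:

```python
# Pooled-SD Cohen's d; the example arrays are hypothetical, not applicant data.
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two independent groups (pooled SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

group_1 = [504, 512, 498, 507, 515]   # hypothetical MCAT 2015 total scores
group_2 = [500, 496, 509, 502, 493]   # hypothetical MCAT 2015 total scores
print(round(cohens_d(group_1, group_2), 2))
```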


Subject(s)
College Admission Test , Cultural Diversity , School Admission Criteria , Students, Medical/statistics & numerical data , Adult , Female , Humans , Judgment , Male , Schools, Medical
5.
Int J Surg Protoc ; 8: 1-6, 2018.
Article in English | MEDLINE | ID: mdl-31851740

ABSTRACT

INTRODUCTION: The "traditional approach" to resecting synchronous colorectal cancer with liver metastases (CRLM) is to perform staged resections. Many institutions perform simultaneous resection. Disadvantages of the simultaneous approach include longer operating room times, which may increase major postoperative complication rates. Data supporting simultaneous resection are limited to retrospective studies that are subject to selection bias. Therefore, we have proposed a single-arm prospective cohort pilot study to evaluate the postoperative complications following simultaneous resection of synchronous CRLM. METHODS AND ANALYSIS: This single-arm study will be performed in five high-volume hepatobiliary centres to prospectively evaluate the following objectives: (1) to determine the 90-day postoperative complication rate of patients diagnosed with synchronous CRLM undergoing a simultaneous colorectal and liver resection, including major liver resections; (2) to determine the postoperative mortality rate at 90 days following index surgery; (3) to determine the change in global health-related quality of life (QoL) at three months following simultaneous resection compared to baseline; and (4) to build a costing model for simultaneous resection. We will also evaluate the feasibility of performing combined resection in these patients by evaluating the number of eligible patients enrolled in the study and determining the reasons eligible patients were not enrolled. This protocol has been registered with ClinicalTrials.gov (NCT02954913). ETHICS AND DISSEMINATION: This study has been provincially approved by the central research ethics board. Study results will inform the design of a randomized controlled trial by providing information about the comprehensive complication index in this patient population, which will be used to calculate the sample size for the trial.

6.
J Med Imaging Radiat Sci ; 49(3): 293-300, 2018 Sep.
Article in English | MEDLINE | ID: mdl-32074056

ABSTRACT

BACKGROUND: Pain is a common symptom for patients with pancreatic cancer and is often treated using palliative radiation therapy. Standard palliative dose regimes typically consist of 2000 cGy to 3000 cGy in 5 to 10 fractions (fx). With recent advancements in radiation dosimetric planning and delivery, the Juravinski Cancer Centre in Hamilton, Ontario, offers a hypofractionated dose of 2500 cGy in 5 fx for the improvement of pain and tumour control in selected pancreatic cancer patients. This project reviews the safety and efficacy of this prescription. METHODS: A retrospective analysis of 24 patients diagnosed with unresectable pancreatic cancer was conducted. Patient data were collected using in-house medical record systems including MOSAIQ, Meditech, and Centricity. Nonparametric data analysis tests were conducted using Minitab 17. RESULTS: Nineteen of 24 patients (79%) reported a decrease in pain levels following radiation, and 13 of 18 (72%) showed good local control of the tumour on a follow-up CT scan. Around 30% of patients reported nausea and vomiting, and fatigue. Only 13% reported diarrhea and 8% reported constipation. Twenty-one percent reported pain flares. All patients were able to finish the entire treatment without pauses or delays. CONCLUSION: A palliative radiotherapy dose regime of 2500 cGy/5 fx demonstrates a potential for effective control of pain with limited acute toxicities in patients with unresectable pancreatic cancer. These findings indicate the need for further prospective research comparing this regime with other standard treatments to determine which is most beneficial for patients.

7.
Acad Med ; 93(7): 969-971, 2018 07.
Article in English | MEDLINE | ID: mdl-29095171

ABSTRACT

The literature on multiple mini-interviews (MMIs) is replete with heterogeneous study results related to the constructs measured, correlations with other measures, and demographic relationships. Rather than view these results as contradictory, the authors ask, What if all of the results are correct? They point out that the MMI is not an assessment tool but, rather, an assessment method. The design and implementation of locally conducted MMIs in medical school admissions processes should reflect local needs. As with other local assessments, MMIs should be considered separate from nationally conducted assessments that reflect more universal competencies. With the freedom to exercise unique values in locally constructed MMIs, individual institutions, or small bands of like-minded institutions, carry a parallel responsibility to ensure the validity of their local assessment tools.


Subject(s)
School Admission Criteria , Schools, Medical , Freedom
8.
Adv Health Sci Educ Theory Pract ; 22(5): 1321-1322, 2017 12.
Article in English | MEDLINE | ID: mdl-29063308

ABSTRACT

In re-examining the paper "CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores" published in AHSE (22(2), 327-336), we recognized two errors of interpretation.

9.
Adv Health Sci Educ Theory Pract ; 22(2): 327-336, 2017 May.
Article in English | MEDLINE | ID: mdl-27873137

ABSTRACT

Typically, only a minority of applicants to health professional training are invited to interview. However, pre-interview measures of cognitive skills predict national licensure scores (Gauer et al. in Med Educ Online 21 2016), and licensure scores subsequently predict performance in practice (Tamblyn et al. in JAMA 288(23): 3019-3026, 2002; Tamblyn et al. in JAMA 298(9): 993-1001, 2007). Assessment of personal and professional characteristics, with the same psychometric rigour as measures of cognitive abilities, is needed upstream in selection to health professions training programs. To fill that need, Computer-based Assessment for Sampling Personal characteristics (CASPer), an on-line, video-based screening test, was created. In this paper, we examine the correlation between CASPer and Canadian national licensure examination outcomes in 109 doctors who took CASPer at the time of selection to medical school. Specifically, CASPer scores were correlated with performance on the cognitive and 'non-cognitive' subsections of the Medical Council of Canada Qualifying Examination (MCCQE) Part I (taken at the end of medical school) and Part II (taken 18 months into specialty training). Unlike most national licensure exams, the MCCQE has specific subcomponents examining personal/professional qualities, providing a unique opportunity for comparison. The results demonstrated moderate predictive validity of CASPer for national licensure outcomes of personal/professional characteristics three to six years after admission to medical school. Disattenuated correlations of this magnitude (r = 0.3-0.5) are not otherwise achieved by traditional screening measures. These data support the ability of a computer-based strategy to screen applicants with a feasible, reliable test that has now demonstrated predictive validity, lending evidence to support its use in medical school applicant selection.
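The disattenuated correlations mentioned in the results are observed correlations corrected for the unreliability of both measures (Spearman's correction for attenuation). A short sketch; the observed correlation and reliabilities below are placeholders, not values reported in the paper.

```python
# Spearman's correction for attenuation; inputs are illustrative placeholders.
import math

def disattenuated_r(r_xy, rel_x, rel_y):
    """Observed correlation divided by the geometric mean of the two reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

print(round(disattenuated_r(r_xy=0.25, rel_x=0.70, rel_y=0.60), 2))  # about 0.39
```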


Subject(s)
Licensure/statistics & numerical data , School Admission Criteria/statistics & numerical data , Schools, Medical/statistics & numerical data , Schools, Medical/standards , Canada , Cognition , Educational Measurement , Humans , Personality , Predictive Value of Tests
10.
Acad Med ; 90(12): 1651-7, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26488572

ABSTRACT

PURPOSE: To examine whether academic scores, experience scores, and Multiple Mini Interview (MMI) core personal competencies scores vary across applicants' self-reported ethnicities, and whether changes in the weighting of scores would alter the proportion of ethnicities underrepresented in medicine (URIM) in the entering class composition. METHOD: This study analyzed retrospective data from 1,339 applicants to the Rutgers Robert Wood Johnson Medical School interviewed for entering classes 2011-2013. Data analyzed included two academic scores (grade point average [GPA] and Medical College Admission Test [MCAT] scores), service/clinical/research (SCR) scores, and MMI scores. Independent-samples t tests evaluated whether URIM applicants differed from non-URIM applicants on GPA, MCAT, SCR, and MMI scores. A series of "what-if" analyses were conducted to determine whether alternative weighting methods would have changed final admissions decisions and entering class composition. RESULTS: URIM applicants had significantly lower GPAs (P < .001), MCATs (P < .001), and SCR scores (P < .001). However, this pattern was not found with MMI scores (non-URIM 10.4 [1.6], URIM 10.4 [1.3], P = .55). Alternative weighting analyses showed that including academic/experiential scores affects the percentage of URIM acceptances: the URIM acceptance rate declined from 57% (100% MMI) to 43% (10% GPA/10% MCAT/10% SCR/70% MMI), to 39% (30% GPA/70% MMI), and to as low as 22% (50% MCAT/50% MMI). CONCLUSIONS: Sole reliance on the MMI for final admissions decisions, once threshold academic/experiential preparation is met, promotes diversity within the accepted applicant pool; weighting "the numbers" or what is written in the application may decrease the acceptance of URIM applicants.
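The "what-if" analyses described above amount to re-ranking the same interviewed pool under different weightings of the standardized scores and recomputing the URIM share of acceptances. A rough sketch of that mechanism on synthetic data; the scores, the URIM indicator, and the weight vectors are all invented (with purely random data the rates barely move, whereas in the study the group differences on GPA and MCAT drive the decline).

```python
# Synthetic "what-if" weighting simulation; data and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_accept = 1339, 400

# Fake standardized GPA, MCAT, SCR, and MMI scores plus a hypothetical URIM flag.
scores = {k: rng.standard_normal(n_applicants) for k in ("gpa", "mcat", "scr", "mmi")}
urim = rng.random(n_applicants) < 0.25

def urim_acceptance_rate(weights):
    """Share of URIM applicants among the top-ranked composite scores."""
    composite = sum(w * scores[k] for k, w in weights.items())
    accepted = np.argsort(composite)[-n_accept:]
    return urim[accepted].mean()

for weights in ({"mmi": 1.0},
                {"gpa": 0.1, "mcat": 0.1, "scr": 0.1, "mmi": 0.7},
                {"mcat": 0.5, "mmi": 0.5}):
    print(weights, round(float(urim_acceptance_rate(weights)), 2))
```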


Subject(s)
College Admission Test , Cultural Diversity , Interviews as Topic , School Admission Criteria , Cohort Studies , Ethnicity , Female , Humans , Male , New Jersey , Racial Groups , Retrospective Studies , Schools, Medical , Students, Medical/statistics & numerical data , Young Adult
12.
JAMA ; 308(21): 2233-40, 2012 Dec 05.
Article in English | MEDLINE | ID: mdl-23212501

ABSTRACT

CONTEXT: There has been difficulty designing medical school admissions processes that provide valid measurement of candidates' nonacademic qualities. OBJECTIVE: To determine whether students deemed acceptable through a revised admissions protocol using a 12-station multiple mini-interview (MMI) outperform others on the 2 parts of the Canadian national licensing examinations (Medical Council of Canada Qualifying Examination [MCCQE]). The MMI process requires candidates to rotate through brief sequential interviews with structured tasks and independent assessment within each interview. DESIGN, SETTING, AND PARTICIPANTS: Cohort study comparing potential medical students who were interviewed at McMaster University using an MMI in 2004 or 2005 and accepted (whether or not they matriculated at McMaster) with those who were interviewed and rejected but gained entry elsewhere. The computer-based MCCQE part I (aimed at assessing medical knowledge and clinical decision making) can be taken on graduation from medical school; MCCQE part II (involving simulated patient interactions testing various aspects of practice) is based on the objective structured clinical examination and typically completed 16 months into postgraduate training. Interviews were granted to 1071 candidates, and those who gained entry could feasibly complete both parts of their licensure examination between May 2007 and March 2011. Scores could be matched on the examinations for 751 (part I) and 623 (part II) interviewees. INTERVENTION: Admissions decisions were made by combining z score transformations of scores assigned to autobiographical essays, grade point average, and MMI performance. Academic and nonacademic measures contributed equally to the final ranking. MAIN OUTCOME MEASURES: Scores on MCCQE part I (standardized cut-score, 390 [SD, 100]) and part II (standardized mean, 500 [SD, 100]). RESULTS: Candidates accepted by the admissions process had higher scores than those who were rejected for part I (mean total score, 531 [95% CI, 524-537] vs 515 [95% CI, 507-522]; P = .003) and for part II (mean total score, 563 [95% CI, 556-570] vs 544 [95% CI, 534-554]; P = .007). Among the accepted group, those who matriculated at McMaster did not outperform those who matriculated elsewhere for part I (mean total score, 524 [95% CI, 515-533] vs 546 [95% CI, 535-557]; P = .004) and for part II (mean total score, 557 [95% CI, 548-566] vs 582 [95% CI, 569-594]; P = .003). CONCLUSION: Compared with students who were rejected by an admission process that used MMI assessment, students who were accepted scored higher on Canadian national licensing examinations.
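The admissions rule described in the intervention (z-transforming each measure and weighting academic and nonacademic components equally) can be sketched as below. The equal split between essay and MMI within the nonacademic half is an assumption for illustration; the abstract does not give McMaster's exact formula.

```python
# Hedged sketch of an equal-weight z-score composite; not the actual McMaster formula.
import numpy as np

def zscore(x):
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std(ddof=1)

def composite_rank(gpa, essay, mmi):
    """Academic (GPA) and nonacademic (essay, MMI) halves contribute equally."""
    academic = zscore(gpa)
    nonacademic = 0.5 * zscore(essay) + 0.5 * zscore(mmi)  # assumed internal split
    total = 0.5 * academic + 0.5 * nonacademic
    return np.argsort(-total)  # candidate indices, best first

print(composite_rank([3.9, 3.7, 3.8], [70, 85, 60], [5.1, 6.0, 5.4]))
```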


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement , Interviews as Topic , School Admission Criteria , Schools, Medical , Cohort Studies , Humans , Licensure , Ontario
13.
Acad Med ; 87(4): 443-8, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22361795

ABSTRACT

PURPOSE: Traditional medical school admissions assessment tools may be limiting diversity. This study investigates whether the Multiple Mini-Interview (MMI) is diversity-neutral and, if so, whether applying it with greater weight would dilute the anticipated negative impact of diversity-limiting admissions measures. METHOD: Interviewed applicants to six medical schools in 2008 and 2009 underwent MMI. Predictor variables of MMI scores, grade point average (GPA), and Medical College Admission Test (MCAT) scores were correlated with diversity measures of age, gender, size of community of origin, income level, and self-declared aboriginal status. A subset of the data was then combined with variable weight assigned to predictor variables to determine whether weighting during the applicant selection process would affect diversity among chosen applicants. RESULTS: MMI scores were unrelated to gender, size of community of origin, and income level. They correlated positively with age and negatively with aboriginal status. GPA and MCAT correlated negatively with age and aboriginal status, GPA correlated positively with income level, and MCAT correlated positively with size of community of origin. Even extreme combinations of MMI and GPA weightings failed to increase diversity among applicants who would be selected on the basis of weighted criteria. CONCLUSIONS: MMI could not neutralize the diversity-limiting properties of academic scores as selection criteria to interview. Using academic scores in this way causes range restriction, counteracting attempts to enhance diversity using downstream admissions selection measures such as MMI. Diversity efforts should instead be focused upstream. These results lend further support for the development of pipeline programs.


Subject(s)
College Admission Test , Cultural Diversity , Interviews as Topic/methods , School Admission Criteria , Schools, Medical/standards , Students, Medical , Canada , Female , Humans , Male
14.
Acad Med ; 85(10 Suppl): S60-3, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20881706

ABSTRACT

BACKGROUND: The Multiple Mini-Interview (MMI) is useful in selecting undergraduate medical trainees. Postgraduate applicant pools have smaller numbers of more homogeneous candidates who must be actively recruited while being assessed. This paper reports on the MMI's use in assessing residency candidates. METHOD: Canadian and international medical graduates applying to three residency programs (obstetrics-gynecology and pediatrics at McMaster University, and internal medicine at the University of Alberta) underwent the MMI for residency selection (n = 484) in 2008 and 2009. Reliability was determined, and candidates and interviewers completed an exit survey assessing acceptability. RESULTS: Overall reliability of the MMI was acceptable, ranging from 0.55 to 0.72. Using 10 stations would increase reliability to 0.64-0.79. Eighty-eight percent of candidates believed they could accurately portray themselves, while 90% of interviewers believed they could reasonably judge candidates' abilities. CONCLUSIONS: The MMI provides a reliable way to assess residency candidates that is acceptable to both candidates and assessors across a variety of programs.
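The projection from the observed reliabilities (0.55-0.72) to a 10-station MMI is the kind of calculation the Spearman-Brown prophecy formula provides. A sketch; the 7-station baseline below is an assumption for illustration, since the abstract does not state how many stations were used.

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability when test length is multiplied by length_factor."""
    return length_factor * reliability / (1 + (length_factor - 1) * reliability)

# Assumed 7-station baseline with reliability 0.55, lengthened to 10 stations.
print(round(spearman_brown(0.55, 10 / 7), 2))  # roughly 0.64
```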


Subject(s)
College Admission Test , Gynecology/education , Internship and Residency/standards , Interview, Psychological/methods , Obstetrics/education , Pediatrics/education , Adult , Alberta , Decision Making , Education, Medical, Graduate , Female , Foreign Medical Graduates , Humans , Interviews as Topic , Male , Professional Competence , Reproducibility of Results
15.
Adv Health Sci Educ Theory Pract ; 15(3): 415-23, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20013153

ABSTRACT

Most medical schools attempt to select applicants on the basis of cognitive and non-cognitive skills. Typically, interpersonal skills are assessed by interview, though relatively few applicants make it to interview. Thus, an efficient paper-and-pencil test of non-cognitive skills is needed. One possibility is personality tests. Tests of the five-factor model of personality, and in particular the factor of conscientiousness, have proven effective in predicting future job performance. Can such a test serve as a screen for admissions interviews? In particular, the correlation with the multiple mini-interview (MMI) is of interest, since the latter is a well-validated test of non-cognitive skills. A total of 152 applicants to the Michael G. DeGroote School of Medicine at McMaster completed the Neo-5 personality test voluntarily in advance of their admissions interviews. Correlations were calculated between personality factors and grade point average (GPA), Medical College Admission Test (MCAT), and MMI scores. No statistically significant correlation was found between personality factors and cognitive (GPA, MCAT) measures. More surprisingly, no statistically significant correlation was found between personality factors, including conscientiousness, and the MMI. Personality testing is not a useful screening test for the MMI.


Subject(s)
Interview, Psychological/methods , Personality Assessment , Personality Tests , School Admission Criteria , Schools, Medical , Students, Medical/psychology , Analysis of Variance , Educational Measurement , Humans , Psychometrics , Statistics as Topic , Surveys and Questionnaires
16.
Acad Med ; 84(10 Suppl): S9-12, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907396

ABSTRACT

BACKGROUND: Most medical school candidates are excluded without the benefit of noncognitive skills assessment. Is development of a noncognitive preinterview screening test that correlates with the well-validated Multiple Mini-Interview (MMI) possible? METHOD: Study 1: 110 medical school candidates completed the MMI and the Computer-based Multiple Sample Evaluation of Noncognitive Skills (CMSENS): eight 1-minute video-based scenarios and four self-descriptive questions, with a short-answer response format. Seventy-eight responses were audiotaped and 32 typewritten; all were scored by two independent raters. Study 2: 167 candidates completed the CMSENS (eight videos, six self-descriptive questions, typewritten responses only, scored by two raters); 88 of the 167 also underwent the MMI. RESULTS: Overall test generalizability, interrater reliability, and correlation with the MMI were, respectively: 0.86, 0.82, and 0.15 for Study 1 audio responders; 0.72, 0.81, and 0.51 for Study 1 typewritten responders; and 0.83, 0.95, and 0.46 for Study 2 (disattenuated correlation, 0.60). CONCLUSIONS: The strong psychometric properties of the CMSENS, including its correlation with the MMI, warrant investigation into future widespread implementation as a preinterview noncognitive screening test.


Subject(s)
College Admission Test , Computers , Educational Measurement/methods , Interviews as Topic , Schools, Medical , Psychometrics
17.
Med Educ ; 43(8): 767-75, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19659490

ABSTRACT

INTRODUCTION: In this paper we report on further tests of the validity of the multiple mini-interview (MMI) selection process, comparing MMI scores with those achieved on a national high-stakes clinical skills examination. We also continue to explore the stability of candidate performance and the extent to which so-called 'cognitive' and 'non-cognitive' qualities should be deemed independent of one another. METHODS: To examine predictive validity, MMI data were matched with licensing examination data for both undergraduate (n = 34) and postgraduate (n = 22) samples of participants. To assess the stability of candidate performance, reliability coefficients were generated for eight distinct samples. Finally, correlations were calculated between 'cognitive' and 'non-cognitive' measures of ability collected in the admissions procedure, on graduation from medical school and 18 months into postgraduate training. RESULTS: The median reliability of eight administrations of the MMI in various cohorts was 0.73 when 12 10-minute stations were used with one examiner per station. The correlation between performance on the MMI and number of stations passed on an objective structured clinical examination-based licensing examination was r = 0.43 (P < 0.05) in a postgraduate sample and r = 0.35 (P < 0.05) in an undergraduate sample of subjects who sat the MMI 5 years prior to sitting the licensing examination. The correlation between 'cognitive' and 'non-cognitive' assessment instruments increased with time in training (i.e. as the focus of the assessments became more tailored to the clinical practice of medicine). DISCUSSION: Further evidence for the validity of the MMI approach to making admissions decisions has been provided. More generally, the reported findings cast further doubt on the extent to which performance can be captured with trait-based models of ability. Finally, although a complementary predictive relationship has consistently been observed between grade point average and MMI results, the extent to which cognitive and non-cognitive qualities are distinct appears to depend on the scope of practice within which the two classes of qualities are assessed.


Subject(s)
Education, Medical, Undergraduate , Educational Measurement/methods , School Admission Criteria , Adult , Clinical Competence/standards , Cognition , Female , Humans , Male , Ontario , Reproducibility of Results , Statistics as Topic , Students, Medical/psychology , Young Adult
18.
Adv Health Sci Educ Theory Pract ; 14(5): 759-75, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19340597

ABSTRACT

Admissions committees and researchers around the globe have used diligence and imagination to develop and implement various screening measures with the ultimate goal of predicting future clinical and professional performance. What works for predicting future job performance in the human resources world and in most of the academic world may not, however, work for the highly competitive world of medical school applicants. For the job of differentiating within the highly range-restricted pool of medical school aspirants, only the most reliable assessment tools need apply. The tools that have generally shown predictive validity in future performance include academic scores like grade point average, aptitude tests like the Medical College Admissions Test, and non-cognitive testing like the multiple mini-interview. The list of assessment tools that have not robustly met that mark is longer, including personal interview, personal statement, letters of reference, personality testing, emotional intelligence and (so far) situational judgment tests. When seen purely from the standpoint of predictive validity, the trends over time towards success or failure of these measures provide insight into future tool development.


Subject(s)
College Admission Test , School Admission Criteria , Schools, Medical , Humans , Intelligence Tests , Interviews as Topic , Personality Inventory , Predictive Value of Tests , Problem Solving , Writing
19.
Adv Health Sci Educ Theory Pract ; 13(3): 253-61, 2008 Aug.
Article in English | MEDLINE | ID: mdl-17063382

ABSTRACT

A consistent finding from many reviews is that undergraduate Grade Point Average (uGPA) is a key predictor of academic success in medical school. Curiously, while uGPA has established predictive validity, little is known about its reliability. For a variety of reasons, medical schools use different weighting schemas to combine years of study. Additional concerns relate to the equivalence of grades obtained from different fields of study and institutions, with little hard data to guide conclusions. For the Michael G. DeGroote School of Medicine (McMaster University) Class of 2007, every undergraduate grade of 2,138 applicants, along with field of study and post-secondary educational institution, was analyzed. Individual grades were aggregated into an overall uGPA using published algorithms from several medical schools and correlated with a non-weighted sum. Correlations between the different weighting schemas and the non-weighted sum ranged from 0.973 to 0.990. The difference between fields of study was small, accounting for only 1.5% of the variance. However, differences among 16 Ontario universities were larger, accounting for 9.3% of the variance. The results of this study suggest that all weighting schemas are virtually equivalent, making any formulation reasonable. Differences by field of study are small and do not show any bias against non-science students. Differences by institution are larger, amounting to a range in average score from 78.7 to 84.6; however, it is not clear whether this reflects candidate ability or institutional policy, so attempts to correct for institution may be difficult.
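Comparing weighting schemas boils down to aggregating each applicant's year-by-year grades under different weight vectors and correlating the results with a plain unweighted mean. A small sketch on synthetic transcripts; the weight vectors and grade model are invented, and the resulting correlations depend on how strongly the simulated years are correlated, so they will not exactly reproduce the reported 0.973-0.990.

```python
# Synthetic comparison of uGPA weighting schemas; grades and weights are invented.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(80, 5, size=(2138, 1))           # stable per-applicant level
grades = ability + rng.normal(0, 3, size=(2138, 4))   # four years with year-to-year noise

schemas = {
    "equal":      np.array([0.25, 0.25, 0.25, 0.25]),
    "last_two":   np.array([0.0, 0.0, 0.5, 0.5]),      # weight only the final two years
    "increasing": np.array([0.1, 0.2, 0.3, 0.4]),
}

unweighted = grades.mean(axis=1)
for name, w in schemas.items():
    r = np.corrcoef(grades @ w, unweighted)[0, 1]
    print(f"{name:10s} r = {r:.3f}")
```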


Subject(s)
Career Choice , Educational Measurement/methods , Schools, Medical , Algorithms , Educational Measurement/statistics & numerical data , Humans , Ontario , Statistics as Topic
20.
Adv Health Sci Educ Theory Pract ; 13(1): 43-58, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17009095

ABSTRACT

A major expense for most professional training programs, both financially and in terms of human resources, is the interview process used to make admissions decisions. Still, most programs view this as a necessary cost, given that the personal interview provides an opportunity to recruit potential candidates by showing them what the program has to offer, and to gather more information about the candidates to ensure that those selected live up to the espoused values of the institution. We now have five years' worth of experience with a Multiple Mini-Interview (MMI) process that, unlike traditional panel interviews, uses the OSCE model to have candidates interact with a larger number of interviewers. We have found that the MMI is more reliable and has better predictive power than our traditional panel interviews. Still, the extent to which any measurement is valuable also depends on its feasibility of use. In this paper we report on an exploration of the cost-effectiveness of the MMI compared with standard panel-based interviews, considering the generation of interview material, human resource (i.e., interviewer and support staff) use, infrastructure requirements, and other miscellaneous expenses. Our conclusion is that the MMI requires greater preparatory effort and a larger number of rooms than panel-based interviews, but these cost disadvantages are offset by the MMI requiring fewer person-hours of effort. The absolute costs will vary by institution, but the framework presented in this paper will hopefully provide greater guidance regarding logistical requirements and anticipated budget.
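The feasibility comparison sketched in this abstract is essentially an accounting exercise: spread fixed preparation, support, and room costs over the interviewed cohort and add the interviewer time spent on each applicant. A toy version of that framework; every figure below is a placeholder, not a cost reported in the paper.

```python
# Toy interview-cost comparison; all inputs are invented placeholders.
def cost_per_applicant(prep_hours, interviewer_minutes_per_applicant,
                       support_hours, room_hours, n_applicants,
                       hourly_rate=60.0, room_rate=25.0):
    """Crude per-applicant cost: fixed costs spread over the cohort, plus
    interviewer time attributable to each applicant."""
    fixed = (prep_hours + support_hours) * hourly_rate + room_hours * room_rate
    per_applicant = (interviewer_minutes_per_applicant / 60) * hourly_rate
    return fixed / n_applicants + per_applicant

# Placeholders: a 12-station MMI (10 min, one interviewer per station) vs. a
# 3-person, 45-minute panel, each interviewing 400 applicants.
mmi   = cost_per_applicant(prep_hours=200, interviewer_minutes_per_applicant=12 * 10,
                           support_hours=120, room_hours=12 * 40, n_applicants=400)
panel = cost_per_applicant(prep_hours=60, interviewer_minutes_per_applicant=3 * 45,
                           support_hours=80, room_hours=3 * 300, n_applicants=400)
print(f"MMI ~ ${mmi:.0f} per applicant; panel ~ ${panel:.0f} per applicant")
```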


Subject(s)
Interviews as Topic/methods , School Admission Criteria , Schools, Medical , Humans , Interpersonal Relations , Interviews as Topic/standards , Models, Educational , Observer Variation , Reproducibility of Results , Schools, Medical/economics