2.
Can Med Educ J ; 14(4): 123-125, 2023 09.
Article in English | MEDLINE | ID: mdl-37719406

ABSTRACT

Implication Statement: Enacting change in medical education requires effective facilitation processes. Medical education lags behind other fields in systems innovation and in radically disruptive approaches to the challenges we encounter. Design thinking "sprints," widely used in many other settings, offer a way to fill this gap as a facilitation process during periods requiring extensive and/or rapid change. Though resource-intensive, our experience using design thinking sprints in a situation requiring urgent change management, with high-stakes implications for Canadian medical education, demonstrates their utility. More widespread adoption could contribute to innovation across all aspects of education, including curriculum design, policy development, and educational process renewal.


Subject(s)
Education, Medical , Internship and Residency , Canada , Change Management , Curriculum
3.
Can Med Educ J ; 11(1): e35-e45, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32215141

ABSTRACT

BACKGROUND: Medical school admissions committees are seeking alternatives to traditional academic measures when selecting students; one potential measure is emotional intelligence (EI). If EI is to be used as an admissions criterion, it should predict future performance. The purpose of this study was to determine whether EI scores at admission predict performance on a medical licensure examination. METHODS: All medical school applicants to the University of Ottawa in 2006 and 2007 were invited to complete the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT v2.0) after their interview. Students were tracked through medical school into licensure, and EI scores were correlated with their scores on the Medical Council of Canada Qualifying Examination (MCCQE) attempted between 2010 and 2014. RESULTS: The correlation between the MSCEIT and the MCCQE Part I was r(200) = .01, p = .90. The covariates of age and gender accounted for a significant amount of variance in MCCQE Part I scores (R² = .10, p < .001, n = 202), but the addition of the MSCEIT scores was not statistically significant (R² change = .002, p = .56). The correlation between the MSCEIT and the MCCQE Part II was r(197) = .06, p = .41. The covariates of age and gender accounted for some variance in MCCQE Part II scores (R² = .05, p = .007, n = 199), but the addition of the MSCEIT did not (R² change = .002, p = .55). CONCLUSION: The low correlations between EI and licensure scores replicate other studies that have found weak correlations between EI scores and tests administered at admission and during medical school. These results suggest caution in using EI as part of an admissions process.
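The analysis reported here (a bivariate correlation, then covariates entered first and the MSCEIT added second, with the change in R² as the quantity of interest) can be sketched as follows. This is an illustrative outline only, not the study's code: the pandas DataFrame and its columns (msceit, age, numerically coded gender, mccqe1) are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

def ei_licensure_analysis(df: pd.DataFrame, outcome: str = "mccqe1") -> None:
    # Bivariate correlation between EI and the licensure score,
    # reported in the abstract as r(df) with df = n - 2.
    r, p = pearsonr(df["msceit"], df[outcome])
    print(f"r({len(df) - 2}) = {r:.2f}, p = {p:.2f}")

    # Step 1: covariates only (age and numerically coded gender).
    step1 = sm.OLS(df[outcome], sm.add_constant(df[["age", "gender"]])).fit()

    # Step 2: covariates plus the MSCEIT total score.
    step2 = sm.OLS(df[outcome], sm.add_constant(df[["age", "gender", "msceit"]])).fit()

    print(f"R2 (covariates) = {step1.rsquared:.3f}, "
          f"R2 change (adding MSCEIT) = {step2.rsquared - step1.rsquared:.3f}")
```

The MCCQE Part II analysis would follow the same pattern with a different outcome column.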



5.
MedEdPublish (2016) ; 7: 243, 2018.
Article in English | MEDLINE | ID: mdl-38089211

ABSTRACT

Background: The Association of Faculties of Medicine of Canada's Future of Medical Education in Canada report shared a collective vision to improve social accountability, including a review of admissions policies to enhance student diversity. This study explored whether and how the Medical College Admission Test (MCAT) might mediate the socioeconomic diversity of Canadian medical schools by quantifying the costs and other cost-related factors of preparing for the exam. Methods: A 34-question anonymous and bilingual (English and French) online questionnaire was sent to the 2015 first-year cohort of Canadian medical students. Developed collaboratively, the survey content focused on MCAT preparation and completion activities, associated costs, and students' perceptions of MCAT costs. Findings: The survey response rate was 32%. First-year medical students were more likely than the Canadian population to belong to high-income families (63% vs. 36%) and less likely to be from rural locations (4.5% vs. 19%). Use of MCAT preparation materials was reported by nearly every MCAT test-taker (95.3%): of those, 76.4% used free practice tests; 59.8% paid for practice tests; 45.1% registered for preparation courses; and 3.3% hired a private tutor. In terms of writing the MCAT, total economic costs per respondent are estimated at $6,357 ($4,755-$7,958) and total direct costs per respondent at $2,970 ($1,882-$4,058). Opportunity costs represented the majority of economic costs, at $3,387 ($2,872-$3,901), or 53.2%. MCAT preparation costs are estimated at $2,372 ($1,373-$3,372), or 79.9% of total direct costs and 37.3% of economic costs. Most respondents (76%) agreed that the MCAT posed a financial hardship. Conclusion: The financial demands of preparing for and completing the MCAT quantified in this study highlight an admissions requirement that is likely contributing to the current student diversity challenges in Canadian medical schools. In the spirit of social accountability, perhaps it is time to prioritize equitable alternatives for assessing applicants' academic readiness for medical school.
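As a quick arithmetic check (not part of the study), the quoted point estimates reproduce the reported totals and percentages, allowing for rounding of the published figures:

```python
# Point estimates quoted in the abstract (per respondent).
direct_costs = 2970        # total direct costs
opportunity_costs = 3387   # opportunity costs
prep_costs = 2372          # MCAT preparation costs

economic_costs = direct_costs + opportunity_costs
print(economic_costs)                                      # 6357, as reported
print(round(opportunity_costs / economic_costs * 100, 1))  # 53.3 (abstract reports 53.2%)
print(round(prep_costs / direct_costs * 100, 1))           # 79.9% of direct costs
print(round(prep_costs / economic_costs * 100, 1))         # 37.3% of economic costs
```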

6.
Biol Sex Differ ; 7(Suppl 1): 41, 2016.
Article in English | MEDLINE | ID: mdl-27785344

ABSTRACT

Efforts to integrate gender medicine into medical school curricula have focused largely on the work of individual champions. Online sex and gender materials for undergraduate courses have also been developed and disseminated. Success has been sporadic, with varying uptake across schools within and between countries. International trends in medical school accreditation processes and the growing force of the millennial student voice offer untapped opportunities to promote more systematic integration of gender medicine on a national and international level. In this commentary, the president and CEO of the Association of Faculties of Medicine of Canada and the Scientific Director of the Institute of Gender and Health of the Canadian Institutes of Health Research jointly reflect on top-down and bottom-up levers for sustainable innovation in gender medicine for undergraduate medical training.

7.
Acad Med ; 91(11): 1501-1508, 2016 11.
Article in English | MEDLINE | ID: mdl-27384107

ABSTRACT

The report by the Association of Faculties of Medicine of Canada (AFMC) entitled "The Future of Medical Education in Canada: A Collective Vision for MD Education" includes recommendations to enhance admissions processes and increase national collaboration. To achieve these goals, the AFMC conducted a nationwide environmental scan appraising medical schools' readiness for national collaboration and progress toward establishing "made-in-Canada" admissions processes. A critical narrative review of the academic and gray literature was conducted as part of this environmental scan. Four core admissions practice and policy domains were identified: (1) social accountability strategies, (2) standardized admissions testing, (3) interviewing procedures, and (4) application procedures. In this article, the authors summarize and discuss the findings of this narrative review with regard to the four domains. They provide documentation of historical and present-day admissions factors relevant to Canadian medical schools' readiness for nationwide collaboration and a descriptive analysis of the facilitators and barriers to establishing "made-in-Canada" admissions processes. All four domains had facilitators and barriers. One barrier, however, cut across multiple domains: medical schools' pursuit of prestige and its potential to conflict with the goals of the other domains. The authors recommend holding a national forum to debate these issues and to advance the AFMC's goals, a process that will not be straightforward. Yet, national collaboration holds promise for applicants, medical schools, and Canada's diverse population of patients, so efforts toward this end must continue.


Subject(s)
Education, Medical, Undergraduate/organization & administration , School Admission Criteria , Schools, Medical/organization & administration , Canada , Humans , Policy , Social Responsibility
9.
Acad Med ; 90(9): 1258-63, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26177532

ABSTRACT

The Future of Medical Education in Canada Postgraduate (FMEC PG) Project was launched in 2010 by a consortium of four organizations: the Association of Faculties of Medicine of Canada, the Collège des Médecins du Québec, the College of Family Physicians of Canada, and the Royal College of Physicians and Surgeons of Canada. The FMEC PG study set out to review the state of the Canadian postgraduate medical education (PGME) system and make recommendations for improvements and changes. The extensive process included literature reviews, commissioned papers, stakeholder interviews, international consultations, and dialogue with the public and learners. The resulting key findings and 10 recommendations, published in a report in 2012, represent the collective vision of the consortium partner organizations for PGME in Canada. Implementation of the recommendations began in 2013 and will continue beyond 2016. In this article, the authors describe the complex process of developing the recommendations, highlight several recommendations, consider implementation processes and issues, and share lessons learned to date. They reflect on the ways in which the transformation of a very complex and complicated PGME system has required many stakeholders to work together on multiple interventions simultaneously. Notwithstanding the challenges for the participating organizations, changes have been introduced and sustainability is being forged. Throughout this process, the consortium partners and other stakeholders have continued to address the social accountability role of all physicians with respect to the public they serve.


Subject(s)
Clinical Competence , Education, Medical, Graduate/methods , Guidelines as Topic , Health Workforce , Accreditation , Canada , Consensus , Education, Medical, Graduate/organization & administration , Education, Medical, Graduate/standards , Health Services Needs and Demand , Humans
10.
Med Teach ; 37(11): 1032-8, 2015.
Article in English | MEDLINE | ID: mdl-25897708

ABSTRACT

BACKGROUND: Accreditation reviews of medical schools typically occur at fixed intervals and result in a summative judgment about compliance with predefined process and outcome standards. However, reviews that occur only periodically may not be optimal for ensuring prompt identification and remediation of problem areas. AIMS: To identify the factors that affect the ability to implement a continuous quality improvement (CQI) process for the interval review of accreditation standards. METHODS: Case examples from the United States, Canada, the Republic of Korea, and Taiwan were collected and analyzed to determine the strengths and challenges of the CQI processes implemented by a national association of medical schools and several medical school accrediting bodies. The CQI process at a single medical school was also reviewed. RESULTS: A functional CQI process should be focused directly on accreditation standards so as to result in the improvement of educational quality and outcomes, be feasible to implement, avoid duplication of effort, and have both commitment and resource support from the sponsoring entity and the individual medical schools. CONCLUSIONS: CQI can enhance educational program quality and outcomes if the process is designed to collect relevant information and the results are used for program improvement.


Subject(s)
Accreditation/standards , Education, Medical, Undergraduate/standards , Quality Improvement/organization & administration , Humans , Internationality , United States
12.
Acad Med ; 89(4): 638-43, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24556771

ABSTRACT

PURPOSE: Medical school admissions committees are increasingly considering noncognitive measures like emotional intelligence (EI) in evaluating potential applicants. This study explored whether scores on an EI abilities test at admissions predicted future academic performance in medical school, to determine whether EI could be used in making admissions decisions. METHOD: The authors invited all University of Ottawa medical school applicants offered an interview in 2006 and 2007 to complete the Mayer-Salovey-Caruso EI Test (MSCEIT) at the time of their interview (105 and 101, respectively), then again at matriculation (120 and 106, respectively). To determine predictive validity, they correlated MSCEIT scores with scores on written examinations and objective structured clinical examinations (OSCEs) administered during the four-year program. They also correlated MSCEIT scores with the number of nominations for excellence in clinical performance and the number of failures recorded over the four years. RESULTS: The authors found no significant correlations between MSCEIT scores and written examination scores or number of failures. The correlations between MSCEIT scores and total OSCE scores ranged from 0.01 to 0.35; only MSCEIT scores at matriculation and OSCE year 4 scores for the 2007 cohort were significantly correlated. Correlations between MSCEIT scores and clinical nominations were low (range 0.12-0.28); only the correlation between MSCEIT scores at matriculation and the number of clinical nominations for the 2007 cohort was statistically significant. CONCLUSIONS: EI, as measured by an abilities test at admissions, does not appear to reliably predict future academic performance. Future studies should define the role of EI in admissions decisions.


Subject(s)
Clinical Competence , Emotional Intelligence , School Admission Criteria , Schools, Medical , Educational Measurement , Female , Humans , Male , Ontario , Predictive Value of Tests , Young Adult
13.
BMC Med Educ ; 12: 115, 2012 Nov 15.
Article in English | MEDLINE | ID: mdl-23153359

ABSTRACT

BACKGROUND: The transformation of medical students into medical professionals is a core competency required of physicians in the 21st century. Role modeling was traditionally the key method of transmitting this skill. Medical schools are developing curricula that are explicit in ensuring students develop this professional competency and understand the values and attributes of the role. The purpose of this study was to determine student perceptions of professionalism at the University of Ottawa and to gain insights for improving the promotion of professionalism in undergraduate medical education. METHODS: A survey on student perceptions of professionalism in general, of the curriculum and learning environment at the University of Ottawa, and of student behaviors was developed by faculty and students and sent electronically to all University of Ottawa medical students. The survey included both quantitative items, including an adapted Pritzker list, and qualitative responses to eight open-ended questions on professionalism at the Faculty of Medicine, University of Ottawa. All analyses were performed using SAS version 9.1 (SAS Institute Inc., Cary, NC, USA). Chi-square and Fisher's exact tests (for cell counts less than 5) were used to derive p-values for categorical variables by level of student learning. RESULTS: The response rate was 45.6% (255 of 559 students) across all four years of the curriculum; 63% of the responses were from students in years 1 and 2 (preclerkship). Students identified role modeling as the single most important aspect of professionalism. The strongest curricular recommendations included faculty-led case scenario sessions, enhancing interprofessional interactions, and the creation of special awards to staff and students to "celebrate" professionalism. Current evaluation systems were considered least effective. The importance of role modeling and of information on how to report lapses and breaches was highlighted in the answers to the open-ended questions. CONCLUSIONS: Students identify the need for strong positive role models in their learning environment and for effective evaluation of the professionalism of students and teachers. Medical school leaders must facilitate development of these components within MD education and faculty development programs as well as in the clinical milieus where student learning occurs.
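To make the categorical analysis concrete: the study used SAS 9.1, but the logic (chi-square by level of learning, with a Fisher's exact fallback for small cell counts) can be sketched in Python. The 2x2 contingency table below is invented for illustration, not the study's data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Invented example: rows = preclerkship vs. clerkship respondents,
# columns = agree vs. disagree with a survey item.
table = [[80, 20],
         [30, 10]]

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Fall back to Fisher's exact test (2x2 tables only) when the
    # chi-square approximation is doubtful because of small counts.
    _, p = fisher_exact(table)

print(f"p = {p:.3f}")
```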


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Faculty, Medical , Imitative Behavior , Mentors , Physician's Role/psychology , Social Identification , Students, Medical/psychology , Adult , Attitude of Health Personnel , Awareness , Curriculum , Female , Humans , Male , Motivation , Physician-Patient Relations , Surveys and Questionnaires , Young Adult
14.
Acad Med ; 86(10 Suppl): S39-41, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21955766

ABSTRACT

BACKGROUND: As medical school admission committees are giving increased consideration to noncognitive measures, this study sought to determine how emotional intelligence (EI) scores relate to other traditional measures used in the admissions process. METHOD: EI was measured using an ability-based test (Mayer-Salovey-Caruso Emotional Intelligence Test, or MSCEIT) in two consecutive cohorts of medical school applicants (2006 and 2007) qualifying for the admission interview. Pearson correlations between EI scores and traditional measures (i.e., weighted grade point average [wGPA], autobiographical sketch scores, and interview scores) were calculated. RESULTS: Of 659 applicants, 68% participated. MSCEIT scores did not correlate with traditional measures (r = -0.06 to 0.09, P > .05), with the exception of a small correlation with wGPA in the 2007 cohort (r = -0.13, P < .05). CONCLUSIONS: The lack of substantial relationships between EI scores and traditional medical school admission measures suggests that EI evaluates a construct fundamentally different from traits captured in our admission process.


Subject(s)
Emotional Intelligence , Intelligence Tests , School Admission Criteria , Schools, Medical , Female , Humans , Male
15.
Med Educ ; 45(2): 183-91, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21166691

ABSTRACT

OBJECTIVES: To help reduce pressure on faculty staff, medical students have been used as raters in objective structured clinical examinations (OSCEs). There are few studies regarding their ability to complete checklists and global rating scales, and a paucity of data on their ability to provide feedback to junior colleagues. The objectives of this study were: (i) to compare expert faculty examiner (FE) and student-examiner (SE) assessments of students' (candidates') performances on a formative OSCE; (ii) to assess the feedback SEs provided to candidates; and (iii) to seek opinion regarding acceptability from all participants. METHODS: Year 2 medical students (candidates, n = 66) participated in a nine-station formative OSCE. Year 4 students (n = 27) acted as SEs and teaching doctors (n = 27) served as FEs. In each station, SEs and FEs independently scored the candidates using checklists and global rating scales. The SEs provided feedback to candidates after each encounter. The FEs evaluated the feedback provided by SEs using a standardised rating scale (1 = strongly disagree, 5 = strongly agree) across several categories, according to whether the feedback was: balanced; specific; accurate; appropriate; professional; and similar to feedback the FE would have provided. All participants completed questionnaires exploring perceptions and acceptability. RESULTS: There was a high correlation between raters on the checklist items at each station, ranging from 0.56 to 0.86. Correlations on the global rating for each station ranged from 0.23 to 0.78. Faculty examiners rated SE feedback highly, with mean scores ranging from 4.02 to 4.44 across all categories. There was a high degree of acceptability on the part of candidates and examiners. CONCLUSIONS: Student-examiners appear to be a viable alternative to FEs in a formative OSCE in terms of their ability both to complete checklists and to provide feedback.
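The station-level agreement figures quoted above (checklist correlations of 0.56-0.86) suggest per-station correlations between the two raters. The sketch below is only a guess at that computation, assuming a hypothetical long-format table with one row per candidate-station encounter and columns station, se_checklist, and fe_checklist.

```python
import pandas as pd

def per_station_agreement(scores: pd.DataFrame) -> pd.Series:
    """Pearson correlation between student-examiner (SE) and faculty-examiner
    (FE) checklist totals, computed separately for each OSCE station."""
    return scores.groupby("station").apply(
        lambda g: g["se_checklist"].corr(g["fe_checklist"])  # Pearson by default
    )

# Example: per_station_agreement(df).round(2) -> one correlation per station.
```

Agreement on the global ratings would be computed the same way on the corresponding rating columns.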


Subject(s)
Education, Medical, Undergraduate/methods , Educational Measurement/methods , Faculty, Medical , Students, Medical/psychology , Attitude of Health Personnel , Clinical Competence , Feedback, Psychological , Humans , Reproducibility of Results