Results 1 - 15 of 15
1.
Can Med Educ J ; 13(4): 49-52, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36091731

ABSTRACT

Performance on medical licensing examinations has previously been shown to predict performance in practice. However, licensing examinations are closed-book, whereas real-world medical practice increasingly requires doctors and patients to consult resources to make evidence-informed decisions. To best assess the ability of physicians and physicians-in-practice to avail themselves of point-of-care clinical resources and tools, open-book components may have an emerging role in high-stakes examinations.



2.
Acad Med ; 96(1): 118-125, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32496286

ABSTRACT

PURPOSE: Educational handover (i.e., providing information about learners' past performance) is controversial. Proponents argue handover could help tailor learning opportunities. Opponents fear it could bias subsequent assessments and lead to self-fulfilling prophecies. This study examined whether raters provided with reports describing learners' minor weaknesses would generate different assessment scores or narrative comments than those who did not receive such reports. METHOD: In this 2018 mixed-methods, randomized, controlled, experimental study, clinical supervisors from 5 postgraduate (residency) programs were randomized into 3 groups receiving no educational handover (control), educational handover describing weaknesses in medical expertise, and educational handover describing weaknesses in communication. All participants watched the same videos of 2 simulated resident-patient encounters and assessed performance using a shortened mini-clinical evaluation exercise form. The authors compared mean scores, percentages of negative comments, comments focusing on medical expertise, and comments focusing on communication across experimental groups using analyses of variance. They examined potential moderating effects of supervisor experience, gender, and mindsets (fixed vs growth). RESULTS: Seventy-two supervisors participated. There was no effect of handover report on assessment scores (F(2, 69) = 0.31, P = .74) or percentage of negative comments (F(2, 60) = 0.33, P = .72). Participants who received a report indicating weaknesses in communication generated a higher percentage of comments on communication than the control group (63% vs 50%, P = .03). Participants who received a report indicating weaknesses in medical expertise generated a similar percentage of comments on expertise compared to the controls (46% vs 47%, P = .98). CONCLUSIONS: This study provides initial empirical data about the effects of educational handover and suggests it can, in some circumstances, lead to more targeted feedback without influencing scores. Further studies are required to examine the influence of reports for a variety of performance levels, areas of weakness, and learners.
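
The group comparisons above are one-way ANOVAs across the three rater groups. As a rough sketch of that analysis in Python (illustrative only: the scores below are invented placeholders, not the study's data, and the authors' statistical software is not specified):

    # One-way ANOVA comparing mean assessment scores across three rater groups.
    # Scores are invented values on a mini-CEX-style scale, for illustration only.
    from scipy import stats

    control = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2]        # no handover report
    expertise = [4.9, 5.2, 5.0, 4.7, 5.1, 4.8]      # report flagging medical expertise
    communication = [5.0, 4.9, 5.2, 5.1, 4.8, 5.3]  # report flagging communication

    f_stat, p_value = stats.f_oneway(control, expertise, communication)
    print(f"F = {f_stat:.2f}, p = {p_value:.2f}")   # a large p means no detectable score difference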


Subject(s)
Clinical Competence/standards; Education, Medical, Graduate/standards; Educational Measurement/standards; Internship and Residency/standards; Adult; Clinical Competence/statistics & numerical data; Education, Medical, Graduate/statistics & numerical data; Educational Measurement/statistics & numerical data; Female; Humans; Internship and Residency/statistics & numerical data; Male; Young Adult
3.
Acad Med ; 96(2): 271-277, 2021 02 01.
Article in English | MEDLINE | ID: mdl-32769474

ABSTRACT

PURPOSE: Written examinations such as multiple-choice question (MCQ) exams are a key assessment strategy in health professions education (HPE), frequently used to provide feedback, to determine competency, or for licensure decisions. However, traditional psychometric approaches for monitoring the quality of written exams (i.e., ensuring that items are discriminating and contribute to the overall reliability and validity of the exam scores) usually warrant larger samples than are typically available in HPE contexts. The authors conducted a descriptive exploratory study to document how undergraduate medical education (UME) programs ensure the quality of their written exams, particularly MCQs. METHOD: Using a qualitative descriptive methodology, the authors conducted semistructured interviews with 16 key informants from 10 Canadian UME programs in 2018. Interviews were transcribed, anonymized, coded by the primary investigator, and co-coded by a second team member. Data collection and analysis were conducted iteratively. Research team members engaged in analysis across phases, and consensus was reached on the interpretation of findings via group discussion. RESULTS: Participants focused their answers around MCQ-related practices, reporting the use of several indicators of quality, such as alignment between items and course objectives and psychometric properties (difficulty and discrimination). The authors clustered findings around 5 main themes: processes for creating MCQ exams, processes for building quality MCQ exams, processes for monitoring the quality of MCQ exams, motivation to build quality MCQ exams, and suggestions for improving processes. CONCLUSIONS: Participants reported engaging multiple strategies to ensure the quality of MCQ exams. Assessment quality considerations were integrated throughout the development and validation phases, reflecting recent work regarding validity as a social imperative.


Subject(s)
Education, Medical, Undergraduate/methods; Educational Measurement/methods; Health Occupations/education; Licensure/ethics; Canada/epidemiology; Clinical Competence/statistics & numerical data; Data Collection/methods; Evaluation Studies as Topic; Feedback; Female; Humans; Interviews as Topic; Licensure/statistics & numerical data; Male; Psychometrics; Reproducibility of Results; Students, Medical/statistics & numerical data; Writing
5.
Perspect Med Educ ; 9(5): 294-301, 2020 10.
Article in English | MEDLINE | ID: mdl-32809189

ABSTRACT

INTRODUCTION: Current medical education models increasingly rely on longitudinal assessments to document learner progress over time. This longitudinal focus has rekindled discussion regarding learner handover, in which assessments are shared across supervisors, rotations, and educational phases to support learner growth and ease transitions. The authors explored clinical supervisors' opinions of, experiences with, and recommendations for successful implementation of learner handover. METHODS: Clinical supervisors from five postgraduate medical education programs at one institution completed an online questionnaire exploring their views regarding learner handover, specifically its potential benefits, risks, and suggestions for implementation. Survey items included open-ended and numerical responses. The authors used an inductive content analysis approach to analyze the open-ended questionnaire responses, and descriptive and correlational analyses for the numerical data. RESULTS: Seventy-two participants completed the questionnaire. Their perspectives varied widely. Suggested benefits of learner handover included tailored learning, improved assessments, and enhanced patient safety. The main reported risk was the potential for learner handover to bias supervisors' perceptions of learners, thereby affecting the validity of future assessments and influencing the learner's educational opportunities and well-being. Participants' suggestions for implementation focused on who should be involved, when and for whom it should occur, and the content that should be shared. DISCUSSION: The diverse opinions of, and recommendations for, learner handover highlight the need to design handover so that it maximizes learning potential while minimizing potential harms. Supervisors' suggestions for handover implementation reveal tensions between assessment-of-learning and assessment-for-learning.


Subject(s)
Education, Medical, Graduate/standards; Faculty, Medical/psychology; Adult; Curriculum/trends; Education, Medical, Graduate/methods; Education, Medical, Graduate/statistics & numerical data; Faculty, Medical/statistics & numerical data; Female; Humans; Learning; Male; Middle Aged; Qualitative Research; Surveys and Questionnaires
6.
Acad Med ; 95(9S A Snapshot of Medical Student Education in the United States and Canada: Reports From 145 Schools): S592-S595, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33626776
7.
Perspect Med Educ ; 9(1): 66-70, 2020 02.
Article in English | MEDLINE | ID: mdl-31848999

ABSTRACT

INTRODUCTION: In-training assessment reports (ITARs) summarize assessment during a clinical placement to inform decision-making and provide formal feedback to learners. Faculty development is an effective but resource-intensive means of improving the quality of completed ITARs. We examined whether the quality of completed ITARs could be improved by 'nudges' built into the format of the ITAR forms. METHODS: Our first intervention consisted of placing the section for narrative comments at the beginning of the form and using prompts for recommendations (Do more, Keep doing, Do less, Stop doing). In a second intervention, we provided a hyperlink to a detailed assessment rubric and shortened the checklist section. We analyzed a sample of 360 de-identified completed ITARs from six disciplines across the three academic years in which the different versions of the ITAR were used. Two raters independently scored the ITARs using the Completed Clinical Evaluation Report Rating (CCERR) scale. We tested for differences between versions of the ITAR forms using a one-way ANOVA for the total CCERR score, and a MANOVA for the nine CCERR item scores. RESULTS: Changes to the form structure (nudges) improved the quality of the information generated, as measured by the CCERR instrument, from a total score of 18.0/45 (SD 2.6) to 18.9/45 (SD 3.1) and 18.8/45 (SD 2.6), p = 0.04. Specifically, comments were more balanced, more detailed, and more actionable compared with the original ITAR. DISCUSSION: Nudge interventions, which are inexpensive and feasible, should be included in multipronged approaches to improve the quality of assessment reports.


Subject(s)
Educational Measurement/standards; Nurse Administrators/psychology; Analysis of Variance; Education, Nursing, Baccalaureate/methods; Educational Measurement/statistics & numerical data; Humans; Nurse Administrators/standards; Nurse Administrators/statistics & numerical data; Preceptorship/methods; Preceptorship/standards; Preceptorship/trends
8.
Acad Med ; 94(10): 1443-1447, 2019 10.
Article in English | MEDLINE | ID: mdl-31045600

ABSTRACT

Historically, students have been "consumers" of undergraduate medical education (UME) rather than stakeholders in its design and implementation. Student input has been retrospective, and although UME leaders have been open to feedback, matters most important to students have often been overlooked, leaving students feeling largely unheard. Student representation has also lacked structure and unity of feedback. A vision for effective student representation drove the creation of a partnered educational governance (PEG) model at McGill University in Montreal, Quebec, Canada, where sharing of expertise between student representatives and UME leadership has improved the UME program and the educational experience of students. The PEG model is grounded in the literature on student government, the student-as-partner framework, and theories of accountability. As part of the model, the student Medical Education Committee, an organized structure for discussion and reporting to student constituents, was established. This structure allows student representatives, entrusted by their peers and faculty, to proactively provide input to UME committees in the development of policies and curricula. The partnership between students and faculty facilitates a shared understanding of educational challenges and potential solutions. Within the first year, meaningful changes associated with the PEG model included increased student engagement in key program decisions, such as the redesign of a research course and an update to the absences and leaves policy. The PEG model enables unified student representation that is accountable and representative, and has a significant impact on outcomes, while maintaining the UME program's ownership of and responsibility for the curriculum and policies.


Subject(s)
Education, Medical, Undergraduate/organization & administration; Stakeholder Participation; Students, Medical; Cooperative Behavior; Humans; Models, Organizational
9.
Med Educ ; 52(12): 1259-1270, 2018 12.
Article in English | MEDLINE | ID: mdl-30430619

ABSTRACT

CONTEXT: Competency-based medical education has spurred the implementation of longitudinal workplace-based assessment (WBA) programmes to track learners' development of competencies. These hinge on the appropriate use of assessment instruments by assessors. This study aimed to validate our assessment programme and specifically to explore whether assessors' beliefs and behaviours rendered the detection of progress possible. METHODS: We implemented a longitudinal WBA programme in the third year of a primarily rotation-based clerkship. The programme used the professionalism mini-evaluation exercise (P-MEX) to detect progress in generic competencies. We used mixed methods: a retrospective psychometric examination of student assessment data in one academic year, and a prospective focus group and interview study of assessors' beliefs and reported behaviours related to the assessment. RESULTS: We analysed 1662 assessment forms for 186 students. We conducted interviews and focus groups with 21 assessors from different professions and disciplines. Scores were excellent from the outset (3.5-3.7/4), with no meaningful increase across blocks (average overall scores: 3.6 in block 1 versus 3.7 in blocks 2 and 3; F = 8.310, d.f. 2, p < 0.001). The main source of variance was the forms (47%) and only 1% of variance was attributable to students, which led to low generalisability across forms (Eρ² = 0.18). Assessors reported using multiple observations to produce their assessments and were reluctant to harm students by consigning anything negative to writing. They justified the use of a consistent benchmark across time by citing the basic nature of the form or a belief that the 'competencies' assessed were in fact fixed attributes that were unlikely to change. CONCLUSIONS: Assessors may purposefully deviate from instructions in order to meet their ethical standards of good assessment. Furthermore, generic competencies may be viewed as intrinsic and fixed rather than as learnable. Implementing a longitudinal WBA programme is complex and requires careful consideration of assessors' beliefs and values.
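
The low generalisability reported above follows directly from the variance decomposition: when only about 1% of score variance is attributable to students, averaging over roughly nine forms per student cannot produce a dependable ranking. A minimal sketch of the calculation, using rounded components inferred from the abstract rather than the authors' exact G-study model:

    # Relative generalisability (G) coefficient for a person-by-form design:
    # person variance over person variance plus error variance averaged over forms.
    var_person = 0.01   # ~1% of variance attributable to students (from the abstract)
    var_error = 0.52    # assumed residual share after forms (~47%) and persons (~1%)
    n_forms = 9         # ~1662 forms / 186 students

    e_rho2 = var_person / (var_person + var_error / n_forms)
    print(round(e_rho2, 2))  # ~0.15 with these rounded inputs; the study reports 0.18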


Subject(s)
Clinical Competence/standards; Competency-Based Education; Educational Measurement/methods; Clinical Clerkship; Education, Medical; Focus Groups; Humans; Interviews as Topic; Longitudinal Studies
10.
J Surg Educ ; 74(6): 1135-1141, 2017.
Article in English | MEDLINE | ID: mdl-28688969

ABSTRACT

Simulation allows for learner-centered health professions training by providing a safe environment to practice and make mistakes without jeopardizing patient care. It was with this goal in mind that the McGill Medical Simulation Center was officially opened on September 14, 2006, as a partnership between McGill University's Faculty of Medicine and its affiliated hospitals. Its mandate is to provide state-of-the-art facilities to support simulation-based medical and allied health education initiatives. Since its inception, the center, recently renamed the Steinberg Center for Simulation and Interactive Learning (SCSIL), has undergone a major expansion and logged more than 130,000 learner visits. Educational activities are offered at all levels of medical and allied health care training, and include standardized patient encounters, partial task trainers, multidisciplinary courses, and high-fidelity trainers, among many others. In addition to its educational mandate, the center also supports an active research program, programs to enhance collaboration with disciplines outside of health care to spur innovation, and community outreach initiatives.


Subject(s)
Education, Medical/organization & administration; Simulation Training/organization & administration; Total Quality Management; Universities/organization & administration; Academic Medical Centers/organization & administration; Curriculum; Female; Humans; Internship and Residency/statistics & numerical data; Male; Program Development; Program Evaluation; Quebec; Students, Medical/statistics & numerical data
11.
Perspect Med Educ ; 6(1): 21-28, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28050882

ABSTRACT

INTRODUCTION: Multiple-choice questions (MCQs) are a cornerstone of assessment in medical education. Monitoring item properties (difficulty and discrimination) is an important means of investigating examination quality. However, most item property guidelines were developed for use on large cohorts of examinees; little empirical work has investigated the suitability of applying guidelines to item difficulty and discrimination coefficients estimated for small cohorts, such as those in medical education. We investigated the extent to which item properties vary across multiple clerkship cohorts to better understand the appropriateness of using such guidelines with small cohorts. METHODS: Exam results for 32 items from an MCQ exam were used. Item discrimination and difficulty coefficients were calculated for 22 cohorts (n = 10-15 students). Discrimination coefficients were categorized according to Ebel and Frisbie (1991). Difficulty coefficients were categorized according to three guidelines by Laveault and Grégoire (2014). Descriptive analyses examined variance in item properties across cohorts. RESULTS: A large amount of variance in item properties was found across cohorts. Discrimination coefficients for items varied greatly across cohorts, with 29/32 (91%) of items occurring in both Ebel and Frisbie's 'poor' and 'excellent' categories and 19/32 (59%) of items occurring in all five categories. For item difficulty coefficients, the application of different guidelines resulted in large variations in examination length (number of items removed ranged from 0 to 22). DISCUSSION: While the psychometric properties of items can provide information on item and exam quality, they vary greatly in small cohorts. The application of guidelines with small exam cohorts should be approached with caution.
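
For readers unfamiliar with the two item properties at issue, a minimal sketch of how they are typically computed, using an invented response matrix (the categorization cut-offs from Ebel and Frisbie or Laveault and Grégoire are not reproduced here):

    # Item difficulty (proportion correct) and discrimination (corrected
    # item-total point-biserial correlation) for a small invented cohort.
    import numpy as np

    # rows = students, columns = items; 1 = correct, 0 = incorrect
    responses = np.array([
        [1, 0, 1, 1],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
        [1, 1, 1, 0],
        [1, 0, 0, 1],
    ])

    difficulty = responses.mean(axis=0)  # per-item proportion correct

    totals = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(responses.shape[1])
    ])
    print("difficulty:", difficulty)
    print("discrimination:", np.round(discrimination, 2))

With cohorts of 10-15 students, as in the study, these coefficients rest on very few observations, which is why they swing so widely from cohort to cohort.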

12.
Med Educ ; 50(9): 912-21, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27562891

ABSTRACT

CONTEXT: Over the past few decades, longitudinal integrated clerkships (LICs) have been proposed to address many perceived shortcomings of traditional block clerkships. This growing interest in LICs has raised broader questions regarding the role of integration, continuity and longitudinality in medical education. A study with complementary theoretical and empirical dimensions was conducted to derive a more precise way of defining these three underlying concepts within the design of medical education curricula. METHODS: The theoretical dimension involved a thematic review of the literature on integration, continuity and longitudinality in medical education. The empirical dimension surveyed all 17 Canadian medical schools on how they have operationalised integration, continuity and longitudinality in their undergraduate programmes. The two dimensions were iteratively synthesised to explore the meaning and expression of integration, continuity and longitudinality in medical education curriculum design. RESULTS: Integration, continuity and longitudinality were expressed in many ways and forms, including: integration of clinical disciplines, combined horizontal integration and vertical integration, and programme-level integration. Types of continuity included: continuity of patients, continuity of teaching, continuity of location and peer continuity. Longitudinality focused on connected or repeating episodes of training or on connecting activities, such as encounter logging across educational episodes. Twelve of the 17 schools were running an LIC of some kind, although only one school had a mandatory LIC experience. An ordinal scale of uses of integration, continuity and longitudinality during clerkships was developed, and new definitions of these concepts in the clerkship context were generated. CONCLUSIONS: Different clerkship designs embodied different forms and levels of integration, continuity and longitudinality. A dichotomous view of LICs and rotation-based clerkships was found not to represent current practices in Canada, which instead tended to fall along a continuum of integration, continuity and longitudinality.


Subject(s)
Clinical Clerkship/methods; Clinical Competence; Continuity of Patient Care; Curriculum; Learning; Canada; Education, Medical, Undergraduate; Humans; Mentors; Models, Educational; Students, Medical; Surveys and Questionnaires
13.
Med Educ ; 50(9): 922-32, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27562892

ABSTRACT

CONTEXT: Longitudinal integrated clerkships (LICs) represent a model of the structural redesign of clinical education that is growing in the USA, Canada, Australia and South Africa. By contrast with time-limited traditional block rotations, medical students in LICs provide comprehensive care of patients and populations in continuing learning relationships over time and across disciplines and venues. The evidence base for LICs reveals transformational professional and workforce outcomes derived from a number of small institution-specific studies. OBJECTIVES: This study is the first from an international collaborative formed to study the processes and outcomes of LICs across multiple institutions in different countries. It aims to establish a baseline reference typology to inform further research in this field. METHODS: Data on all LIC and LIC-like programmes known to the members of the international Consortium of Longitudinal Integrated Clerkships were collected using a survey tool developed through a Delphi process and subsequently analysed. Data were collected from 54 programmes, 44 medical schools, seven countries and over 15 000 student-years of LIC-like curricula. RESULTS: Wide variation in programme length, student numbers, health care settings and principal supervision was found. Three distinct typological programme clusters were identified and named according to programme length and discipline coverage: Comprehensive LICs; Blended LICs, and LIC-like Amalgamative Clerkships. Two major approaches emerged in terms of the sizes of communities and types of clinical supervision. These referred to programmes based in smaller communities with mainly family physicians or general practitioners as clinical supervisors, and those in more urban settings in which subspecialists were more prevalent. CONCLUSIONS: Three distinct LIC clusters are classified. These provide a foundational reference point for future studies on the processes and outcomes of LICs. The study also exemplifies a collaborative approach to medical education research that focuses on typology rather than on individual programme or context.


Subject(s)
Clinical Clerkship/organization & administration; Clinical Competence; Continuity of Patient Care/trends; Education, Medical, Undergraduate/organization & administration; Australia; Clinical Clerkship/standards; Clinical Clerkship/statistics & numerical data; Continuity of Patient Care/organization & administration; Curriculum; Delphi Technique; Humans; Internationality; Learning; North America; South Africa; Students, Medical
14.
J Grad Med Educ ; 7(1): 48-52, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26217422

ABSTRACT

BACKGROUND: Many countries have reduced resident duty hours in an effort to promote patient safety and enhance resident quality of life. There are concerns that reducing duty hours may impact residents' learning opportunities. OBJECTIVES: We (1) evaluated residents' perceptions of their current learning opportunities in a context of reduced duty hours, and (2) explored the perceived change in resident learning opportunities after call length was reduced from 24 continuous hours to 16 hours. METHODS: We conducted an anonymous, cross-sectional online survey of 240 first-, second-, and third-year residents rotating through 3 McGill University-affiliated intensive care units (ICUs) in Montreal, Quebec, Canada, between July 1, 2012, and June 30, 2013. The survey investigated residents' perceptions of learning opportunities in both the 24-hour and 16-hour systems. RESULTS: Of 240 residents, 168 (70%) completed the survey. Of these residents, 63 (38%) had been exposed to both 24-hour and 16-hour call schedules. The majority of respondents (83%) reported that didactic teaching sessions held by ICU staff physicians were useful. However, of the residents trained in both approaches to overnight call, 44% reported a reduction in learner attendance at didactic teaching sessions, 48% reported a reduction in attendance at midday hospital rounds, and 40% reported a perceived reduction in self-directed reading after the implementation of the new call schedule. CONCLUSIONS: A substantial proportion of residents perceived a reduction in attendance at instructor-directed teaching activities and in self-directed reading after the implementation of a 16-hour call schedule in the ICU.


Subject(s)
Attitude of Health Personnel; Intensive Care Units; Internal Medicine/education; Internship and Residency; Workload; Cross-Sectional Studies; Female; Humans; Male; Patient Safety; Personnel Staffing and Scheduling; Quality of Life; Quebec; Surveys and Questionnaires; Work Schedule Tolerance
15.
Med Teach ; 35(12): 989-95, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23883396

ABSTRACT

Longitudinal integrated clerkships (LICs) involve learners spending an extended time in a clinical setting (or a variety of interlinked clinical settings) where their clinical learning opportunities are interwoven through continuities of patient contact and care, continuities of assessment and supervision, and continuities of clinical and cultural learning. Our twelve tips are grounded in the lived experiences of designing, implementing, maintaining, and evaluating LICs, and in the extant literature on LICs. We consider: general issues (anticipated benefits and challenges associated with starting and running an LIC); logistical issues (how long each longitudinal experience should last, where it will take place, the number of learners who can be accommodated); and integration issues (how the LIC interfaces with the rest of the program, and the need for evaluation that aligns with the dynamics of the LIC model). Although this paper is primarily aimed at those who are considering setting up an LIC in their own institutions or who are already running an LIC, we also offer our recommendations as a reflection on the broader dynamics of medical education and on the priorities and issues we all face in designing and running educational programs.


Subject(s)
Clinical Clerkship/organization & administration; Education, Medical, Undergraduate/organization & administration; Models, Educational; Clinical Competence; Educational Measurement; Humans