1.
Perspect Med Educ; 6(5): 347-355, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28516341

ABSTRACT

Competency-based medical education systems allow institutions to individualize teaching practices to meet the needs of diverse learners. Yet, the focus on continuous improvement and individualization of curricula does not exempt programs from treating learners in a fair manner. When learners fail to meet key competencies and are placed on probation or dismissed from training programs, issues of fairness may form the basis of their legal claims. In a literature search, we found no in-depth examination of fairness. In this paper, we utilize a systems lens to examine fairness within postgraduate medical education contexts, focusing on educational opportunities, assessment practices, decision-making processes, fairness from a legal standpoint, and fairness in the context of the learning environment. While we provide examples of fairness issues within US training programs, concerns regarding fairness are relevant in any medical education system that utilizes a competency-based education framework. Assessment oversight committees and annual programmatic evaluations, while recommended, will not guarantee fairness within postgraduate medical education programs, but they can provide a window into 'hidden' threats to fairness, as everything from training experiences to assessment practices may be examined by these committees. One of the first steps programs can take is to recognize that threats to fairness may exist in any educational program, including their own, and to begin conversations about how to address these issues.

2.
Perspect Med Educ; 6(1): 29-35, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27957671

ABSTRACT

INTRODUCTION: Stress and burnout among medical students are well-recognized concerns. A student's ability to employ resilience strategies to self-regulate behaviour is critical to the student's future career as a physician. METHODS: We retrospectively reviewed a sampling of year 1, 2 and 5 portfolio essays focused on the Personal Development competency and performance milestones, written by 49 students from three different classes in a 5-year programme devoted to training physician investigators. Two medical educators used a framework established by Jensen and colleagues (2008) to identify the nature and prevalence of various resilience strategies (valuing the physician role, self-awareness, personal arena, professional arena, professional support and personal support) medical students reported in portfolio essays. RESULTS: All students documented at least one strategy in their essays each year. In all years, the most commonly documented strategies were in the personal arena (95.7% of year 1, 98% of year 2 and 87.8% of year 5 portfolios). The least frequently documented strategy in all years was professional support (42.8% of year 1, 38.8% of year 2, and 28.6% of year 5 portfolios). Year 5 portfolios discussed personal support strategies (79.6%) more frequently than year 1 (53.1%) and year 2 (59.2%) portfolios. DISCUSSION: The results suggest that medical students can identify stressors and articulate resilience strategies that can be employed to address them.

3.
Acad Med; 91(11, Association of American Medical Colleges Learn Serve Lead: Proceedings of the 55th Annual Research in Medical Education Sessions): S44-S52, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27779509

ABSTRACT

PURPOSE: The move toward competency-based education will require medical schools and postgraduate training programs to restructure learning environments to motivate trainees to take personal ownership of learning. This qualitative study explores how medical students select and implement study strategies while enrolled in a unique, nontraditional program that emphasizes reflection on performance and competence rather than relying on high-stakes examinations or grades to motivate students to learn and excel. METHOD: Fourteen first-year medical students volunteered to participate in three 45-minute interviews (42 overall) scheduled three months apart during 2013-2014. Two medical educators used structured interview guides to solicit students' previous assessment experiences, preferred learning strategies, and performance monitoring processes. Interviews were digitally recorded and transcribed verbatim. Participants confirmed the accuracy of the transcripts. Researchers independently read transcripts and met regularly to discuss them and judge when themes achieved saturation. RESULTS: Medical students can adopt an assessment-for-learning mind-set with faculty guidance and implement appropriate study strategies for mastery-learning demands. Though students developed new strategies at different rates during the year, they all eventually identified study and performance monitoring strategies to meet learning needs. Students who had diverse learning experiences in college embraced mastery-based study strategies sooner than peers after recognizing that the learning environment did not reward performance-based strategies. CONCLUSIONS: Medical students can take ownership of their learning and implement specific strategies to regulate behavior when learning environments contain the building blocks emphasized in self-determination theory. Findings should generalize to educational programs seeking strategies to design learning environments that promote self-regulated learning.


Subject(s)
Competency-Based Education/methods; Education, Medical, Undergraduate/methods; Educational Measurement; Students, Medical/psychology; Test Taking Skills/methods; Test Taking Skills/psychology; Humans; Longitudinal Studies; Qualitative Research
4.
Perspect Med Educ; 5(5): 276-84, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27650373

ABSTRACT

INTRODUCTION: Feedback after assessment is essential to support the development of optimal performance, but often fails to reach its potential. Although different assessment cultures have been proposed, the impact of these cultures on students' receptivity to feedback is unclear. This study aimed to explore factors which aid or hinder receptivity to feedback. METHODS: Using a constructivist grounded theory approach, the authors conducted six focus groups in three medical schools, in three separate countries, with different institutional approaches to assessment, ranging from a traditional summative assessment structure to a fully implemented programmatic assessment system. The authors analyzed data iteratively, then identified and clarified key themes. RESULTS: Helpful and counterproductive elements were identified within each school's assessment system. Four principal themes emerged. Receptivity to feedback was enhanced by assessment cultures which promoted students' agency, by the provision of authentic and relevant assessment, and by appropriate scaffolding to aid the interpretation of feedback. Provision of grades and comparative ranking provided a helpful external reference but appeared to hinder the promotion of excellence. CONCLUSIONS: This study has identified important factors emerging from different assessment cultures which, if addressed by programme designers, could enhance the learning potential of feedback following assessments. Students should be enabled to have greater control over assessment and feedback processes, which should be as authentic as possible. Effective long-term mentoring facilitates this process. The trend of curriculum change towards constructivism should now be mirrored in the assessment processes in order to enhance receptivity to feedback.

5.
J Surg Educ; 72(6): e274-9, 2015.
Article in English | MEDLINE | ID: mdl-26123726

ABSTRACT

BACKGROUND: The Accreditation Council for Graduate Medical Education's Milestones Project focuses trainee education on the formation of valued behaviors and skills believed to be necessary for trainees to become independent practitioners. The development and refinement of behaviors and skills outlined within the milestones will require learners to monitor, reflect on, and assess their own performance over time. External feedback provides an opportunity for learners to recalibrate their self-assessments, thereby enabling them to develop better self-monitoring and self-assessment skills. Yet, feedback to trainees is frequently generic, such as "great job," "nice work," or "you need to read more." PURPOSE: In this article, we describe a feedback model that faculty can use to provide specific feedback while increasing accountability for learners. We offer practical examples of its use in a variety of settings in the milestones era. INNOVATION: The Ask-Tell-Ask (ATA) patient communication skills strategy, which was adapted for use as a trainee feedback model 10 years ago at our institution, is a learner-centered approach for reinforcing and modifying behaviors. The model is efficient, promotes learner accountability, and helps trainees develop reflection and self-assessment skills. A feedback agreement further enhances ATA by establishing a shared understanding of goals for the educational encounter. CONCLUSION: The ATA feedback model, combined with a feedback agreement, encourages learners to self-identify strengths and areas for improvement before receiving feedback. Personal monitoring, reflection, self-assessment, and increased accountability make ATA an ideal learner-centered feedback model for the milestones era, which focuses on performance improvement over time. We believe the introduction of the ATA feedback model in surgical training programs is a step towards meaningful programmatic culture change.


Subject(s)
Education, Medical, Graduate/methods; Formative Feedback; General Surgery/education; Models, Educational; Self-Assessment; Teach-Back Communication
7.
J Gen Intern Med; 30(9): 1339-43, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26173525

ABSTRACT

BACKGROUND: Remediation in the era of competency-based assessment demands a model that empowers students to improve performance. AIM: To examine a remediation model in which students, rather than faculty, develop remedial plans to improve performance. SETTING/PARTICIPANTS: Private medical school, 177 medical students. PROGRAM DESCRIPTION: A promotion committee uses student-generated portfolios and faculty referrals to identify struggling students, and has them develop formal remediation plans with personal reflections, improvement strategies, and performance evidence. Students submit reports to document progress until formally released from remediation by the promotion committee. PROGRAM EVALUATION: Participants included 177 students from six classes (2009-2014). Twenty-six were placed in remediation, with most referrals occurring during Years 1 and 2 (n = 20, 76%). Unprofessional behavior represented the most common reason for referral in Years 3-5. Remedial students did not differ from classmates (n = 151) on baseline characteristics (age, gender, US citizenship, MCAT scores) or willingness to recommend their medical school to future students (p > 0.05). Two remedial students did not graduate, and three did not pass USMLE licensure examinations on the first attempt. Most remedial students (92%) generated appropriate plans to address performance deficits. DISCUSSION: Students can successfully design remedial interventions. This learner-driven remediation model promotes greater autonomy and reinforces self-regulated learning.


Subject(s)
Education, Medical, Undergraduate/methods; Internal Medicine/education; Models, Educational; Remedial Teaching; Educational Measurement; Female; Humans; Male; Ohio; Program Evaluation; Schools, Medical; Young Adult
8.
Adv Health Sci Educ Theory Pract; 20(2): 339-54, 2015 May.
Article in English | MEDLINE | ID: mdl-25037264

ABSTRACT

This study used variables proposed in social cognitive career theory (SCCT) to focus the evaluation of a research curriculum at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University (CCLCM). Eight cohorts of CCLCM medical students completed a web-based version of the six-scale Clinical Research Appraisal Inventory-Short Form (CRAI-SF) at matriculation (n = 128) or graduation (n = 111) during 2009-2013. Parametric statistics were used to compare CRAI-SF scales to domains proposed in SCCT: trainees' characteristics (gender, training level, advanced degree), career interests, career intentions (medical specialty), and performance (peer-reviewed publications and required thesis topic). A number of lessons emerged from using theory to frame the evaluation of a complex educational program. Graduates rated their research self-efficacy significantly higher on all six CRAI-SF scales, with large effect sizes (>.90) on five scales (Conceptualizing a Study, Study Design and Analysis, Responsible Research Conduct, Collaborating with Others, and Reporting a Study). Women and men did not have significantly different scores on CRAI-SF scales (p > .05), suggesting that the research program provides adequate supports for women students. Most thesis projects addressed clinical (36.9%, n = 41) or translational (34.2%, n = 38) research topics. The CRAI-SF discriminated between medical school matriculants and graduates, suggesting that research self-efficacy increases with mastery experiences. No significant relationships occurred between CRAI-SF scores and graduates' thesis topics or chosen clinical specialty. Correlations demonstrated significant relationships between graduates' perceptions of research self-efficacy and their interest in clinical research careers.


Subject(s)
Biomedical Research; Career Choice; Self Efficacy; Specialization; Students, Medical/psychology; Achievement; Adult; Cooperative Behavior; Educational Measurement; Female; Humans; Male; Peer Review; Program Evaluation; Sex Factors
9.
J Surg Educ; 71(6): e22-7, 2014.
Article in English | MEDLINE | ID: mdl-24878101

ABSTRACT

BACKGROUND: The Accreditation Council for Graduate Medical Education has offered minimal guidelines for the creation and implementation of clinical competency committees (CCCs). As surgical residency programs may differ greatly in terms of size and structure, requirements that are too specific throughout the process could place some programs at a great disadvantage. OBJECTIVE: The purpose of this article is to address some of the common considerations all surgery residency programs will face. The creation of standard operating procedures for the CCCs will allow each committee to develop internal consistency, improve productivity, maintain efficiency and quality control, facilitate training of new committee members, and cross-train other faculty and residents on the key processes to provide transparency. METHODS: This article offers recommendations on the 3 key areas of CCC implementation: the prereview, resident milestone review, and the postreview processes. Specific components related to shifting culture, committee membership and terms, assessing available evidence, and review dissemination are outlined, and example scenarios are provided throughout the article. CONCLUSION: With the implementation of CCCs and the milestones project, residency programs have an opportunity to improve the overall quality of decision making regarding residents' promotion to the next training level or independent practice. CCCs will undoubtedly be confronted with numerous challenges, as they implement the milestones project and are faced with the need to make multiple changes. Therefore, implementing milestones should be viewed as a goal to be accomplished over the long term.


Subject(s)
Clinical Competence; General Surgery/education; Internship and Residency/organization & administration; Curriculum; Humans; Internship and Residency/standards
12.
Med Teach; 35(8): 655-60, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23641921

ABSTRACT

BACKGROUND: The widely recognized need for students to self-regulate their behavior and learning extends to the multiple dimensions of professionalism. AIM: This study examines the extent to which students self-regulate professionalism behaviors related to work habits and interpersonal skills in a problem-based learning (PBL) setting. METHODS: Formative feedback on work habits and interpersonal skills provided by peers and tutors to a Year 1 cohort (n = 32) over the course of a year-long PBL experience (5 blocks) was examined for comments on targeted areas for improvement (TAFIs) and observed improvements. We examined congruence between PBL feedback and students' self-reported TAFIs and behavioral improvements in their assessment portfolios. RESULTS: Both PBL peer and faculty feedback and portfolio self-assessments targeted Interpersonal Skills TAFIs more frequently than Work Habit-related issues. TAFIs were identified more frequently midway through PBL blocks than at the end. Students reported TAFIs in their portfolio essays, citing feedback from both peers and tutors, and provided evidence of improved performance over time. CONCLUSIONS: Students utilized external formative feedback to document their portfolio self-assessment in a system designed to support self-regulation of PBL professionalism-related behaviors. A decrease in TAFIs identified at the end of PBL blocks suggests students made use of mid-block feedback to self-regulate behaviors.


Subject(s)
Behavior; Education, Medical, Undergraduate/methods; Feedback; Problem-Based Learning; Faculty, Medical; Humans; Interpersonal Relations; Peer Group; Self-Assessment; Students, Medical
13.
Med Teach; 35(7): 560-3, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23641918

ABSTRACT

Beyond its importance in informing high-stakes decisions, the assessment process can also be designed to foster learning. To be effective, this requires developing a program in which curricular experiences, assessment practices and support activities are aligned to provide an educational culture that encourages self-regulated learning. We describe a program (based at the Cleveland Clinic Lerner College of Medicine) in which explicit performance standards align these components and provide a roadmap for students to manage their learning. Information-rich assessment data, structured opportunities for reflection, and facilitated self-assessment using a portfolio approach are designed to support the development of habits of reflective practice. Promotion depends on the achievement of competencies rather than grades. Preliminary evidence suggests that the program directs students towards learning rather than towards achieving a grade for its own sake.


Subject(s)
Clinical Competence/standards; Education, Medical/trends; Educational Measurement/methods; Models, Educational; Achievement; Curriculum; Humans; Learning; Ohio
15.
Med Teach; 34(3): 215-20, 2012.
Article in English | MEDLINE | ID: mdl-22364453

ABSTRACT

BACKGROUND: Decisions about performance in programs of assessment that provide an array of assessment evidence require judgments about the quality of different pieces of assessment data to determine which combination of data points best represents a trainee's overall performance. AIM: In this article, we examine the nature of the evidence selected by first-year medical students for inclusion in a portfolio used to make promotion decisions. METHODS: We reviewed portfolios to examine the number, type, and source of assessments selected by students (n = 32) to document their performance in seven competencies. The quality of assessment data selected for each competency was rated by promotion committee members (n = 14). RESULTS: Findings indicate that students cited multiple types and sources of available assessments. The promotion committee rated evidence quality highest for competencies where the program provided sufficient evidence for students to cite a broad range of assessments. When assessments were not provided by the program, students cited self-generated evidence. CONCLUSION: We found that when student-constructed portfolios are part of an overall assessment system, students generally select evidence in proportion to the number and types of assessments available.


Subject(s)
Education, Medical/organization & administration; Educational Measurement/methods; Professional Competence/standards; Students, Medical/psychology; Documentation/methods; Documentation/standards; Education, Medical/standards; Educational Measurement/standards; Humans; Self-Assessment
16.
Med Teach; 34(3): 221-5, 2012.
Article in English | MEDLINE | ID: mdl-22364454

ABSTRACT

Despite considerable evidence recognizing the importance of learners' perceptions of the assessment process, there is little literature depicting the participants' experience. We aim to capture these perceptions in order to gain insights into the strengths and weaknesses of a competency-based assessment system. Cleveland Clinic Lerner College of Medicine has implemented a learner-centered portfolio assessment system built around competency standards and continuous formative feedback. Promotion of students is based upon their feedback-supported portfolio essays, but feedback itself is individualized and formative in nature under the umbrella of the competencies. Importantly, there are no grades or ranking awarded for the competencies or at promotion. Four students share personal reflections of their experience to illuminate themes from the subjective experience of the learner and to understand how to align the learners' interests with the requirements of an assessment program.


Subject(s)
Clinical Competence/standards; Competency-Based Education/organization & administration; Education, Medical/standards; Educational Measurement/methods; Students, Medical/psychology; Competency-Based Education/standards; Education, Medical/methods; Educational Measurement/standards; Humans; Program Evaluation; Self-Assessment
17.
Acad Med; 86(6): 773-7, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21512368

ABSTRACT

PURPOSE: Measurement experts use four criteria to examine the fairness of tests: (1) equitable treatment for examinees, (2) equal outcomes for subgroups, (3) absence of bias, and (4) equal opportunity to learn. These criteria apply to portfolios just as they do to other assessments. This report examines the fairness of portfolio-based promotion decisions for medical students at the Cleveland Clinic Lerner College of Medicine. METHOD: Participants were 182 first-year medical students (97 men, 85 women) from six class cohorts (2004-2009). Chi-square statistics with Yates continuity correction were used to compare overall promotion decisions to students' gender, self-reports of language fluency, and MCAT Writing Sample score. The Cramér V statistic served as an effect size index. Post hoc power analyses identified the minimum sample size to obtain acceptable power. RESULTS: Approximately 85% of students were promoted to Year 2 of the program. Gender, U.S. citizenship, language fluency, and MCAT Writing Sample score were not significantly related to overall promotion decisions. Effect sizes were small (≤0.15) for all contingency tables, suggesting weak associations between overall promotion decisions and students' group characteristics. CONCLUSIONS: Examining fairness, although challenging, is essential to maintain professional standards and avoid potential liability. Preliminary evidence in this study suggests that students' background characteristics and verbal abilities were not strongly related to portfolio-based promotion decisions. Schools should monitor processes that may affect fairness. This study reports on just one aspect of fairness. More research is needed to evaluate other dimensions of fairness.
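For readers who want to reproduce the kind of fairness check described above, the sketch below illustrates a chi-square test with Yates continuity correction and a Cramér V effect size on a 2 x 2 promotion-by-gender table. It is an illustration only: the counts are hypothetical, not data from the study, and this is not the authors' analysis code.

```python
# Illustrative sketch only: chi-square with Yates continuity correction and
# Cramer's V for a 2x2 promotion-decision table. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: men, women; columns: promoted, not promoted (hypothetical counts)
observed = np.array([[82, 15],
                     [73, 12]])

# correction=True applies the Yates continuity correction to the 2x2 table
chi2, p_value, dof, expected = chi2_contingency(observed, correction=True)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1))) serves as the effect size index
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, Cramer's V = {cramers_v:.3f}")
```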


Subject(s)
Civil Rights; Education, Medical, Undergraduate; Educational Measurement; Social Justice; Adult; Emigrants and Immigrants; Female; Humans; Male; Multilingualism; Ohio; Sex Factors; Verbal Behavior
19.
Teach Learn Med; 21(4): 344-50, 2009 Oct.
Article in English | MEDLINE | ID: mdl-20183362

ABSTRACT

BACKGROUND: Educators need approaches to assess medical students' abilities to apply and integrate concepts essential to medical practice. DESCRIPTION: We used a multimethod approach to examine the quality of essay questions intended to elicit medical students' ability to apply and integrate their understanding of medical concepts. EVALUATION: Three educators assigned essay questions (n = 120) to one of four levels of cognition. Kappa was computed before and after discussion. Faculty (n = 46) critiqued essay quality using a checklist (97% response), and students completed a questionnaire about the learning environment (91% response). CONCLUSIONS: We identified effective approaches to evaluate the quality of essay questions and to train faculty to write essay questions of sufficient complexity. This systematic review of essay questions also encouraged review of the curriculum to determine if core concepts were being taught. It is feasible to have faculty write and critique essay questions targeted at higher levels of cognition.
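As an illustration of the agreement statistic mentioned above, the sketch below computes an average pairwise Cohen's kappa for raters assigning essay questions to four cognitive levels. The ratings are hypothetical, and because the study used three educators the published analysis may have relied on a multi-rater variant (e.g., Fleiss' kappa) rather than the averaged pairwise approach shown here.

```python
# Illustrative sketch only: average pairwise Cohen's kappa for three raters
# classifying essay questions into one of four cognitive levels.
# Ratings are hypothetical, not data from the study above.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {
    "rater_1": [1, 2, 2, 3, 4, 1, 3, 2, 4, 1],
    "rater_2": [1, 2, 3, 3, 4, 1, 3, 2, 4, 2],
    "rater_3": [1, 1, 2, 3, 4, 1, 3, 2, 4, 1],
}

# Kappa for each pair of raters, then the mean across pairs
kappas = [cohen_kappa_score(ratings[a], ratings[b])
          for a, b in combinations(ratings, 2)]
print(f"mean pairwise kappa = {sum(kappas) / len(kappas):.3f}")
```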


Subject(s)
Clinical Medicine/education; Education, Medical, Undergraduate/methods; Educational Measurement; Science/education; Writing; Analysis of Variance; Chi-Square Distribution; Curriculum; Humans; Surveys and Questionnaires
20.
Med Teach; 30(7): e171-7, 2008.
Article in English | MEDLINE | ID: mdl-18777415

ABSTRACT

BACKGROUND: The Cleveland Clinic Lerner College of Medicine was designed to encourage medical students to pursue careers as physician investigators. Our faculty decided that assessment should enhance learning and adopted only formative assessments to document student performance in relation to nine broad-based competencies. No grades are used to judge student performance throughout the 5-year program. Instead, assessments are competency-based, relate directly to performance standards, and are stored in e-Portfolios to track progress and document student achievement. The class size is limited to 32 students a year. AIMS: Schools with competency-based curricula must provide students with formative feedback to identify performance gaps and monitor progress. We describe a systematic approach to assessing medical knowledge using essay-type questions (CAPPs) and multiple-choice questions (SAQs) to provide medical students with weekly formative feedback about their abilities to acquire, apply and integrate basic and clinical science concepts. METHOD: Processes for developing performance standards, creating assessment items, training faculty, reporting student performance and monitoring outcomes are described. A case study of a Year 1 course is presented, with specific examples of CAPPs and SAQs to illustrate how formative assessment data are interpreted and reported in students' e-Portfolios. RESULTS: Preliminary evidence suggests that CAPPs and SAQs have a positive impact on students' education, a justifiable cost in light of the benefits obtained, and growing acceptance among stakeholders. Two student cohorts performed significantly above the population mean on USMLE Step 1, which suggests that these assessment methods have not disadvantaged students. More evidence is needed to assess the reliability and validity of these tools for formative purposes. CONCLUSIONS: Using assessment data for formative purposes may encourage application and integration of knowledge, help students identify performance gaps, foster student development of learning plans and promote student responsibility for learning. The discussion provides applications for institutions with larger classes to consider.


Subject(s)
Competency-Based Education; Educational Measurement/methods; Students, Medical; Clinical Competence/standards; Humans; Ohio