Results 1 - 20 of 45
1.
Med Teach ; 44(5): 510-518, 2022 05.
Article in English | MEDLINE | ID: mdl-34807793

ABSTRACT

INTRODUCTION: Competency-based medical education (CBME) provides a framework for describing learner progression throughout training. However, specific approaches to CBME implementation vary widely across educational settings. Alignment between various methods used across the continuum is critical to support transitions and assess learner performance. The purpose of this study was to investigate alignment between CBME frameworks used in undergraduate medical education (UME) and graduate medical education (GME) settings using the US context as a model. METHOD: The authors analyzed content from the core entrustable professional activities for entering residency (Core EPAs; UME model) and residency milestones (GME model). From that analysis, they performed a series of cross-walk activities to investigate alignment between frameworks. After independent review, authors discussed findings until consensus was reached. RESULTS: Some alignment was found for activities associated with history taking, physical examination, differential diagnosis, patient safety, and interprofessional care; however, there were far more examples of misalignment. CONCLUSIONS: These findings highlight challenges creating alignment of assessment frameworks across the continuum of training. The importance of these findings includes implications for assessment and persistence of the educational gap across UME and GME. The authors provide four next steps to improve upon the continuum of education.


Subject(s)
Education, Medical, Undergraduate , Internship and Residency , Clinical Competence , Competency-Based Education/methods , Education, Medical, Graduate , Education, Medical, Undergraduate/methods , Educational Measurement/methods , Humans , Schools, Medical
6.
Med Teach ; 39(6): 588-593, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28598747

ABSTRACT

Medical education is under increasing pressure to more effectively prepare physicians to meet the needs of patients and populations. With its emphasis on individual, programmatic, and institutional outcomes, competency-based medical education (CBME) has the potential to realign medical education with this societal expectation. Implementing CBME, however, comes with significant challenges. This manuscript describes four overarching challenges that must be confronted by medical educators worldwide in the implementation of CBME: (1) the need to align all regulatory stakeholders in order to facilitate the optimization of training programs and learning environments so that they support competency-based progression; (2) the purposeful integration of efforts to redesign both medical education and the delivery of clinical care; (3) the need to establish expected outcomes for individuals, programs, training institutions, and health care systems so that performance can be measured; and (4) the need to establish a culture of mutual accountability for the achievement of these defined outcomes. In overcoming these challenges, medical educators, leaders, and policy-makers will need to seek collaborative approaches to common problems and to learn from innovators who have already successfully made the transition to CBME.


Subject(s)
Competency-Based Education , Curriculum , Education, Medical/methods , Faculty, Medical , Models, Educational , Cooperative Behavior , Education, Medical/organization & administration , Education, Medical, Undergraduate , Humans , Learning , Physicians
7.
Acad Med ; 92(3): 394-402, 2017 03.
Article in English | MEDLINE | ID: mdl-27465231

ABSTRACT

PURPOSE: Faculty development for clinical faculty who assess trainees is necessary to improve assessment quality and important for competency-based education. Little is known about what faculty plan to do differently after training. This study explored the changes faculty intended to make after workplace-based assessment rater training, their ability to implement change, predictors of change, and barriers encountered. METHOD: In 2012, 45 outpatient internal medicine faculty preceptors (who supervised residents) from 26 institutions participated in rater training. They completed a commitment to change form listing up to five commitments and ranked (on a 1-5 scale) their motivation for and anticipated difficulty implementing each change. Three months later, participants were interviewed about their ability to implement change and barriers encountered. The authors used logistic regression to examine predictors of change. RESULTS: Of 191 total commitments, the most common commitments focused on what faculty would change about their own teaching (57%) and increasing direct observation (31%). Of the 183 commitments for which follow-up data were available, 39% were fully implemented, 40% were partially implemented, and 20% were not implemented. Lack of time/competing priorities was the most commonly cited barrier. Higher initial motivation (odds ratio [OR] 2.02; 95% confidence interval [CI] 1.14, 3.57) predicted change. As anticipated difficulty increased, implementation became less likely (OR 0.67; 95% CI 0.49, 0.93). CONCLUSIONS: While higher baseline motivation predicted change, multiple system-level barriers undermined ability to implement change. Rater-training faculty development programs should address how faculty motivation and organizational barriers interact and influence ability to change.


Subject(s)
Competency-Based Education/organization & administration , Faculty/psychology , Internal Medicine/education , Internship and Residency/organization & administration , Preceptorship/organization & administration , Students, Medical/psychology , Workplace/organization & administration , Adult , Female , Humans , Male , Middle Aged , Organizational Innovation , Organizational Objectives , United States
8.
JAMA ; 316(21): 2253-2262, 2016 Dec 06.
Article in English | MEDLINE | ID: mdl-27923089

ABSTRACT

Importance: US internal medicine residency programs are now required to rate residents using milestones. Evidence of validity of milestone ratings is needed. Objective: To compare ratings of internal medicine residents using the pre-2015 resident annual evaluation summary (RAES), a nondevelopmental rating scale, with developmental milestone ratings. Design, Setting, and Participants: Cross-sectional study of US internal medicine residency programs in the 2013-2014 academic year, including 21 284 internal medicine residents (7048 postgraduate-year 1 [PGY-1], 7233 PGY-2, and 7003 PGY-3). Exposures: Program director ratings on the RAES and milestone ratings. Main Outcomes and Measures: Correlations of RAES and milestone ratings by training year; correlations of medical knowledge ratings with American Board of Internal Medicine (ABIM) certification examination scores; rating of unprofessional behavior using the 2 systems. Results: Corresponding RAES ratings and milestone ratings showed progressively higher correlations across training years, ranging among competencies from 0.31 (95% CI, 0.29 to 0.33) to 0.35 (95% CI, 0.33 to 0.37) for PGY-1 residents to 0.43 (95% CI, 0.41 to 0.45) to 0.52 (95% CI, 0.50 to 0.54) for PGY-3 residents (all P values <.05). Linear regression showed ratings differed more between PGY-1 and PGY-3 years using milestone ratings than the RAES (all P values <.001). Of the 6260 residents who attempted the certification examination, the 618 who failed had lower ratings using both systems for medical knowledge than did those who passed (RAES difference, -0.9; 95% CI, -1.0 to -0.8; P < .001; milestone medical knowledge 1 difference, -0.3; 95% CI, -0.3 to -0.3; P < .001; and medical knowledge 2 difference, -0.2; 95% CI, -0.3 to -0.2; P < .001). Of the 26 PGY-3 residents with milestone ratings indicating deficiencies on either of the 2 medical knowledge subcompetencies, 12 failed the certification examination. Correlation of RAES ratings for professionalism with residents' lowest professionalism milestone ratings was 0.44 (95% CI, 0.43 to 0.45; P < .001). Conclusions and Relevance: Among US internal medicine residents in the 2013-2014 academic year, milestone-based ratings correlated with RAES ratings but with a greater difference across training years. Both rating systems for medical knowledge correlated with ABIM certification examination scores. Milestone ratings may better detect problems with professionalism. These preliminary findings may inform establishment of the validity of milestone-based assessment.


Subject(s)
Certification/standards , Clinical Competence/statistics & numerical data , Internal Medicine/education , Internship and Residency/statistics & numerical data , Adult , Educational Measurement , Female , Humans , Male , Professional Misconduct , Specialty Boards , United States
9.
J Grad Med Educ ; 8(2): 156-64, 2016 May.
Article in English | MEDLINE | ID: mdl-27168881

ABSTRACT

Background The expectation for graduate medical education programs to ensure that trainees are progressing toward competence for unsupervised practice prompted requirements for a committee to make decisions regarding residents' progress, termed a clinical competency committee (CCC). The literature on the composition of these committees and how they share information and render decisions can inform the work of CCCs by highlighting vulnerabilities and best practices. Objective We conducted a narrative review of the literature on group decision making that can help characterize the work of CCCs, including how they are populated and how they use information. Methods English language studies of group decision making in medical education, psychology, and organizational behavior were used. Results The results highlighted 2 major themes. Group member composition showcased the value placed on the complementarity of members' experience and lessons they had learned about performance review through their teaching and committee work. Group processes revealed strengths and limitations in groups' understanding of their work, leader role, and information-sharing procedures. Time pressure was a threat to the quality of group work. Conclusions Implications of the findings include the risks for committees that arise with homogeneous membership, limitations to available resident performance information, and processes that arise through experience rather than deriving from a well-articulated purpose of their work. Recommendations are presented to maximize the effectiveness of CCC processes, including their membership and access to, and interpretation of, information to yield evidence-based, well-reasoned judgments.


Subject(s)
Decision Making , Internship and Residency/organization & administration , Clinical Competence , Education, Medical, Graduate/organization & administration , Group Processes , Humans
10.
Ann Intern Med ; 165(5): 356-62, 2016 09 06.
Article in English | MEDLINE | ID: mdl-27159244

ABSTRACT

BACKGROUND: High-quality assessment of resident performance is needed to guide individual residents' development and ensure their preparedness to provide patient care. To facilitate this aim, reporting milestones are now required across all internal medicine (IM) residency programs. OBJECTIVE: To describe initial milestone ratings for the population of IM residents by IM residency programs. DESIGN: Cross-sectional study. SETTING: IM residency programs. PARTICIPANTS: All IM residents whose residency program directors submitted milestone data at the end of the 2013-2014 academic year. MEASUREMENTS: Ratings addressed 6 competencies and 22 subcompetencies. A rating of "not assessable" indicated insufficient information to evaluate the given subcompetency. Descriptive statistics were calculated to describe ratings across competencies and training years. RESULTS: Data were available for all 21 774 U.S. IM residents from all 383 programs. Overall, 2889 residents (1621 in postgraduate year 1 [PGY-1], 902 in PGY-2, and 366 in PGY-3) had at least 1 subcompetency rated as not assessable. Summaries of average ratings by competency and training year showed higher ratings for PGY-3 residents in all competencies. Overall ratings for each of the 6 individual competencies showed that fewer than 1% of third-year residents were rated as "unsatisfactory" or "conditional on improvement." However, when subcompetency milestone ratings were used, 861 residents (12.8%) who successfully completed training had at least 1 competency with all corresponding subcompetencies graded below the threshold of "readiness for unsupervised practice." LIMITATION: Data were derived from a point in time in the first reporting period in which milestones were used. CONCLUSION: The initial milestone-based evaluations of IM residents nationally suggest that documenting developmental progression of competency is possible over training years. Subcompetencies may identify areas in which residents might benefit from additional feedback and experience. Future work is needed to explore how milestones are used to support residents' development and enhance residency curricula. PRIMARY FUNDING SOURCE: None.


Subject(s)
Clinical Competence , Competency-Based Education , Educational Measurement , Internal Medicine/education , Internship and Residency/standards , Cross-Sectional Studies , Humans , United States
11.
Acad Med ; 91(2): 191-8, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26630606

ABSTRACT

The decision to trust a medical trainee with the critical responsibility to care for a patient is fundamental to clinical training. When carefully and deliberately made, such decisions can serve as significant stimuli for learning and also shape the assessment of trainees. Holding back entrustment decisions too much may hamper the trainee's development toward unsupervised practice. When carelessly made, however, they jeopardize patient safety. Entrustment decision-making processes, therefore, deserve careful analysis. Members (including the authors) of the International Competency-Based Medical Education Collaborative conducted a content analysis of the entrustment decision-making process in health care training during a two-day summit in September 2013 and subsequently reviewed the pertinent literature to arrive at a description of the critical features of this process, which informs this article. The authors discuss theoretical backgrounds and terminology of trust and entrustment in the clinical workplace. The competency-based movement and the introduction of entrustable professional activities force educators to rethink the grounds for assessment in the workplace. Anticipating a decision to grant autonomy at a designated level of supervision appears to align better with health care practice than do most current assessment practices. The authors distinguish different modes of trust and entrustment decisions and elaborate five categories, each with related factors, that determine when decisions to trust trainees are made: the trainee, supervisor, situation, task, and the relationship between trainee and supervisor. The authors' aim in this article is to lay a theoretical foundation for a new approach to workplace training and assessment.


Subject(s)
Clinical Competence , Competency-Based Education/methods , Decision Making , Education, Medical, Graduate/methods , Internship and Residency/methods , Interprofessional Relations , Humans
13.
Perspect Med Educ ; 4(4): 165-167, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26183245
15.
Med Educ ; 49(7): 692-708, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26077217

ABSTRACT

CONTEXT: Direct observation of clinical skills is a common approach in workplace-based assessment (WBA). Despite widespread use of the mini-clinical evaluation exercise (mini-CEX), faculty development efforts are typically required to improve assessment quality. Little consensus exists regarding the most effective training methods, and few studies explore faculty members' reactions to rater training. OBJECTIVES: This study was conducted to qualitatively explore the experiences of faculty staff with two rater training approaches - performance dimension training (PDT) and a modified approach to frame of reference training (FoRT) - to elucidate how such faculty development can be optimally designed. METHODS: In a qualitative study of a multifaceted intervention using complex intervention principles, 45 out-patient resident faculty preceptors from 26 US internal medicine residency programmes participated in a rater training faculty development programme. All participants were interviewed individually and in focus groups during and after the programme to elicit how the training influenced their approach to assessment. A constructivist grounded theory approach was used to analyse the data. RESULTS: Many participants perceived that rater training positively influenced their approach to direct observation and feedback, their ability to use entrustment as the standard for assessment, and their own clinical skills. However, barriers to implementation and change included: (i) a preference for holistic assessment over frameworks; (ii) challenges in defining competence; (iii) difficulty in changing one's approach to assessment, and (iv) concerns about institutional culture and buy-in. CONCLUSIONS: Rater training using PDT and a modified approach to FoRT can provide faculty staff with assessment skills that are congruent with principles of criterion-referenced assessment and entrustment, and foundational principles of competency-based education, while providing them with opportunities to reflect on their own clinical skills. However, multiple challenges to incorporating new forms of training exist. Ongoing efforts to improve WBA are needed to address institutional and cultural contexts, and systems of care delivery.


Subject(s)
Clinical Competence/standards , Faculty, Medical/standards , Internal Medicine/education , Workplace , Adult , Competency-Based Education/standards , Education, Medical, Graduate/methods , Educational Measurement/methods , Educational Measurement/standards , Feedback , Female , Grounded Theory , Humans , Internal Medicine/standards , Male , Middle Aged , Qualitative Research , United States
16.
Acad Med ; 90(8): 1084-92, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25901876

ABSTRACT

PURPOSE: Clinical competency committees (CCCs) are now required in graduate medical education. This study examined how residency programs understand and operationalize this mandate for resident performance review. METHOD: In 2013, the investigators conducted semistructured interviews with 34 residency program directors at five public institutions in California, asking about each institution's CCCs and resident performance review processes. They used conventional content analysis to identify major themes from the verbatim interview transcripts. RESULTS: The purpose of resident performance review at all institutions was oriented toward one of two paradigms: a problem identification model, which predominated; or a developmental model. The problem identification model, which focused on identifying and addressing performance concerns, used performance data such as red-flag alerts and informal information shared with program directors to identify struggling residents. In the developmental model, the timely acquisition and synthesis of data to inform each resident's developmental trajectory was challenging. Participants highly valued CCC members' expertise as educators to corroborate the identification of struggling residents and to enhance credibility of the committee's outcomes. Training in applying the milestones to the CCC's work was minimal. Participants were highly committed to performance review and perceived the current process as adequate for struggling residents but potentially not for others. CONCLUSIONS: Institutions orient resident performance review toward problem identification; a developmental approach is uncommon. Clarifying the purpose of resident performance review and employing efficient information systems that synthesize performance data and engage residents and faculty in purposeful feedback discussions could enable the meaningful implementation of milestones-based assessment.


Subject(s)
Clinical Competence , Employee Performance Appraisal , Internship and Residency , Peer Review, Health Care , Adult , Aged , California , Committee Membership , Education, Medical, Graduate , Educational Measurement , Female , Humans , Interviews as Topic , Male , Middle Aged , Qualitative Research
17.
Acad Med ; 90(8): 1054-60, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25830535

ABSTRACT

PROBLEM: The scope and scale of developments in health care redesign have not been sufficiently adopted in primary care residency programs. APPROACH: The interdisciplinary Primary Care Faculty Development Initiative was created to teach faculty how to accelerate revisions in primary care residency training. The program focused on skill development in teamwork, change management, leadership, population management, clinical microsystems, and competency assessment. The 2013 pilot program involved 36 family medicine, internal medicine, and pediatric faculty members from 12 residencies in four locations. OUTCOMES: The percentage of participants rating intention to implement what was learned as "very likely to" or "absolutely will" was 16/32 (50%) for leadership, 24/33 (72.7%) for change management, 23/33 (69.7%) for systems thinking, 25/32 (75.8%) for population management, 28/33 (84.9%) for teamwork, 29/33 (87.8%) for competency assessment, and 30/31 (96.7%) for patient centeredness. Content analysis revealed five key themes: leadership skills are key drivers of change, but program faculty face big challenges in changing culture and engaging stakeholders; access to data from electronic health records for population management is a universal challenge; readiness to change varies among the three disciplines and among residencies within each discipline; focusing on patients and their needs galvanizes collaborative efforts across disciplines and within residencies; and collaboration among disciplines to develop and use shared measures of residency programs and learner outcomes can guide and inspire program changes and urgently needed educational research. NEXT STEPS: Revise and reevaluate this rapidly evolving program toward widespread engagement with family medicine, internal medicine, and pediatric residencies.


Subject(s)
Education, Medical, Graduate/trends , Faculty, Medical , Family Practice/education , Internal Medicine/education , Pediatrics/education , Access to Information , Cooperative Behavior , Curriculum , Diffusion of Innovation , Female , Humans , Internship and Residency , Leadership , Male , Organizational Culture , Patient-Centered Care , Primary Health Care , Program Development , Program Evaluation
20.
Acad Med ; 89(5): 721-7, 2014 May.
Article in English | MEDLINE | ID: mdl-24667513

ABSTRACT

The public is calling for the U.S. health care and medical education system to be accountable for ensuring high-quality, safe, effective, patient-centered care. As medical education shifts to a competency-based training paradigm, clinician educators' assessment of and feedback to trainees about their developing clinical skills becomes paramount. However, there is substantial variability in the accuracy, reliability, and validity of the assessments faculty make when they directly observe trainees with patients. These difficulties have been treated primarily as a rater cognition problem focusing on the inability of the assessor to make reliable and valid assessments of the trainee. The authors' purpose is to reconceptualize the rater cognition problem as both an educational and clinical care problem. The variable quality of faculty assessments is not just a psychometric predicament but also an issue that has implications for decisions regarding trainee supervision and the delivery of quality patient care. The authors suggest that the frame of reference for rating performance during workplace-based assessments be the ability to provide safe, effective, patient-centered care. The authors developed the Accountable Assessment for Quality Care and Supervision equation to remind faculty that supervision is a dynamic, complex process essential for patients to receive high-quality care. This fundamental shift in how assessment is conceptualized requires new models of faculty development and emphasizes the essential and irreplaceable importance of the clinician educator in trainee assessment.


Subject(s)
Clinical Competence , Clinical Medicine/organization & administration , Education, Medical/organization & administration , Educational Measurement , Quality Assurance, Health Care , Delivery of Health Care/organization & administration , Faculty, Medical/organization & administration , Female , Humans , Male , Observer Variation , Patient-Centered Care/organization & administration , United States