Results 1 - 20 of 115
1.
Pediatr Dent ; 46(2): 121-134, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38664905

ABSTRACT

Purpose: To gather feedback on pediatric dentistry entrustable professional activities (EPAs) from pediatric dentistry residency program directors (PDs). Methods: An electronic survey invited PDs to evaluate 16 previously developed EPAs on whether they were critical to patient safety, resident education, or both. PDs were also asked to evaluate a fully developed EPA to assess its structure and clarity and to describe barriers to EPA implementation. Descriptive statistics were calculated. Results: Forty-one of 103 PDs completed the entire survey. Eighty-five percent (36 of 42) of PDs believed EPAs are critical to pediatric dentistry education, and 81 percent (34 of 42) believed EPAs are critical to patient safety. Eighty-one percent of PDs would likely use EPAs when available. Seventy-five percent (31 of 41) of PDs reported that they have had a resident who would have benefited from a longer duration of training. Conclusions: The majority of pediatric dentistry residency program director participants surveyed reported that entrustable professional activities are critical to patient safety and resident education. EPAs may be a valuable option for assessing residents' readiness for graduation.


Subject(s)
Attitude of Health Personnel; Internship and Residency; Pediatric Dentistry; Pediatric Dentistry/education; Humans; Surveys and Questionnaires; Clinical Competence; Patient Safety
2.
Acad Pediatr ; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38631477

ABSTRACT

OBJECTIVES: To compare level of supervision (LOS) ratings of graduating pediatric residents with their assessments as fellows for the five Entrustable Professional Activities (EPAs) common to general pediatrics and the subspecialties, and to determine whether the difference between ratings from residency to fellowship is smaller for the quality improvement (QI) and Practice Management EPAs, since the skills needed to perform these may be less context-dependent. METHODS: We compared ratings of graduating residents with their assessments as fellows using LOS data from two sequential EPA studies. RESULTS: There were 65 ratings from 41 residents at the first fellow assessment. At graduation, most residents needed little to no supervision for all EPAs, with 94% (61 of 65) of ratings at level four or five. In contrast, only 8% (5 of 65) of the first fellow assessments were at level four or five. The ratings difference for the QI and Practice Management EPAs was similar to that for the other EPAs. CONCLUSIONS: LOS ratings for the EPAs common to generalists and subspecialists reset as residents become fellows. There was no evidence that the QI and Practice Management EPAs are less context-dependent. This study provides additional validity evidence for using these LOS scales to assess trainees in pediatric residency and fellowship.

4.
Med Educ ; 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38238042

ABSTRACT

INTRODUCTION: Health professions education (HPE) has adopted the conceptualization of validity as an argument. However, the theoretical and practical aspects of how validity arguments should be developed, used and evaluated in HPE have not been deeply explored. Articulating the argumentation theory undergirding validity and validation can help HPE better operationalise validity as an argument. To better understand this, the authors explored how HPE validity scholars conceptualise assessment validity arguments and argumentation, seeking to understand potential consequences of these views on validation practices. METHODS: The authors used critical case sampling to identify HPE assessment validity experts in three ways: (1) participation in a prominent validity research group, (2) appearing in a bibliometric study of HPE validity publications and (3) authorship of recent HPE validity literature. Qualitative semi-structured interviews were conducted with 16 experts in HPE assessment validity from four different countries. The authors used reflexive thematic analysis to develop themes relevant to their research question. RESULTS: The authors developed three themes grounded in participants' responses: (1) In theory, HPE validity is a social and situated argument. (2) In practice, the absence of audience and evaluation stymies the social nature of HPE validity. (3) Lack of validity argumentation creates and maintains power differentials within HPE. Participants articulated that current HPE validation practices are rooted in post-positivist epistemology when they should be situated (i.e. context-dependent), audience-centric and inclusive. DISCUSSION: When discussing validity argumentation in theory, participants' descriptions reflect an interpretivist lens for evaluation that is misaligned with real-world validity practices. This misalignment likely arises from HPE's adoption of "validity as an argument" as a slogan, without integrating theoretical and practical principles of argumentation theory.

5.
Perspect Med Educ ; 13(1): 12-23, 2024.
Article in English | MEDLINE | ID: mdl-38274558

ABSTRACT

Assessment in medical education has evolved through a sequence of eras, each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, where over time multiple types and sources of data are collected and combined by competency committees to ensure individual learners are ready to progress to the next stage in their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders, including learners and frontline assessors. To meaningfully move forward, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust and discuss consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework to build trust at multiple levels in a future assessment system, one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.


Subject(s)
Curriculum; Education, Medical; Humans; Competency-Based Education; Workplace; Trust
6.
Acad Med ; 99(1): 28-34, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37643579

ABSTRACT

Competency-based medical education (CBME) depends on effective programs of assessment to achieve the desired outcomes and goals of training. Residency programs must be able to defend clinical competency committee (CCC) group decisions about learner readiness for practice, including decisions about time-variable resident promotion and graduation. In this article, the authors describe why CCC group decision-making processes should be supported by theory and review 3 theories they used in designing their group processes: social decision scheme theory, functional theory, and wisdom of crowds. They describe how these theories were applied in a competency-based, time-variable training pilot, Transitioning in Internal Medicine Education Leveraging Entrustment Scores Synthesis (TIMELESS), at the University of Cincinnati internal medicine residency program in 2020-2022 to increase the defensibility of their CCC group decision-making. This work serves as an example of how use of theory can bolster validity arguments supporting group decisions about resident readiness for practice.


Subject(s)
Education, Medical, Graduate; Internship and Residency; Humans; Clinical Competence; Decision Making; Dissent and Disputes; Competency-Based Education
7.
Med Teach ; 46(1): 140-146, 2024 01.
Article in English | MEDLINE | ID: mdl-37463405

ABSTRACT

High-value care is what patients deserve and what healthcare professionals should deliver. However, it is not what happens much of the time. Quality improvement expert Dr. Don Berwick argued more than two decades ago that American healthcare needs an escape fire, which is a new way of seeing and acting in a crisis situation. While coined in the U.S. context, the analogy applies in other Western healthcare contexts as well. Therefore, in this paper, the authors revisit Berwick's analogy, arguing that medical education can, and should, provide the spark for such an escape fire across the globe. They assert that medical education can achieve this by fully embracing competency-based medical education (CBME) as a way to place medicine's focus on the patient. CBME targets training outcomes that prepare graduates to optimize patient care. The authors use the escape fire analogy to argue that medical educators must drop long-held approaches and tools; treat CBME implementation as an adaptive challenge rather than a technical fix; demand genuine, rich discussions and engagement about the path forward; and, above all, center the patient in all they do.


Subject(s)
Competency-Based Education; Education, Medical; Humans; Health Personnel; Delivery of Health Care; Health Facilities
9.
Acad Med ; 99(3): 243-246, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38011041

ABSTRACT

In this commentary, the authors explore the tension of balancing high performance standards in medical education with the acceptability of those standards to stakeholders (e.g., learners and patients). The authors then offer a lens through which this tension might be considered and ways forward that focus on both patient outcomes and learner needs. In examining this phenomenon, the authors argue that high performance standards are often necessary. Societal accountability is key to medical education, with the public demanding that training programs prepare physicians to provide high-quality care. Medical schools and residency programs, therefore, require rigorous standards to ensure graduates are ready to care for patients. At the same time, learners' experience is important to consider. Making sure that performance standards are acceptable to stakeholders supports the validity of assessment decisions. Equity should also be central to program evaluation and validity arguments when considering performance standards. Currently, learners across the continuum are variably prepared for the next phase in training and often face inequities in resource availability to meet high passing standards, which may lead to learner attrition. Many students who face these inequities come from underrepresented or disadvantaged backgrounds and are essential to ensuring a diverse medical workforce to meet the needs of patients and society. When these students struggle, it contributes to the leaky pipeline of more socioeconomically and racially diverse applicants. The authors posit that 4 key factors can balance the tension between high performance standards and stakeholder acceptability: standards that are acceptable and defensible, progression that is time variable, requisite support structures that are uniquely tailored for each learner, and assessment systems that are equitably designed.


Subject(s)
Chemistry, Organic; Education, Medical; Humans; Students; Program Evaluation; Health Personnel
11.
Acad Med ; 99(4S Suppl 1): S35-S41, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38109661

ABSTRACT

Precision education (PE) leverages longitudinal data and analytics to tailor educational interventions to improve patient, learner, and system-level outcomes. At present, few programs in medical education can accomplish this goal as they must develop new data streams transformed by analytics to drive trainee learning and program improvement. Other professions, such as Major League Baseball (MLB), have already developed extremely sophisticated approaches to gathering large volumes of precise data points to inform assessment of individual performance. In this perspective, the authors argue that medical education, whose entry into precision assessment is fairly nascent, can look to MLB to learn the possibilities and pitfalls of precision assessment strategies. They describe 3 epochs of player assessment in MLB: observation, analytics (sabermetrics), and technology (Statcast). The longest-tenured approach, observation, relies on scouting and expert opinion. Sabermetrics brought new approaches to analyzing existing data in a way that better predicted which players would help the team win. Statcast created precise, granular data about highly attributable elements of player performance while helping to account for nonplayer factors that confound assessment such as weather, ballpark dimensions, and the performance of other players. Medical education is progressing through similar epochs marked by workplace-based assessment, learning analytics, and novel measurement technologies. The authors explore how medical education can leverage intersectional concepts of MLB player and medical trainee assessment to inform present and future directions of PE.


Subject(s)
Baseball; Education, Medical; Humans; Educational Status; Workplace
12.
Acad Med ; 99(4S Suppl 1): S7-S13, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38109659

ABSTRACT

Previous eras of assessment in medical education have been defined by how assessment is done, from knowledge exams popularized in the 1960s to the emergence of work-based assessment in the 1990s to current efforts to integrate multiple types and sources of performance data through programmatic assessment. Each of these eras was a response to why assessment was performed (e.g., assessing medical knowledge with exams; assessing communication, professionalism, and systems competencies with work-based assessment). Despite the evolution of assessment eras, current evidence highlights that trainees graduate with foundational gaps in the ability to provide high-quality care to patients presenting with common problems, and training program leaders report they graduate trainees they would not trust to care for themselves or their loved ones. In this article, the authors argue that the next era of assessment should be defined by why assessment is done: to ensure high-quality, equitable care. Assessment should focus on ensuring that graduates possess the knowledge, skills, attitudes, and adaptive expertise to meet the needs of all patients, and that they are able to do so in an equitable fashion. The authors explore 2 patient-focused assessment approaches that could help realize the promise of this envisioned era: entrustable professional activities (EPAs) and resident-sensitive quality measures (RSQMs)/TRainee Attributable and Automatable Care Evaluations in Real-time (TRACERs). These examples illustrate how the envisioned next era of assessment can leverage existing and new data to provide precision education assessment that focuses on providing formative and summative feedback to trainees in a manner that seeks to ensure their learning outcomes prepare them to deliver high-quality, equitable patient outcomes.


Subject(s)
Internship and Residency; Quality of Health Care; Humans; Curriculum; Competency-Based Education; Patient Care; Clinical Competence; Education, Medical, Graduate
13.
Med Educ ; 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38088227

ABSTRACT

INTRODUCTION: The real-world mechanisms underlying prospective entrustment decision making (PEDM) by entrustment or clinical competency committees (E/CCCs) are poorly understood. To advance understanding in this area, the authors conducted a realist synthesis of the published literature to address the following research question: In E/CCC efforts to make defensible prospective entrustment decisions (PEDs), what works, for whom, under what circumstances and why? METHODS: Realist work seeks to understand the contexts (C), mechanisms (M) and outcomes (O) that explain how and why things work (or do not). In the authors' study, contexts included individual E/CCC members, E/CCC structures and processes, and training programmes. The outcome (i.e. desired outcome) was a PED. Mechanisms were a substantial focus of the analysis and informed the core findings. To define a final corpus of 52 included papers, the authors searched four databases, screened all results from those searches and performed a full-text review of a subset of screened papers. Data extraction focused on developing context-mechanism-outcome configurations from the papers, which were used to create a theory for how PEDM leads to PEDs. RESULTS: PEDM is often driven by default (non-deliberate) decision making rather than a deliberate process of deciding whether a trainee should be entrusted or not. When defaulting, some E/CCCs find red flags that sometimes lead to being more deliberate with decision making. E/CCCs that seek to be deliberate describe PEDM that can be effortful (when data are insufficient or incongruent) or effortless (when data are robust and tell a congruent story about a trainee). Both information about trainee trustworthiness and the sufficiency of data about trainee performance influence PEDM. Several moderators influence what is considered to be sufficient data, how trustworthiness data are viewed and how PEDM is carried out. These include perceived consequences and associated risks, E/CCC member trust propensity, E/CCC member personal knowledge of and experience with trainees and E/CCC structures and processes. DISCUSSION: PEDM is rarely deliberate but should be. Data about trainee trustworthiness are foundational to making PEDs. Bias, equity and fairness are nearly absent from the papers in this synthesis, and future efforts must seek to advance understanding and practice regarding the roles of bias, equity and fairness in PEDM.

14.
JMIR Med Educ ; 9: e50373, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145471

ABSTRACT

BACKGROUND: The rapid trajectory of artificial intelligence (AI) development and advancement is quickly outpacing society's ability to determine its future role. As AI continues to transform various aspects of our lives, one critical question arises for medical education: what will be the nature of education, teaching, and learning in a future world where the acquisition, retention, and application of knowledge in the traditional sense are fundamentally altered by AI? OBJECTIVE: The purpose of this perspective is to plan for the intersection of health care and medical education in the future. METHODS: We used GPT-4 and scenario-based strategic planning techniques to craft 4 hypothetical future worlds influenced by AI's integration into health care and medical education. This method, used by organizations such as Shell and the Accreditation Council for Graduate Medical Education, assesses readiness for alternative futures and effectively manages uncertainty, risk, and opportunity. The detailed scenarios provide insights into potential environments the medical profession may face and lay the foundation for hypothesis generation and idea-building regarding responsible AI implementation. RESULTS: The following 4 worlds were created using OpenAI's GPT model: AI Harmony, AI Conflict, The World of Ecological Balance, and Existential Risk. Risks include disinformation and misinformation, loss of privacy, widening inequity, erosion of human autonomy, and ethical dilemmas. Benefits involve improved efficiency, personalized interventions, enhanced collaboration, early detection, and accelerated research. CONCLUSIONS: To ensure responsible AI use, the authors suggest focusing on 3 key areas: developing a robust ethical framework, fostering interdisciplinary collaboration, and investing in education and training. A strong ethical framework emphasizes patient safety, privacy, and autonomy while promoting equity and inclusivity. Interdisciplinary collaboration encourages cooperation among various experts in developing and implementing AI technologies, ensuring that they address the complex needs and challenges in health care and medical education. Investing in education and training prepares professionals and trainees with the necessary skills and knowledge to effectively use and critically evaluate AI technologies. The integration of AI in health care and medical education presents a critical juncture between transformative advancements and significant risks. By working together to address both immediate and long-term risks and consequences, we can ensure that AI integration leads to a more equitable, sustainable, and prosperous future for both health care and medical education. As we engage with AI technologies, our collective actions will ultimately determine whether health care and medical education harness AI's power while ensuring the safety and well-being of humanity.


Subject(s)
Artificial Intelligence; Education, Medical; Humans; Software; Educational Status; Humanities
16.
Acad Med ; 98(11S): S98-S107, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37983402

ABSTRACT

PURPOSE: The process of screening and selecting trainees for postgraduate training has evolved significantly in recent years, yet remains a daunting task. Postgraduate training directors seek ways to feasibly and defensibly select candidates, which has resulted in an explosion of literature seeking to identify root causes for the problems observed in postgraduate selection and generate viable solutions. The authors therefore conducted a scoping review to analyze the problems and priorities presented within the postgraduate selection literature, to explore practical implications, and to present a research agenda. METHOD: Between May 2021 and February 2022, the authors searched PubMed, EMBASE, Web of Science, ERIC, and Google Scholar for English-language literature published after 2000. Articles that described postgraduate selection were eligible for inclusion; a total of 2,273 articles ultimately met this criterion. Thematic analysis was performed on a subset of 100 articles examining priorities and problems within postgraduate selection, sampled to ensure broad thematic and geographical variation across the eligible articles. RESULTS: Five distinct perspectives or value statements were identified in the thematic analysis: (1) using available metrics to predict performance in postgraduate training; (2) identifying the best applicants via competitive comparison; (3) seeking alignment between applicant and program in the selection process; (4) ensuring diversity, mitigation of bias, and equity in the selection process; and (5) optimizing the logistics or mechanics of the selection process. CONCLUSIONS: This review provides insight into the framing and value statements authors use to describe postgraduate selection within the literature. The identified value statements provide a window into the assumptions and subsequent implications of viewing postgraduate selection through each of these lenses. Future research must consider the outcomes and consequences of the value statement chosen and the impact on current and future approaches to postgraduate selection.


Subject(s)
Education, Medical; Humans; Education, Medical/methods; Language
17.
Adv Med Educ Pract ; 14: 901-911, 2023.
Article in English | MEDLINE | ID: mdl-37614829

ABSTRACT

Background: Early identification of shock is vital in decreasing morbidity and mortality in the pediatric population. Although residents are taught the perfusion portion of the rapid cardiopulmonary assessment at our institution, few perform it at the bedside, with 8.4% completing 1 part of the assessment and 9.7% verbalizing their findings. Newer technologies, including virtual reality (VR), offer immersive training to close this clinical gap. Objective: To assess senior pediatric residents' performance of a perfusion exam and verbalization of their perfusion assessment following VR-based Just-in-Time/Just-in-Place (JITP) training compared to video-based JITP training. We hypothesized that JITP media training was feasible and that VR-based JITP training would be more effective than video-based training. Methods: Residents were randomized to VR or video-based training during shifts in the emergency department. Clinical performance was assessed by review of a video-recorded patient encounter using a standardized assessment tool and by an in-person, two-question shock assessment. Residents completed a survey assessing attitudes toward their intervention at the time of training. Results: Eighty-five senior pediatric residents were enrolled; 84 completed training. Sixty-four (76%) residents had a patient encounter available for video review (VR 33; video 31). Fourteen residents in the VR group (42.4%, 95% CI 25.5% to 60.8%) and 13 residents in the video group (41.9%, 95% CI 24.6% to 60.9%) both completed a perfusion exam and verbalized an assessment during their next clinical encounter (chi-square P value 1.00). Fifty-one of 64 residents (79.7%) completed the two-step shock assessment; 50 (98%) agreed with the supervising physician's assessment. VR was rated more effective than reading, low-fidelity manikin, standardized patient encounters, traditional didactic teaching, and online learning. Video was rated more effective than online learning, traditional didactic teaching, and reading. Conclusion: Novel video and VR JITP perfusion exam and assessment trainings are impactful and well received by senior pediatric residents.
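For readers who want to see how interval estimates like "42.4%, 95% CI 25.5% to 60.8%" and the between-group chi-square comparison can be reproduced, the following minimal Python sketch uses SciPy. It is an illustration only, not the study's published analysis: the group sizes (33 VR, 31 video) are taken from the abstract, and the exact (Clopper-Pearson) CI method is an assumption about how such intervals are typically computed.

# Minimal sketch (not from the study): proportion CI and 2x2 chi-square test.
# Assumes SciPy >= 1.7 for binomtest().proportion_ci().
from scipy.stats import binomtest, chi2_contingency

# Exact (Clopper-Pearson) 95% CI for the VR group: 14 of 33 residents
ci = binomtest(k=14, n=33).proportion_ci(confidence_level=0.95, method="exact")
print(f"VR group: {14/33:.1%} (95% CI {ci.low:.1%} to {ci.high:.1%})")

# 2x2 table: completed-and-verbalized vs. not, by training arm (VR, video)
table = [[14, 33 - 14],
         [13, 31 - 13]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square p-value: {p:.2f}")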

18.
Perspect Med Educ ; 12(1): 282-293, 2023.
Article in English | MEDLINE | ID: mdl-37520509

ABSTRACT

Coaching is proposed as a means of improving the learning culture of medicine: by fostering trusting teacher-learner relationships, it encourages learners to embrace feedback and make the most of failure. This paper posits that a cultural shift is necessary to fully harness the potential of coaching in graduate medical education. We introduce the deliberately developmental organization framework, a conceptual model focusing on three core dimensions: developmental communities, developmental aspirations, and developmental practices. These dimensions broaden the scope of coaching interactions. Implementing this organizational change within graduate medical education might be challenging, yet we argue that embracing deliberately developmental principles can embed coaching into everyday interactions and foster a culture in which discussing failure to maximize learning becomes acceptable. By applying the dimensions of developmental communities, aspirations, and practices, we present a six-principle roadmap towards transforming graduate medical education training programs into deliberately developmental organizations.

19.
Adv Health Sci Educ Theory Pract ; 28(5): 1697-1709, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37140661

ABSTRACT

In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.


Subject(s)
Education, Medical; Educational Measurement; Humans; Reproducibility of Results
20.
Perspect Med Educ ; 12(1): 149-159, 2023.
Article in English | MEDLINE | ID: mdl-37215538

ABSTRACT

Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure, TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs), attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.


Subject(s)
Education, Medical, Graduate; Internship and Residency; Humans; Educational Measurement; Learning; Feedback