Results 1 - 20 of 89
1.
Acad Med ; 99(4S Suppl 1): S57-S63, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38166205

ABSTRACT

High-quality precision education (PE) aims to enhance outcomes for learners and society by incorporating longitudinal data and analytics to shape personalized learning strategies. However, existing educational data collection methods often suffer from fragmentation, leading to gaps in understanding learner and program performance. In this article, the authors present a novel approach to PE at the University of Cincinnati, focusing on the Ambulatory Long Block, a year-long continuous ambulatory group-practice experience. Over the last 17 years, the Ambulatory Long Block has evolved into a sophisticated data collection and analysis system that integrates feedback from various stakeholders, as well as learner self-assessment, electronic health record utilization information, and clinical throughput metrics. The authors detail their approach to data prioritization, collection, analysis, visualization, and feedback, providing a practical example of PE in action. This model has been associated with improvements in both learner performance and patient care outcomes. The authors also highlight the potential for real-time data review through automation and emphasize the importance of collaboration in advancing PE. Generalizable principles include designing learning environments with continuity as a central feature, gathering both quantitative and qualitative performance data from interprofessional assessors, using this information to supplement traditional workplace-based assessments, and pairing it with self-assessments. The authors advocate for criterion referencing over normative comparisons, using user-friendly data visualizations, and employing tailored coaching strategies for individual learners. The Ambulatory Long Block model underscores the potential of PE to drive improvements in medical education and health care outcomes.
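To make the criterion-referencing and multisource-data principles concrete, a minimal sketch in Python (pandas) follows; the records, field names, and 1-5 scale are illustrative assumptions, not the Long Block's actual schema.

```python
import pandas as pd

# Illustrative multisource assessment records; the column names and the
# 1-5 scale are assumptions, not the Long Block's actual data model.
records = pd.DataFrame({
    "learner_id": ["r01", "r01", "r02", "r02"],
    "source":     ["faculty", "nurse", "faculty", "self"],
    "domain":     ["communication", "communication", "throughput", "throughput"],
    "score":      [4.0, 3.5, 2.5, 3.0],
})

CRITERION = 3.0  # a fixed, criterion-referenced benchmark

# Judge each learner/domain against the fixed criterion rather than
# ranking learners against one another (normative comparison).
summary = (
    records.groupby(["learner_id", "domain"])["score"]
           .agg(mean="mean", n="count")
           .reset_index()
)
summary["meets_criterion"] = summary["mean"] >= CRITERION
print(summary)  # a visualization-ready table for learner-facing dashboards
```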


Subject(s)
Education, Medical , Learning , Humans , Feedback , Benchmarking
2.
Acad Med ; 99(1): 28-34, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37643579

ABSTRACT

Competency-based medical education (CBME) depends on effective programs of assessment to achieve the desired outcomes and goals of training. Residency programs must be able to defend clinical competency committee (CCC) group decisions about learner readiness for practice, including decisions about time-variable resident promotion and graduation. In this article, the authors describe why CCC group decision-making processes should be supported by theory and review 3 theories they used in designing their group processes: social decision scheme theory, functional theory, and wisdom of crowds. They describe how these theories were applied in a competency-based, time-variable training pilot, Transitioning in Internal Medicine Education Leveraging Entrustment Scores Synthesis (TIMELESS), run at the University of Cincinnati internal medicine residency program in 2020-2022, to increase the defensibility of their CCC group decision-making. This work serves as an example of how the use of theory can bolster validity arguments supporting group decisions about resident readiness for practice.
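As one concrete, purely hypothetical reading of the wisdom-of-crowds idea, independent member ratings can be aggregated before group discussion; the sketch below is not the TIMELESS implementation.

```python
import statistics

# Hypothetical independent readiness ratings (1-5) from CCC members,
# collected before group discussion so judgments stay independent.
ratings = {"member_a": 4, "member_b": 3, "member_c": 4, "member_d": 5, "member_e": 4}

aggregate = statistics.median(ratings.values())
spread = max(ratings.values()) - min(ratings.values())

# Wide disagreement is a cue for structured discussion (functional theory)
# rather than a simple majority tally (social decision scheme theory).
needs_discussion = spread >= 2
print(f"aggregate={aggregate}, spread={spread}, discuss={needs_discussion}")
```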


Subject(s)
Education, Medical, Graduate , Internship and Residency , Humans , Clinical Competence , Decision Making , Dissent and Disputes , Competency-Based Education
3.
Med Teach ; 46(3): 330-336, 2024 03.
Article in English | MEDLINE | ID: mdl-37917988

ABSTRACT

Despite the numerous calls for integrating quality improvement and patient safety (QIPS) curricula into health professions education, there are limited examples of effective implementation for early learners. Typically, pre-clinical QIPS experiences involve lectures or lessons that are disconnected from the practice of medicine. Consequently, students often prioritize other content they consider more important. As a result, they may enter clinical settings without essential QIPS skills and struggle to incorporate these concepts into their early professional identity formation. In this paper, we present twelve tips aimed at assisting educators in developing QIPS education early in the curricula of health professions students. These tips address various key issues, including aligning incentives, providing longitudinal experiences, incorporating real-world care outcomes, optimizing learning environments, communicating successes, and continually enhancing education and care delivery processes.


Subject(s)
Medicine , Students, Health Occupations , Humans , Quality Improvement , Curriculum , Learning
4.
Acad Med ; 99(4S Suppl 1): S35-S41, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38109661

ABSTRACT

Precision education (PE) leverages longitudinal data and analytics to tailor educational interventions to improve patient, learner, and system-level outcomes. At present, few programs in medical education can accomplish this goal, as they must first develop new data streams transformed by analytics to drive trainee learning and program improvement. Other professions, such as Major League Baseball (MLB), have already developed extremely sophisticated approaches to gathering large volumes of precise data points to inform assessment of individual performance. In this perspective, the authors argue that medical education, whose entry into precision assessment is fairly nascent, can look to MLB to learn the possibilities and pitfalls of precision assessment strategies. They describe 3 epochs of player assessment in MLB: observation, analytics (sabermetrics), and technology (Statcast). The longest-tenured approach, observation, relies on scouting and expert opinion. Sabermetrics brought new approaches to analyzing existing data in ways that better predicted which players would help the team win. Statcast created precise, granular data about highly attributable elements of player performance while helping to account for nonplayer factors that confound assessment, such as weather, ballpark dimensions, and the performance of other players. Medical education is progressing through similar epochs marked by workplace-based assessment, learning analytics, and novel measurement technologies. The authors explore how medical education can leverage intersectional concepts of MLB player and medical trainee assessment to inform present and future directions of PE.
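For a flavor of the sabermetric epoch, one classic example (not drawn from the article) is Bill James's Pythagorean expectation, which predicts a team's winning percentage from runs scored and allowed better than past win-loss record alone:

```python
def pythagorean_win_pct(runs_scored: float, runs_allowed: float,
                        exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: predicted winning percentage
    from run differential. The classic form uses exponent 2; values near
    1.83 fit historical MLB seasons slightly better."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# A team scoring 800 runs and allowing 700 projects to a ~.566 winning
# percentage -- often a better predictor of future wins than past record.
print(f"{pythagorean_win_pct(800, 700):.3f}")
```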


Subject(s)
Baseball , Education, Medical , Humans , Educational Status , Workplace
5.
J Grad Med Educ ; 15(6): 758, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38045933
6.
JMIR Med Educ ; 9: e50373, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145471

ABSTRACT

BACKGROUND: The rapid trajectory of artificial intelligence (AI) development and advancement is quickly outpacing society's ability to determine its future role. As AI continues to transform various aspects of our lives, one critical question arises for medical education: what will be the nature of education, teaching, and learning in a future world where the acquisition, retention, and application of knowledge in the traditional sense are fundamentally altered by AI? OBJECTIVE: The purpose of this perspective is to plan for the intersection of health care and medical education in the future. METHODS: We used GPT-4 and scenario-based strategic planning techniques to craft 4 hypothetical future worlds influenced by AI's integration into health care and medical education. This method, used by organizations such as Shell and the Accreditation Council for Graduate Medical Education, assesses readiness for alternative futures and effectively manages uncertainty, risk, and opportunity. The detailed scenarios provide insights into potential environments the medical profession may face and lay the foundation for hypothesis generation and idea-building regarding responsible AI implementation. RESULTS: The following 4 worlds were created using OpenAI's GPT model: AI Harmony, AI Conflict, The World of Ecological Balance, and Existential Risk. Risks include disinformation and misinformation, loss of privacy, widening inequity, erosion of human autonomy, and ethical dilemmas. Benefits involve improved efficiency, personalized interventions, enhanced collaboration, early detection, and accelerated research. CONCLUSIONS: To ensure responsible AI use, the authors suggest focusing on 3 key areas: developing a robust ethical framework, fostering interdisciplinary collaboration, and investing in education and training. A strong ethical framework emphasizes patient safety, privacy, and autonomy while promoting equity and inclusivity. Interdisciplinary collaboration encourages cooperation among various experts in developing and implementing AI technologies, ensuring that they address the complex needs and challenges in health care and medical education. Investing in education and training prepares professionals and trainees with the necessary skills and knowledge to use and critically evaluate AI technologies effectively. The integration of AI in health care and medical education presents a critical juncture between transformative advancements and significant risks. By working together to address both immediate and long-term risks and consequences, we can ensure that AI integration leads to a more equitable, sustainable, and prosperous future for both health care and medical education. As we engage with AI technologies, our collective actions will ultimately determine whether the future of health care and medical education harnesses AI's power while safeguarding the well-being of humanity.
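The scenario-generation step could, in principle, be reproduced with a short script; the prompt below is an illustrative assumption rather than the authors' actual prompt, and the sketch assumes the openai Python SDK (v1+) with an API key available in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative scenario-planning prompt; the authors' actual prompting
# strategy is not reproduced here.
prompt = (
    "Using scenario-based strategic planning, draft 4 divergent futures for "
    "AI in health care and medical education. For each scenario give a name, "
    "a short narrative, key risks, and key benefits."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```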


Subject(s)
Artificial Intelligence , Education, Medical , Humans , Software , Educational Status , Humanities
7.
Appl Clin Inform ; 14(5): 996-1007, 2023 10.
Article in English | MEDLINE | ID: mdl-38122817

ABSTRACT

OBJECTIVES: Clinical Competency Committee (CCC) members employ varied approaches to the review process. This makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently utilized by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and the design recommendations it generated. METHODS: Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment. RESULTS: Through affinity diagramming, the research team identified and organized the opportunities for improvement in the current system that study participants expressed. These areas include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants. CONCLUSION: We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations. Programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain. This study therefore provides best practices for other residency programs to evaluate current competency assessment tools and to develop new ones.
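One simple, hypothetical way to turn a ranking experiment into an ordered list of dashboard components is mean-rank aggregation; the component names below are invented, and this is not the study's actual analysis.

```python
from collections import defaultdict

# Invented ranking-experiment data: each participant orders components
# from most important (rank 1) to least important.
rankings = {
    "participant_1": ["wba_scores", "milestones", "narratives"],
    "participant_2": ["wba_scores", "narratives", "milestones"],
    "participant_3": ["milestones", "wba_scores", "narratives"],
}

positions = defaultdict(list)
for ordered in rankings.values():
    for rank, component in enumerate(ordered, start=1):
        positions[component].append(rank)

# Lower mean rank = higher priority for the dashboard design.
for component in sorted(positions, key=lambda c: sum(positions[c]) / len(positions[c])):
    mean_rank = sum(positions[component]) / len(positions[component])
    print(f"{component}: mean rank {mean_rank:.2f}")
```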


Subject(s)
Internship and Residency , Humans , Clinical Competence , Research Design
9.
J Grad Med Educ ; 15(3): 303-305, 2023 06.
Article in English | MEDLINE | ID: mdl-37363663
10.
Perspect Med Educ ; 12(1): 149-159, 2023.
Article in English | MEDLINE | ID: mdl-37215538

ABSTRACT

Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure, TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs), attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.
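To suggest what a TRACER might look like in code, here is a minimal sketch of one hypothetical measure, an indicated-VTE-prophylaxis ordering rate attributed to a resident; the paper defines the framework, and the data model below is an assumption.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encounter records; a production TRACER would live in the
# EHR data pipeline with far more careful attribution logic.
@dataclass
class Encounter:
    trainee_id: str            # resident of record for the admission
    prophylaxis_indicated: bool
    prophylaxis_ordered: bool

def vte_prophylaxis_rate(encounters: list,
                         trainee_id: str) -> Optional[float]:
    """Among this trainee's encounters where VTE prophylaxis was indicated,
    the share where it was ordered: meaningful, attributable, automatable,
    and computable in near real time from routinely collected EHR data."""
    eligible = [e for e in encounters
                if e.trainee_id == trainee_id and e.prophylaxis_indicated]
    if not eligible:
        return None  # no denominator; report nothing rather than zero
    return sum(e.prophylaxis_ordered for e in eligible) / len(eligible)
```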


Subject(s)
Education, Medical, Graduate , Internship and Residency , Humans , Educational Measurement , Learning , Feedback
11.
Acad Med ; 98(8S): S50-S56, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37071695

ABSTRACT

Inequity in assessment has been described as a "wicked problem": an issue with complex roots, inherent tensions, and unclear solutions. To address inequity, health professions educators must critically examine their implicit understandings of truth and knowledge (i.e., their epistemologies) with regard to educational assessment before jumping to solutions. The authors use the analogy of a ship (program of assessment) sailing on different seas (epistemologies) to describe their journey in seeking to improve equity in assessment. Should the education community repair the ship of assessment while sailing, or should the ship be scrapped and built anew? The authors share a case study of a well-developed internal medicine residency program of assessment and describe efforts to evaluate and enable equity using various epistemological lenses. They first used a postpositivist lens to evaluate whether the systems and strategies aligned with best practices, but found they did not capture important nuances of what equitable assessment entails. Next, they used a constructivist approach to improve stakeholder engagement, but found they still failed to question the inequitable assumptions inherent to their systems and strategies. Finally, they describe a shift to critical epistemologies, seeking to understand who experiences inequity and harm in order to dismantle inequitable systems and create better ones. The authors describe how each unique sea promoted different adaptations to their ship, and challenge programs to sail through new epistemological waters as a starting point for making their own ships more equitable.


Subject(s)
Educational Measurement , Ships , Humans
12.
J Contin Educ Health Prof ; 43(1): 52-59, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36849429

ABSTRACT

The information systems designed to support clinical care have evolved separately from those that support health professions education. This has resulted in a considerable digital divide between patient care and education, one that poorly serves practitioners and organizations, even as learning becomes ever more important to both. In this perspective, we advocate for the enhancement of existing health information systems so that they intentionally facilitate learning. We describe three well-regarded frameworks for learning that can point toward how health care information systems can best evolve to support it. The Master Adaptive Learner model suggests ways that the individual practitioner can best organize their activities to ensure continual self-improvement. The plan-do-study-act (PDSA) cycle similarly proposes actions for improvement, but at the level of a health care organization's workflow. Senge's Five Disciplines of the Learning Organization, a more general framework from the business literature, serves to further inform how disparate information and knowledge flows can be managed for continual improvement. Our main thesis holds that these types of learning frameworks should inform the design and integration of information systems serving the health professions. An underutilized mediator of educational improvement is the ubiquitous electronic health record. The authors list learning analytics opportunities, including potential modifications of learning management systems and the electronic health record, that would enhance health professions education and support the shared goal of delivering high-quality, evidence-based health care.
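As a toy encoding of the PDSA loop the perspective invokes (the frameworks are conceptual; this rendering is the illustration's assumption, not the article's):

```python
# Illustrative skeleton of a plan-do-study-act (PDSA) improvement loop;
# the callables are placeholders for a team's real workflow steps.
def pdsa(plan, do, study, act, cycles: int = 3):
    """Iterate small tests of change: hypothesize, try, measure, decide."""
    learning = None
    for _ in range(cycles):
        change = plan(learning)    # plan a small, testable change
        outcome = do(change)       # implement it in the workflow
        result = study(outcome)    # compare measurements to predictions
        learning = act(result)     # adopt, adapt, or abandon
    return learning

# Trivial usage with stand-in steps:
final = pdsa(
    plan=lambda prior: {"change": "pre-round EHR checklist", "prior": prior},
    do=lambda change: {"minutes_saved": 4, **change},
    study=lambda outcome: outcome["minutes_saved"] >= 5,
    act=lambda met_goal: "adopt" if met_goal else "adapt",
)
print(final)  # 'adapt' -> refine the change for the next cycle
```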


Subject(s)
Electronic Health Records , Learning , Humans , Health Occupations , Knowledge
13.
Acad Med ; 98(7): 828-835, 2023 07 01.
Article in English | MEDLINE | ID: mdl-36656286

ABSTRACT

PURPOSE: As competency-based medical education has become the predominant graduate medical education training model, interest in time-variable training has grown. Despite multiple competency-based time-variable training (CBTVT) pilots ongoing in the United States, little is known about how this training approach impacts learners. The authors aim to explore how their CBTVT pilot program impacted resident motivation for learning, assessment, and feedback. METHOD: The authors performed a qualitative educational case study on the Transitioning in Internal Medicine Education Leveraging Entrustment Scores Synthesis (TIMELESS) program at the University of Cincinnati from October 2020 through March 2022. Semistructured interviews were conducted with TIMELESS residents (n = 9) approximately every 6 months to capture experiences over time. The authors used inductive thematic analysis to develop themes and compared their findings with existing theories of learner motivation. RESULTS: The authors developed 2 themes: TIMELESS had variable effects on residents' motivation for learning and TIMELESS increased resident engagement with and awareness of the program of assessment. Participants reported increased motivation to learn and seek assessment, though some felt a tension between performance (e.g., advancement through the residency program) and growth (e.g., improvement as a physician). Participants became more aware of the quality of assessments they received, in part due to TIMELESS increasing the perceived stakes of assessment, and reported being more deliberate when assessing other residents. CONCLUSIONS: Resident motivation for learning, assessment, and feedback was impacted in ways that the authors contextualize using current theories of learner motivation (i.e., goal orientation theory and attribution theory). Future research should investigate how interventions, such as coaching, guided learner reflection, or various CBTVT implementation strategies, can help keep learners oriented toward mastery learning rather than toward performance.


Subject(s)
Internship and Residency , Motivation , Humans , United States , Feedback , Learning , Education, Medical, Graduate , Competency-Based Education , Clinical Competence
15.
Can Med Educ J ; 13(4): 82-91, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36091737

ABSTRACT

Competency-based medical education (CBME) shifts us from static assessment of learning to developmental assessment for learning. However, implementation challenges associated with CBME remain a major hurdle, especially after training and into practice. The full benefit of developmental assessment for learning over time requires collaboration, cooperation, and trust among learners, regulators, and the public that transcends each individual phase. The authors introduce the concept of an "Education Passport" that provides evidence of readiness to travel across the boundaries between undergraduate medical education, graduate medical education, and the expanse of practice. The Education Passport uses programmatic assessment, a process of collecting numerous low-stakes assessments from multiple sources over time, judging these data using criterion referencing, and enhancing this with coaching and competency committees to understand, process, and accelerate growth without end. Information in the Passport is housed on a cloud-based server controlled by the student/physician over the course of training and practice. These data are mapped to various educational frameworks, such as Entrustable Professional Activities or milestones, for ease of longitudinal performance tracking. At each stage of education and practice, the student/physician grants Passport access to all entities that can provide data on performance. Database managers use learning analytics to connect and display information over time that is then used by the student/physician, their assigned or chosen coaches, and review committees to maintain or improve performance. Global information is also collected and analyzed to improve the entire system of learning and care. Developing a true continuum that embraces performance and growth will be a long-term adaptive challenge across many organizations and jurisdictions and will require coordination from regulatory and national agencies. An Education Passport could also serve as an organizing tool and will require research and high-value communication strategies to maximize public trust in the work.
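A minimal sketch of what passport records and access grants might look like as data structures follows; the article proposes the concept, and every name below is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssessmentRecord:
    source: str         # e.g., "faculty", "peer", "patient"
    framework: str      # e.g., "EPA" or "milestone"
    framework_id: str   # e.g., "EPA-4"
    rating: int         # one low-stakes, criterion-referenced data point
    observed_on: date

@dataclass
class EducationPassport:
    holder_id: str      # controlled by the student/physician
    records: list = field(default_factory=list)
    grants: set = field(default_factory=set)   # entities allowed to add data

    def grant_access(self, entity: str) -> None:
        self.grants.add(entity)

    def trajectory(self, framework_id: str) -> list:
        """Longitudinal points for one EPA/milestone, for coaches and
        competency committees to review across training and practice."""
        return sorted((r.observed_on, r.rating) for r in self.records
                      if r.framework_id == framework_id)
```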



17.
J Gen Intern Med ; 37(14): 3670-3675, 2022 11.
Article in English | MEDLINE | ID: mdl-35377114

ABSTRACT

BACKGROUND: Clinical competency committees (CCCs) and residency program leaders may find it difficult to interpret workplace-based assessment (WBA) ratings knowing that contextual factors and bias play a large role. OBJECTIVE: We describe the development of an expected entrustment score for resident performance within the context of our well-developed Observable Practice Activity (OPA) WBA system. DESIGN: Observational study. PARTICIPANTS: Internal medicine residents. MAIN MEASURE: Entrustment. KEY RESULTS: Each individual resident had observed entrustment scores with a unique relationship to the expected entrustment scores. Many residents' observed scores oscillated closely around the expected scores. However, distinct performance patterns did emerge. CONCLUSIONS: We used regression modeling and leveraged large numbers of historical WBA data points to produce an expected entrustment score that served as a guidepost for performance interpretation.
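In the spirit of the paper's approach (the actual model and data are the authors', not reproduced here), a toy version of an expected-score curve fit to historical WBA data might look like this:

```python
import numpy as np

# Invented historical WBA data: months into training vs. mean entrustment
# rating; the paper's regression on real data is not reproduced here.
months = np.array([1, 3, 6, 9, 12, 18, 24, 30, 36])
ratings = np.array([2.1, 2.4, 2.8, 3.0, 3.2, 3.6, 3.9, 4.2, 4.4])

slope, intercept = np.polyfit(months, ratings, deg=1)

def expected_entrustment(month: float) -> float:
    """Cohort-derived guidepost: the expected score at a point in training,
    around which an individual's observed scores may oscillate."""
    return slope * month + intercept

observed, month = 2.9, 9
delta = observed - expected_entrustment(month)
print(f"expected={expected_entrustment(month):.2f}, observed={observed}, delta={delta:+.2f}")
```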


Subject(s)
Internship and Residency , Humans , Clinical Competence
19.
Acad Med ; 97(2): 193-199, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34166233

ABSTRACT

Once medical students attain a certain level of medical knowledge, success in residency often depends on noncognitive attributes, such as conscientiousness, empathy, and grit. These traits are significantly more difficult to assess than cognitive performance, creating a potential gap in measurement. Despite its promise, competency-based medical education (CBME) has yet to bridge this gap, partly due to a lack of well-defined noncognitive observable behaviors that assessors and educators can use in formative and summative assessment. As a result, typical undergraduate-to-graduate medical education handovers stress standardized test scores, and program directors trust little of the remaining information they receive, sometimes turning to third-party companies to better describe potential residency candidates. The authors have created a list of noncognitive attributes, with associated definitions and noncognitive skills, called observable practice activities (OPAs), written for learners across the continuum to help educators collect assessment data that can be turned into valuable information. OPAs are discrete work-based assessment elements collected over time and mapped to larger structures, such as milestones, entrustable professional activities, or competencies, to create learning trajectories for formative and summative decisions. Medical schools and graduate medical education programs could adapt these OPAs or determine ways to create new ones specific to their own contexts. Once OPAs are created, programs will have to find effective ways to assess them, interpret the data, determine consequence validity, and communicate information to learners and institutions. The authors discuss the need for culture change surrounding assessment, even for the adoption of behavior-based tools such as OPAs, including grounding the work in a growth mindset and the broad underpinnings of CBME. Ultimately, improving assessment of noncognitive capacity should benefit learners, schools, programs, and most importantly, patients.
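As an illustration of how noncognitive OPAs might be mapped to larger structures, consider the sketch below; the attribute names and milestone IDs are invented, not the authors' list.

```python
# Invented examples of noncognitive observable practice activities (OPAs)
# mapped to a noncognitive attribute and to milestone-style structures.
OPA_MAP = {
    "arrives prepared for team rounds": {
        "attribute": "conscientiousness",
        "milestones": ["PROF-1"],
    },
    "acknowledges and incorporates feedback": {
        "attribute": "growth mindset",
        "milestones": ["PBLI-2"],
    },
}

def milestones_for(opa: str) -> list:
    """Roll one low-stakes OPA observation up to the milestone(s) it
    informs, so repeated observations form a learning trajectory."""
    entry = OPA_MAP.get(opa)
    return entry["milestones"] if entry else []

print(milestones_for("arrives prepared for team rounds"))  # ['PROF-1']
```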


Subject(s)
Clinical Competence/statistics & numerical data , Competency-Based Education/standards , Education, Medical, Graduate/statistics & numerical data , Education, Medical, Undergraduate/statistics & numerical data , Physicians/standards , Clinical Competence/standards , Internship and Residency/statistics & numerical data , Physicians/statistics & numerical data