An implementation and longitudinal evaluation framework of remote quality improvement initiatives
American Journal of Respiratory and Critical Care Medicine ; 203(9), 2021.
Article in English | EMBASE | ID: covidwho-1277466
ABSTRACT

Objective:

Virtual learning experiences have become widely used during the ongoing COVID-19 global crisis. Given their cost-effectiveness, accessibility, and flexibility, remote training experiences are likely to assume a permanent and expanded role in medical education and quality improvement initiatives. However, little is known about how best to measure the effectiveness of remote training interventions. The Checklist for Early Recognition and Treatment of Acute Illness and Injury (CERTAIN) is an established critical care quality improvement program with evidence of improved care processes and patient outcomes in an international quality improvement trial. Our aim was to develop a structured implementation and longitudinal evaluation framework that measures the complex contributors to the impact of this remote training program, including its incorporation into processes of care and its sustainment over time.

Methods:

We convened an international topic review group comprising individuals with diverse clinical expertise, nationalities, and experience in medical education, quality improvement, implementation science, and research methodology. We recruited individuals with experience designing and participating in a variety of remote medical training programs, including teleconferences, tele-consults, online video/chat platforms, and virtual simulation classrooms. Through a series of facilitated discussions, we directed the group to develop a conceptual framework to guide the development of remote learning programs, along with accompanying evaluation tools to measure their impact.

Results:

The review group members included education experts and continuing medical education participants from China and the United States with practice backgrounds in Critical Care, Internal Medicine, Anesthesiology, and Emergency Medicine. The group developed a conceptual framework based on the CIPP (context-input-process-product) quality evaluation model. The framework comprises three phases: before, during, and after the remote training. The proposed quantitative and qualitative evaluation tools incorporate the Proctor taxonomy, an expansion of the widely used RE-AIM framework for categorizing implementation outcomes, spanning early-stage (i.e., acceptability, appropriateness, feasibility), mid-stage (i.e., adoption, fidelity), and late-stage (i.e., sustainability) outcomes to provide a more complete understanding of the implementation process and facilitate generalization of our findings. Elements of the Logic Model were also used to guide the program development process.

Conclusions:

We propose a dynamic, longitudinal implementation evaluation framework with sufficient rigor and flexibility to meet the needs of existing and emerging remote medical training programs in global practice settings. The outcomes of these mixed-methods analyses will provide a robust toolbox to guide the design, delivery, implementation, and sustainment of remote medical education programs.

Full text: Available Collection: Databases of international organizations Database: EMBASE Type of study: Experimental Studies / Prognostic study Language: English Journal: American Journal of Respiratory and Critical Care Medicine Year: 2021 Document Type: Article
