Results 1 - 20 of 61
1.
medRxiv ; 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38562678

ABSTRACT

Suicide prevention requires risk identification, appropriate intervention, and follow-up. Traditional risk identification relies on patient self-reporting, support network reporting, or face-to-face screening with validated instruments or history and physical exam. In the last decade, statistical risk models have been studied and more recently deployed to augment clinical judgment. Models have generally been found to have low precision or to be problematic at scale due to low incidence. Few have been tested in clinical practice, and none have been tested in clinical trials to our knowledge. Methods: We report the results of a pragmatic randomized controlled trial (RCT) in three outpatient adult Neurology clinic settings. This two-arm trial compared the effectiveness of Interruptive and Non-Interruptive Clinical Decision Support (CDS) to prompt further screening of suicidal ideation for those predicted to be high risk using a real-time, validated statistical model of suicide attempt risk, with the decision to screen as the primary end point. Secondary outcomes included rates of suicidal ideation and attempts in both arms. Manual chart review of every trial encounter was used to determine if suicide risk assessment was subsequently documented. Results: From August 16, 2022, through February 16, 2023, our study randomized 596 patient encounters across 561 patients for providers to receive either Interruptive or Non-Interruptive CDS in a 1:1 ratio. Adjusting for provider cluster effects, Interruptive CDS led to significantly higher numbers of decisions to screen (42%=121/289 encounters) compared to Non-Interruptive CDS (4%=12/307) (odds ratio=17.7, p-value <0.001). Secondarily, no documented episodes of suicidal ideation or attempts occurred in either arm.
While the proportion of documented assessments among those noting the decision to screen was higher for providers in the Non-Interruptive arm (92%=11/12) than in the Interruptive arm (52%=63/121), the Interruptive CDS was associated with more frequent documentation of suicide risk assessment (63/289 encounters compared to 11/307, p-value <0.001). Conclusions: In this pragmatic RCT of real-time predictive CDS to guide suicide risk assessment, Interruptive CDS led to higher numbers of decisions to screen and documented suicide risk assessments. Well-powered large-scale trials randomizing this type of CDS compared to standard of care are indicated to measure effectiveness in reducing suicidal self-harm. ClinicalTrials.gov Identifier: NCT05312437.
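As a reader's check, the unadjusted odds ratio can be recovered from the 2x2 counts this abstract reports (121/289 screened in the Interruptive arm vs 12/307 in the Non-Interruptive arm). Note the trial's reported OR of 17.7 adjusts for provider cluster effects, so the crude value computed here only happens to agree to one decimal place:

```python
# Sketch: crude (unadjusted) odds ratio from the 2x2 counts reported
# in this abstract. The published estimate adjusts for provider
# clustering, so this is an approximation for intuition only.

def odds_ratio(events_a, total_a, events_b, total_b):
    """Crude odds ratio comparing arm A to arm B."""
    odds_a = events_a / (total_a - events_a)  # 121 / 168
    odds_b = events_b / (total_b - events_b)  # 12 / 295
    return odds_a / odds_b

print(round(odds_ratio(121, 289, 12, 307), 1))  # prints 17.7
```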

2.
JAMA Netw Open ; 6(11): e2342750, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37938841

ABSTRACT

Importance: Suicide remains an ongoing concern in the US military. Statistical models have not been broadly disseminated for US Navy service members. Objective: To externally validate and update a statistical suicide risk model initially developed in a civilian setting with an emphasis on primary care. Design, Setting, and Participants: This retrospective cohort study used data collected from 2007 through 2017 among active-duty US Navy service members. The external civilian model was applied to every visit at Naval Medical Center Portsmouth (NMCP), its NMCP Naval Branch Health Clinics (NBHCs), and TRICARE Prime Clinics (TPCs) that fall within the NMCP area. The model was retrained and recalibrated using visits to NBHCs and TPCs and updated using Department of Defense (DoD)-specific billing codes and demographic characteristics, including expanded race and ethnicity categories. Domain and temporal analyses were performed with bootstrap validation. Data analysis was performed from September 2020 to December 2022. Exposure: Visit to US NMCP. Main Outcomes and Measures: Recorded suicidal behavior on the day of or within 30 days of a visit. Performance was assessed using area under the receiver operating characteristic curve (AUROC), area under the precision recall curve (AUPRC), Brier score, and Spiegelhalter z-test statistic. Results: Of the 260 583 service members, 6529 (2.5%) had a recorded suicidal behavior; 206 412 (79.2%) were male; 104 835 (40.2%) were aged 20 to 24 years; and 9458 (3.6%) were Asian, 56 715 (21.8%) were Black or African American, and 158 277 (60.7%) were White. Applying the civilian-trained model resulted in an AUROC of 0.77 (95% CI, 0.74-0.79) and an AUPRC of 0.004 (95% CI, 0.003-0.005) at NBHCs with poor calibration (Spiegelhalter P < .001). Retraining the algorithm improved AUROC to 0.92 (95% CI, 0.91-0.93) and AUPRC to 0.66 (95% CI, 0.63-0.68).
Number needed to screen in the top risk tiers was 366 for the external model and 200 for the retrained model; the lower number indicates better performance. Domain validation showed AUROC of 0.90 (95% CI, 0.90-0.91) and AUPRC of 0.01 (95% CI, 0.01-0.01), and temporal validation showed AUROC of 0.75 (95% CI, 0.72-0.78) and AUPRC of 0.003 (95% CI, 0.003-0.005). Conclusions and Relevance: In this cohort study of active-duty Navy service members, a civilian suicide attempt risk model was externally validated. Retraining and updating with DoD-specific variables improved performance. Domain and temporal validation results were similar to external validation, suggesting that implementing an external model in US Navy primary care clinics may bypass the need for costly internal development and expedite the automation of suicide prevention in these clinics.
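The "number needed to screen" (NNS) used in this abstract is the count of visits flagged in a top risk tier divided by the number of true events among them; lower is better. A minimal sketch, with synthetic data for illustration (the tier cutoff and data here are assumptions, not the study's):

```python
import numpy as np

# Sketch: number needed to screen (NNS) within a top risk tier.
# NNS = flagged visits / true events among them; lower is better.

def number_needed_to_screen(y_true, y_score, top_fraction=0.01):
    """Flag the top_fraction highest-risk visits; return flagged/events."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    k = max(1, int(len(y_score) * top_fraction))
    top_idx = np.argsort(y_score)[::-1][:k]   # indices of highest scores
    events = y_true[top_idx].sum()
    return np.inf if events == 0 else k / events

# Illustrative example: flag the top 20% of 10 visits (k = 2);
# one of the two flagged visits is a true event, so NNS = 2.
y = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
s = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
print(number_needed_to_screen(y, s, top_fraction=0.2))  # prints 2.0
```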


Subject(s)
Models, Statistical , Suicide, Attempted , Humans , Male , Female , Cohort Studies , Retrospective Studies , Primary Health Care
4.
JAMIA Open ; 6(2): ooad028, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37152469

ABSTRACT

Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of using AI-based tools to improve health, healthcare organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.

5.
Popul Health Manag ; 26(3): 157-167, 2023 06.
Article in English | MEDLINE | ID: mdl-37092962

ABSTRACT

Health outcomes are markedly influenced by health-related social needs (HRSN) such as food insecurity and housing instability. Under new Joint Commission requirements, hospitals have recently increased attention to HRSN to reduce health disparities. To evaluate prevailing attitudes and guide hospital efforts, the authors conducted a systematic review to describe patients' and health care providers' perceptions related to screening for and addressing patients' HRSN in US hospitals. Articles were identified through PubMed and by expert recommendations, and synthesized by relevance of findings and basic study characteristics. The review included 22 articles, which showed that most health care providers believed that unmet social needs impact health and that screening for HRSN should be a standard part of hospital care. Notable differences existed between perceived importance of HRSN and actual screening rates, however. Patients reported high receptiveness to screening in hospital encounters, but cautioned to avoid stigmatization and protect privacy when screening. Limited knowledge of resources available, lack of time, and lack of actual resources were the most frequently reported barriers to screening for HRSN. Hospital efforts to screen and address HRSN will likely be facilitated by stakeholders' positive perceptions, but common barriers to screening and referral will need to be addressed to effectively scale up efforts and impact health disparities.


Subject(s)
Health Personnel , Hospitals , Humans , Attitude of Health Personnel , Mass Screening
6.
J Med Internet Res ; 25: e43251, 2023 03 24.
Article in English | MEDLINE | ID: mdl-36961506

ABSTRACT

The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but it can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.


Subject(s)
Artificial Intelligence , Machine Learning , Humans , Healthcare Disparities , Bias
7.
Acad Med ; 98(3): 348-356, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36731054

ABSTRACT

PURPOSE: The expanded use of clinical tools that incorporate artificial intelligence (AI) methods has generated calls for specific competencies for effective and ethical use. This qualitative study used expert interviews to define AI-related clinical competencies for health care professionals. METHOD: In 2021, a multidisciplinary team interviewed 15 experts in the use of AI-based tools in health care settings about the clinical competencies health care professionals need to work effectively with such tools. Transcripts of the semistructured interviews were coded and thematically analyzed. Draft competency statements were developed and provided to the experts for feedback. The competencies were finalized using a consensus process across the research team. RESULTS: Six competency domain statements and 25 subcompetencies were formulated from the thematic analysis. The competency domain statements are: (1) basic knowledge of AI: explain what AI is and describe its health care applications; (2) social and ethical implications of AI: explain how social, economic, and political systems influence AI-based tools and how these relationships impact justice, equity, and ethics; (3) AI-enhanced clinical encounters: carry out AI-enhanced clinical encounters that integrate diverse sources of information in creating patient-centered care plans; (4) evidence-based evaluation of AI-based tools: evaluate the quality, accuracy, safety, contextual appropriateness, and biases of AI-based tools and their underlying data sets in providing care to patients and populations; (5) workflow analysis for AI-based tools: analyze and adapt to changes in teams, roles, responsibilities, and workflows resulting from implementation of AI-based tools; and (6) practice-based learning and improvement regarding AI-based tools: participate in continuing professional development and practice-based improvement activities related to use of AI tools in health care. 
CONCLUSIONS: The 6 clinical competencies identified can be used to guide future teaching and learning programs to maximize the potential benefits of AI-based tools and diminish potential harms.


Subject(s)
Artificial Intelligence , Learning , Humans , Clinical Competence , Delivery of Health Care , Health Personnel
9.
JMIR AI ; 2: e52888, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38875540

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and machine learning (ML) technology design and development continue to advance rapidly, despite major limitations in their current form as a practice and discipline to address all sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and build a more diverse AI and ML design and development workforce engaged in health research. OBJECTIVE: AI and ML have the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to the development of deliverables that will help put ethics and fairness at the forefront of AI and ML applications to build equity in biomedical research, education, and health care. METHODS: The AIM-AHEAD EEWG was created in 2021 with 3 cochairs and 51 members in year 1 and 2 cochairs and ~40 members in year 2. Members in both years included AIM-AHEAD principal investigators, coinvestigators, leadership fellows, and research fellows. The EEWG used a modified Delphi approach using polling, ranking, and other exercises to facilitate discussions around tangible steps, key terms, and definitions needed to ensure that ethics and fairness are at the forefront of AI and ML applications to build equity in biomedical research, education, and health care. RESULTS: The EEWG developed a set of ethics and equity principles, a glossary, and an interview guide. The ethics and equity principles comprise 5 core principles, each with subparts, which articulate best practices for working with stakeholders from historically and presently underrepresented communities.
The glossary contains 12 terms and definitions, with particular emphasis on optimal development, refinement, and implementation of AI and ML in health equity research. To accompany the glossary, the EEWG developed a concept relationship diagram that describes the logical flow of and relationship between the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary. CONCLUSIONS: Ongoing engagement is needed around our principles and glossary to identify and predict potential limitations in their use in AI and ML research settings, especially for institutions with limited resources. This requires time, careful consideration, and honest discussions around what makes an engagement incentive meaningful enough to support and sustain full engagement. Slowing down to meet historically and presently underresourced institutions and communities where they are, and where they are capable of engaging and competing, raises the potential to achieve the needed diversity, ethics, and equity in AI and ML implementation in health research.

10.
JMIR Med Inform ; 10(11): e37478, 2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36318697

ABSTRACT

BACKGROUND: The use of artificial intelligence (AI)-based tools in the care of individual patients and patient populations is rapidly expanding. OBJECTIVE: The aim of this paper is to systematically identify research on provider competencies needed for the use of AI in clinical settings. METHODS: A scoping review was conducted to identify articles published between January 1, 2009, and May 1, 2020, from MEDLINE, CINAHL, and the Cochrane Library databases, using search queries for terms related to health care professionals (eg, medical, nursing, and pharmacy) and their professional development in all phases of clinical education, AI-based tools in all settings of clinical practice, and professional education domains of competencies and performance. Searches were limited to English-language studies on humans with abstracts and settings in the United States. RESULTS: The searches identified 3476 records, of which 4 met the inclusion criteria. These studies described the use of AI in clinical practice and measured at least one aspect of clinician competence. While many studies measured the performance of the AI-based tool, only 4 measured clinician performance in terms of the knowledge, skills, or attitudes needed to understand and effectively use the new tools being tested. These 4 articles primarily focused on the ability of AI to enhance patient care and clinical decision-making by improving information flow and display, specifically for physicians. CONCLUSIONS: While many research studies were identified that investigate the potential effectiveness of using AI technologies in health care, very few address specific competencies that are needed by clinicians to use them effectively. This highlights a critical gap.

13.
J Am Med Inform Assoc ; 29(1): 207-212, 2021 12 28.
Article in English | MEDLINE | ID: mdl-34725693

ABSTRACT

Use of artificial intelligence in healthcare, such as machine learning-based predictive algorithms, holds promise for advancing outcomes, but few systems are used in routine clinical practice. Trust has been cited as an important challenge to meaningful use of artificial intelligence in clinical practice. Artificial intelligence systems often involve automating cognitively challenging tasks. Therefore, previous literature on trust in automation may hold important lessons for artificial intelligence applications in healthcare. In this perspective, we argue that informatics should take lessons from literature on trust in automation such that the goal should be to foster appropriate trust in artificial intelligence based on the purpose of the tool, its process for making recommendations, and its performance in the given context. We adapt a conceptual model to support this argument and present recommendations for future work.


Subject(s)
Artificial Intelligence , Trust , Algorithms , Automation , Machine Learning
14.
JAMIA Open ; 4(4): ooab092, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34805776

ABSTRACT

OBJECTIVE: Given widespread excitement around predictive analytics and the proliferation of machine learning algorithms that predict outcomes, a key next step is understanding how this information is, or should be, communicated with patients. MATERIALS AND METHODS: We conducted a scoping review informed by PRISMA-ScR guidelines to identify current knowledge and gaps in this domain. RESULTS: Ten studies met inclusion criteria for full text review. The following topics were represented in the studies, some of which involved more than 1 topic: disease prevention (N = 5/10, 50%), treatment decisions (N = 5/10, 50%), medication harms reduction (N = 1/10, 10%), and presentation of cardiovascular risk information (N = 5/10, 50%). A single study included 6- and 12-month clinical outcome metrics. DISCUSSION: As predictive models are increasingly published, marketed by industry, and implemented, this paucity of relevant research poses important gaps. Published studies identified the importance of (1) identifying the most effective source of information for patient communications; (2) contextualizing risk information and associated design elements based on users' needs and problem areas; and (3) understanding potential impacts on risk factor modification and behavior change dependent on risk presentation. CONCLUSION: An opportunity remains for researchers and practitioners to share strategies for effective selection of predictive algorithms for clinical practice, approaches for educating clinicians and patients in effectively using predictive data, and new approaches for framing patient-provider communication in the era of artificial intelligence.

15.
Appl Clin Inform ; 12(5): 969-978, 2021 10.
Article in English | MEDLINE | ID: mdl-34670292

ABSTRACT

OBJECTIVE: To develop and evaluate an electronic tool that collects interval history and incorporates it into a provider summary note. METHODS: A parent-facing online before-visit questionnaire (BVQ) collected information from parents and caregivers of pediatric diabetes patients prior to a clinic encounter. This information was related to interval history and perceived self-management barriers. The BVQ generated a summary note that providers could paste in their own documentation. Parents also completed postvisit experience questionnaires. We assessed the BVQ's perceived usefulness to parents and providers and compared provider documentation content and length pre- and post-BVQ rollout. We interviewed providers regarding their experiences with the system-generated note. RESULTS: Seventy-three parents of diabetic children were recruited and completed the BVQ. A total of 79% of parents stated that the BVQ helped with visit preparation and 80% said it improved perceived quality of visits. All 16 participating providers reviewed BVQs prior to patient encounters and 100% considered the summary beneficial. Most providers (81%) desired summaries less than 1 week old. A total of 69% of providers preferred the prose version of the summary; however, 75% also viewed the bulleted version as preferable for provider review. Analysis of provider notes revealed that BVQs increased provider documentation of patients' adherence and barriers. We observed a 50% reduction in typing by providers to document interval histories. Providers not using summaries typed an average of 137 words (standard deviation [SD]: 74) to document interval history compared with 68 words (SD: 47) typed with BVQ use. DISCUSSION: Providers and parents of children with diabetes appreciated the use of previsit, parent-completed BVQs that automatically produced provider documentation. Despite the BVQ redistributing work from providers to parents, its use was acceptable to both groups.
CONCLUSION: Parent-completed questionnaires on the patient's behalf that generate provider documentation encourage communication between parents and providers regarding disease management and reduce provider workload.


Subject(s)
Diabetes Mellitus , Documentation , Child , Communication , Humans , Parents , Surveys and Questionnaires
16.
J Am Med Inform Assoc ; 28(9): 1858-1865, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34142141

ABSTRACT

OBJECTIVE: The goals of this study are to describe the value and impact of Project HealthDesign (PHD), a program of the Robert Wood Johnson Foundation that applied design thinking to personal health records, and to explore the applicability of the PHD model to another challenging translational informatics problem: the integration of AI into the healthcare system. MATERIALS AND METHODS: We assessed PHD's impact and value in 2 ways. First, we analyzed publication impact by calculating a PHD h-index and characterizing the professional domains of citing journals. Next, we surveyed and interviewed PHD grantees, expert consultants, and codirectors to assess the program's components and the potential future application of design thinking to artificial intelligence (AI) integration into healthcare. RESULTS: There was a total of 1171 unique citations to PHD-funded work (collective h-index of 25). Studies citing PHD span medical, legal, and computational journals. Participants stated that this project transformed their thinking, altered their career trajectory, and resulted in technology transfer into the commercial sector. Participants felt, in general, that the approach would be valuable in solving contemporary challenges integrating AI in healthcare including complex social questions, integrating knowledge from multiple domains, implementation, and governance. CONCLUSION: Design thinking is a systematic approach to problem-solving characterized by cooperation and collaboration. PHD generated significant impacts as measured by citations, reach, and overall effect on participants. PHD's design thinking methods are potentially useful to other work on cyber-physical systems, such as the use of AI in healthcare, to propose structural or policy-related changes that may affect adoption, value, and improvement of the care delivery system.


Subject(s)
Artificial Intelligence , Health Records, Personal , Delivery of Health Care , Humans , Informatics
17.
J Am Med Inform Assoc ; 28(7): 1543-1547, 2021 07 14.
Article in English | MEDLINE | ID: mdl-33893511

ABSTRACT

OBJECTIVE: Successful technological implementations frequently involve individuals who serve as mediators between end users, management, and technology developers. The goal for this project was to evaluate the structure and activities of such mediators in a large-scale electronic health record implementation. MATERIALS AND METHODS: Field notes from observations taken during implementation beginning in November 2017 were analyzed qualitatively using a thematic analysis framework to examine the relationship between specific types of mediators and the type and level of support to end users. RESULTS: We found that support personnel possessing both contextual knowledge of the institution's workflow and training in the new technology were the most successful in mediation of adoption and use. Those that lacked context of either technology or institutional workflow often displayed barriers in communication, trust, and active problem solving. CONCLUSIONS: These findings suggest that institutional investment in technology training and explicit programs to foster skills in mediation, including roles for professionals with career development opportunities, prior to implementation can be beneficial in easing the pain of system transition.


Subject(s)
Medical Informatics , Electronic Health Records , Humans , Workflow
18.
JAMA Netw Open ; 4(3): e211428, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33710291

ABSTRACT

Importance: Numerous prognostic models of suicide risk have been published, but few have been implemented outside of integrated managed care systems. Objective: To evaluate performance of a suicide attempt risk prediction model implemented in a vendor-supplied electronic health record to predict subsequent (1) suicidal ideation and (2) suicide attempt. Design, Setting, and Participants: This observational cohort study evaluated implementation of a suicide attempt prediction model in live clinical systems without alerting. The cohort comprised patients seen for any reason in adult inpatient, emergency department, and ambulatory surgery settings at an academic medical center in the mid-South from June 2019 to April 2020. Main Outcomes and Measures: Primary measures assessed external, prospective, and concurrent validity. Manual medical record validation of coded suicide attempts confirmed incident behaviors with intent to die. Subgroup analyses were performed based on demographic characteristics, relevant clinical context/setting, and presence or absence of universal screening. Performance was evaluated using discrimination (number needed to screen, C statistics, positive/negative predictive values) and calibration (Spiegelhalter z statistic). Recalibration was performed with logistic calibration. Results: The system generated 115 905 predictions for 77 973 patients (42 490 [54%] men, 35 404 [45%] women, 60 586 [78%] White, 12 620 [16%] Black). Numbers needed to screen in highest risk quantiles were 23 and 271 for suicidal ideation and attempt, respectively. Performance was maintained across demographic subgroups. Numbers needed to screen for suicide attempt by sex were 256 for men and 323 for women; and by race: 373, 176, and 407 for White, Black, and non-White/non-Black patients, respectively. 
Model C statistics were, across the health system: 0.836 (95% CI, 0.836-0.837); adult hospital: 0.77 (95% CI, 0.77-0.772); emergency department: 0.778 (95% CI, 0.777-0.778); psychiatry inpatient settings: 0.634 (95% CI, 0.633-0.636). Predictions were initially miscalibrated (Spiegelhalter z = -3.1; P = .001) with improvement after recalibration (Spiegelhalter z = 1.1; P = .26). Conclusions and Relevance: In this study, this real-time predictive model of suicide attempt risk showed reasonable numbers needed to screen in nonpsychiatric specialty settings in a large clinical system. Assuming that research-validated models will translate to practice without this type of analysis risks inaccuracy in clinical practice, misclassification of risk, wasted effort, and missed opportunity to correct and prevent such problems. The next step is careful pairing with low-cost, low-harm preventive strategies in a pragmatic trial of effectiveness in preventing future suicidality.
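The Spiegelhalter z-statistic used in this abstract (and in the Navy validation study above) tests calibration: under perfect calibration z is approximately standard normal, so large |z| signals miscalibration. A minimal sketch of the textbook formula, not the authors' implementation:

```python
import numpy as np

# Sketch: Spiegelhalter's z-statistic for calibration of probability
# predictions p against binary outcomes y. This is the standard
# formula; test data below are synthetic for illustration.

def spiegelhalter_z(y_true, p_pred):
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(p_pred, dtype=float)
    num = np.sum((y - p) * (1.0 - 2.0 * p))
    den = np.sqrt(np.sum((1.0 - 2.0 * p) ** 2 * p * (1.0 - p)))
    return num / den

# Example: well-discriminating but slightly over/under-shot predictions.
z = spiegelhalter_z([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
print(round(z, 4))  # prints -0.8333
```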


Subject(s)
Electronic Health Records , Models, Statistical , Risk Assessment/methods , Suicidal Ideation , Suicide, Attempted/statistics & numerical data , Adult , Cohort Studies , Computer Systems , Female , Humans , Male , Middle Aged , Predictive Value of Tests
19.
Appl Clin Inform ; 12(1): 164-169, 2021 01.
Article in English | MEDLINE | ID: mdl-33657635

ABSTRACT

BACKGROUND: The data visualization literature asserts that the details of the optimal data display must be tailored to the specific task, the background of the user, and the characteristics of the data. The general organizing principle of a concept-oriented display is known to be useful for many tasks and data types. OBJECTIVES: In this project, we used general principles of data visualization and a co-design process to produce a clinical display tailored to a specific cognitive task, chosen from the anesthesia domain, but with clear generalizability to other clinical tasks. To support the work of the anesthesia-in-charge (AIC), our task was, for a given day, to depict the acuity level and complexity of each patient among those scheduled for surgery the following day. The AIC uses this information to optimally allocate anesthesia staff and providers across operating rooms. METHODS: We used a co-design process to collaborate with participants who work in the AIC role. We conducted two in-depth interviews with AICs and engaged them in subsequent input on iterative design solutions. RESULTS: Through a co-design process, we found (1) the need to carefully match the level of detail in the display to the level required by the clinical task, (2) the impedance caused by irrelevant information on the screen such as icons relevant only to other tasks, and (3) the desire for a specific but optional trajectory of increasingly detailed textual summaries. CONCLUSION: This study reports a real-world clinical informatics development project that engaged users as co-designers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, and then a trajectory of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.


Subject(s)
Data Display , Medical Informatics , Delivery of Health Care , Humans , Operating Rooms , Perioperative Care
20.
J Am Med Inform Assoc ; 28(6): 1168-1177, 2021 06 12.
Article in English | MEDLINE | ID: mdl-33576432

ABSTRACT

OBJECTIVE: The characteristics of clinician activities while interacting with electronic health record (EHR) systems can influence the time spent in EHRs and workload. This study aims to characterize EHR activities as tasks and define novel, data-driven metrics. MATERIALS AND METHODS: We leveraged unsupervised learning approaches to learn tasks from sequences of events in EHR audit logs. We developed metrics characterizing the prevalence of unique events and event repetition and applied them to categorize tasks into 4 complexity profiles. Between these profiles, Mann-Whitney U tests were applied to measure the differences in performance time, event type, and clinician prevalence, or the number of unique clinicians who were observed performing these tasks. In addition, we applied process mining frameworks paired with clinical annotations to support the validity of a sample of our identified tasks. We applied our approaches to learn tasks performed by nurses in the Vanderbilt University Medical Center neonatal intensive care unit. RESULTS: We examined EHR audit logs generated by 33 neonatal intensive care unit nurses resulting in 57 234 sessions and 81 tasks. Our results indicated significant differences in performance time for each observed task complexity profile. There were no significant differences in clinician prevalence or in the frequency of viewing and modifying event types between tasks of different complexities. We presented a sample of expert-reviewed, annotated task workflows supporting the interpretation of their clinical meaningfulness. CONCLUSIONS: The use of the audit log provides an opportunity to assist hospitals in further investigating clinician activities to optimize EHR workflows.
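The unique-event and repetition metrics this abstract describes can be illustrated with a minimal sketch; the event names and the exact metric definitions here are illustrative assumptions, not the paper's:

```python
from collections import Counter

# Sketch: simple audit-log task metrics of the kind described in this
# abstract -- the share of unique event types within a task's event
# sequence, and its complement, the share attributable to repetition.
# Event names and definitions are illustrative, not the paper's.

def task_metrics(events):
    """Return (uniqueness, repetition) for a sequence of audit-log events."""
    counts = Counter(events)
    uniqueness = len(counts) / len(events)  # 1.0 means no event repeated
    repetition = 1.0 - uniqueness           # share of events that are repeats
    return uniqueness, repetition

# Example: "open" occurs twice in a 4-event sequence.
print(task_metrics(["open", "review", "open", "sign"]))  # prints (0.75, 0.25)
```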


Subject(s)
Electronic Health Records , Unsupervised Machine Learning , Humans , Infant, Newborn , Intensive Care Units, Neonatal , Workflow , Workload