Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-38835626

ABSTRACT

Today's AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on clinical decision making for sepsis, an acute, life-threatening systemic infection that clinicians must diagnose early under high uncertainty. Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. The study begins with a formative study investigating why clinical experts abandon an existing AI-powered sepsis prediction module in their electronic health record (EHR) system. We argue that a human-centered AI system needs to support human experts in the intermediate stages of a medical decision-making process (e.g., generating hypotheses or gathering data), instead of focusing only on the final decision. Therefore, we build SepsisLab based on a state-of-the-art AI algorithm and extend it to predict the future trajectory of sepsis development, visualize the prediction uncertainty, and propose actionable suggestions (i.e., which additional laboratory tests to collect) to reduce that uncertainty. Through a heuristic evaluation of our prototype system with six clinicians, we demonstrate that SepsisLab enables a promising human-AI collaboration paradigm for the future of AI-assisted sepsis diagnosis and other high-stakes medical decision making.
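The actionable-suggestion step described above (recommending which additional laboratory tests would most reduce prediction uncertainty) can be sketched as a simple ranking problem. This is an illustrative sketch only, not the SepsisLab algorithm: the variance figures, test names, and scoring rule below are hypothetical.

```python
# Hypothetical sketch: rank unordered lab tests by how much each is
# expected to reduce the variance of a model's sepsis-risk prediction.
# All numbers and test names are illustrative, not from SepsisLab.

def rank_tests_by_uncertainty_reduction(current_variance, expected_variance_if_ordered):
    """Return candidate tests sorted by expected variance reduction, largest first."""
    gains = {
        test: current_variance - v
        for test, v in expected_variance_if_ordered.items()
    }
    return sorted(gains.items(), key=lambda kv: kv[1], reverse=True)

current = 0.09  # variance of the current risk estimate (illustrative)
candidates = {"lactate": 0.03, "WBC count": 0.06, "creatinine": 0.08}
ranking = rank_tests_by_uncertainty_reduction(current, candidates)
# "lactate" ranks first: ordering it is expected to cut variance the most
```

In a real system the expected post-test variance would come from the predictive model itself (e.g., re-running an ensemble with the candidate value imputed), but the ranking step is the same.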

2.
J Med Internet Res ; 25: e46427, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37405831

ABSTRACT

BACKGROUND: Neurodegenerative diseases (NDDs) are prevalent among older adults worldwide. Early diagnosis of NDD is challenging yet crucial. Gait status has been identified as an indicator of early-stage NDD changes and can play a significant role in diagnosis, treatment, and rehabilitation. Historically, gait assessment has relied on intricate but imprecise scales by trained professionals or required patients to wear additional equipment, causing discomfort. Advancements in artificial intelligence may completely transform this and offer a novel approach to gait evaluation. OBJECTIVE: This study aimed to use cutting-edge machine learning techniques to offer patients a noninvasive, entirely contactless gait assessment and provide health care professionals with precise gait assessment results covering all common gait-related parameters to assist in diagnosis and rehabilitation planning. METHODS: Data collection involved motion data from 41 different participants aged 25 to 85 (mean 57.51, SD 12.93) years captured in motion sequences using the Azure Kinect (Microsoft Corp; a 3D camera with a 30-Hz sampling frequency). Support vector machine (SVM) and bidirectional long short-term memory (Bi-LSTM) classifiers trained using spatiotemporal features extracted from raw data were used to identify gait types in each walking frame. Gait semantics could then be obtained from the frame labels, and all the gait parameters could be calculated accordingly. For optimal generalization performance of the model, the classifiers were trained using a 10-fold cross-validation strategy. The proposed algorithm was also compared with the previous best heuristic method. Qualitative and quantitative feedback from medical staff and patients in actual medical scenarios was extensively collected for usability analysis. RESULTS: The evaluations comprised 3 aspects. 
Regarding the classification results from the 2 classifiers, Bi-LSTM achieved an average precision, recall, and F1-score of 90.54%, 90.41%, and 90.38%, respectively, whereas these metrics were 86.99%, 86.62%, and 86.67%, respectively, for SVM. Moreover, the Bi-LSTM-based method attained 93.2% accuracy in gait segmentation evaluation (tolerance set to 2), whereas the SVM-based method achieved only 77.5% accuracy. For the final gait parameter calculation, the average error rate of the heuristic method, SVM, and Bi-LSTM was 20.91% (SD 24.69%), 5.85% (SD 5.45%), and 3.17% (SD 2.75%), respectively. CONCLUSIONS: This study demonstrated that the Bi-LSTM-based approach can effectively support accurate gait parameter assessment, assisting medical professionals in making early diagnoses and reasonable rehabilitation plans for patients with NDD.
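The post-classification step described in the methods above (per-frame gait labels at 30 Hz, collapsed into gait semantics from which parameters are computed) can be sketched as follows. The phase labels and the choice of parameter are illustrative assumptions, not the study's actual label set or parameter definitions.

```python
# Hypothetical sketch of the step after frame classification: once each
# 30 Hz frame carries a gait-phase label, contiguous runs of labels give
# the gait semantics, and durations follow from the frame rate.

FPS = 30  # Azure Kinect sampling frequency (Hz), per the abstract

def label_runs(frame_labels):
    """Collapse per-frame labels into (label, n_frames) runs."""
    runs = []
    for lab in frame_labels:
        if runs and runs[-1][0] == lab:
            runs[-1][1] += 1
        else:
            runs.append([lab, 1])
    return [(lab, n) for lab, n in runs]

def mean_phase_duration(frame_labels, phase):
    """Mean duration in seconds of all runs of a given gait phase."""
    durations = [n / FPS for lab, n in label_runs(frame_labels) if lab == phase]
    return sum(durations) / len(durations) if durations else 0.0

# Illustrative label stream: two gait cycles of stance/swing frames.
frames = ["stance"] * 18 + ["swing"] * 12 + ["stance"] * 21 + ["swing"] * 9
# swing runs of 12 and 9 frames -> (0.4 s + 0.3 s) / 2 = 0.35 s
```

Other common gait parameters (cadence, stride time, stance/swing ratio) follow the same pattern of run-length statistics over the label stream.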


Subject(s)
Deep Learning , Gait , Neurodegenerative Diseases , Aged , Humans , Artificial Intelligence , Machine Learning , Neurodegenerative Diseases/diagnosis
3.
J Med Syst ; 45(6): 64, 2021 May 04.
Article in English | MEDLINE | ID: mdl-33948743

ABSTRACT

Ongoing research efforts have been examining how to utilize artificial intelligence technology to help healthcare consumers make sense of their clinical data, such as diagnostic radiology reports. How to promote the acceptance of such novel technology is a heated research topic. Recent studies highlight the importance of providing local explanations about AI prediction and model performance to help users determine whether to trust AI's predictions. Despite some efforts, limited empirical research has been conducted to quantitatively measure how AI explanations impact healthcare consumers' perceptions of using patient-facing, AI-powered healthcare systems. The aim of this study is to evaluate the effects of different AI explanations on people's perceptions of an AI-powered healthcare system. In this work, we designed and deployed a large-scale experiment (N = 3,423) on Amazon Mechanical Turk (MTurk) to evaluate the effects of AI explanations on people's perceptions in the context of comprehending radiology reports. We created four groups based on two factors, the extent of explanations for the prediction (High vs. Low Transparency) and the model performance (Good vs. Weak AI Model), and randomly assigned participants to one of the four conditions. Participants were instructed to classify a radiology report as describing a normal or abnormal finding, followed by completing a post-study survey to indicate their perceptions of the AI tool. We found that revealing model performance information can promote people's trust and perceived usefulness of system outputs, while providing local explanations for the rationale of a prediction can promote understandability but not necessarily trust. We also found that when model performance is low, the more information the AI system discloses, the less people trusted the system. Lastly, whether humans agree with the AI's predictions and whether those predictions are correct can also influence the effect of AI explanations. We conclude this paper by discussing implications for designing AI systems that help healthcare consumers interpret diagnostic reports.
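The 2x2 between-subjects design described above (transparency crossed with model performance, random assignment) can be sketched in a few lines. The condition names and seeding are illustrative assumptions, not the authors' actual experiment code.

```python
# Hypothetical sketch of the 2x2 factorial assignment described above:
# each participant is randomized into one of four cells crossing
# explanation transparency with model performance.
import itertools
import random

CONDITIONS = list(itertools.product(
    ["high_transparency", "low_transparency"],
    ["good_model", "weak_model"],
))  # four cells

def assign(participant_ids, seed=0):
    """Randomly assign each participant to one of the four conditions."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

groups = assign(range(3423))  # N = 3,423, matching the abstract
# every participant lands in exactly one of the four cells
```

Per-condition means of the post-survey trust and usefulness ratings would then be compared across these four cells.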


Subject(s)
Artificial Intelligence , Radiology , Delivery of Health Care , Humans , Perception , Radiography
4.
Health Informatics J ; 27(2): 14604582211011215, 2021.
Article in English | MEDLINE | ID: mdl-33913359

ABSTRACT

Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, it is expected that AI technology can aid patients' understanding of radiology imaging data. The aim of this study is to understand patients' perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants had a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI was perceived to be particularly useful in seeking actionable information, confirming the doctor's opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cybersecurity, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people's trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients' concerns. Such systems should also be designed to promote trust and deliver concerning health results in an empathetic manner to optimize the user experience.


Subject(s)
Artificial Intelligence , Radiology , Diagnostic Imaging , Humans , Perception , Technology
5.
J Med Internet Res ; 23(1): e19928, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33404508

ABSTRACT

BACKGROUND: Artificial intelligence (AI)-driven chatbots are increasingly being used in health care, but most chatbots are designed for a specific population and evaluated in controlled settings. There is little research documenting how health consumers (eg, patients and caregivers) use chatbots for self-diagnosis purposes in real-world scenarios. OBJECTIVE: The aim of this research was to understand how health chatbots are used in a real-world context, what issues and barriers exist in their usage, and how the user experience of this novel technology can be improved. METHODS: We employed a data-driven approach to analyze the system log of a widely deployed self-diagnosis chatbot in China. Our data set consisted of 47,684 consultation sessions initiated by 16,519 users over 6 months. The log data included a variety of information, including users' nonidentifiable demographic information, consultation details, diagnostic reports, and user feedback. We conducted both statistical analysis and content analysis on this heterogeneous data set. RESULTS: The chatbot users spanned all age groups, including middle-aged and older adults. Users consulted the chatbot on a wide range of medical conditions, including those that often entail considerable privacy and social stigma issues. Furthermore, we distilled 2 prominent issues in the use of the chatbot: (1) a considerable number of users dropped out in the middle of their consultation sessions, and (2) some users pretended to have health concerns and used the chatbot for nontherapeutic purposes. Finally, we identified a set of user concerns regarding the use of the chatbot, including insufficient actionable information and perceived inaccurate diagnostic suggestions. CONCLUSIONS: Although health chatbots are considered to be convenient tools for enhancing patient-centered care, there are issues and barriers impeding the optimal use of this novel technology. 
Designers and developers should employ user-centered approaches to address these issues and user concerns to achieve the best uptake and utilization. We conclude the paper by discussing several design implications, including making chatbots more informative, easy to use, and trustworthy, as well as improving the onboarding experience to enhance user engagement.
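One of the log metrics discussed above, the share of consultation sessions abandoned mid-way, can be computed directly from session records. The record schema and field names below are illustrative assumptions, not the deployed chatbot's actual log format.

```python
# Hypothetical sketch of a dropout metric over chatbot session logs:
# a session counts as a dropout if it never reached a diagnostic report.
# The schema ("report_issued" flag) is illustrative.

def dropout_rate(sessions):
    """Fraction of sessions that ended without a diagnostic report."""
    if not sessions:
        return 0.0
    dropped = sum(1 for s in sessions if not s.get("report_issued"))
    return dropped / len(sessions)

log = [
    {"user": 1, "report_issued": True},
    {"user": 2, "report_issued": False},  # abandoned mid-consultation
    {"user": 3, "report_issued": True},
    {"user": 4, "report_issued": False},  # abandoned mid-consultation
]
# 2 of 4 sessions dropped out -> rate of 0.5
```

The same pass over the log can bucket sessions by consulted condition or user demographics, which is how the content-analysis findings above (e.g., stigmatized conditions, nontherapeutic use) would be quantified.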


Subject(s)
Artificial Intelligence/standards , Telemedicine/methods , Adult , Female , Humans , Male , Research Design , Social Media