Results 1 - 2 of 2
1.
Health Informatics J ; 30(2): 14604582241262251, 2024.
Article in English | MEDLINE | ID: mdl-38865081

ABSTRACT

OBJECTIVE: Family health history (FHx) is an important tool for assessing a person's risk of specific health conditions. However, the user experience of FHx collection tools is rarely studied. ItRunsInMyFamily.com (ItRuns) was developed to assess FHx and hereditary cancer risk. This study reports a quantitative user experience analysis of ItRuns.
METHODS: We conducted a public health campaign in November 2019 to promote FHx collection using ItRuns. We used software telemetry to quantify abandonment and time spent on ItRuns, identify user behaviors, and locate potential areas of improvement.
RESULTS: Of 11,065 users who started the ItRuns assessment, 4305 (38.91%) reached the final step to receive recommendations about hereditary cancer risk. Abandonment rates were highest during the Introduction (32.82%), Invite Friends (29.03%), and Family Cancer History (12.03%) subflows. The median time to complete the assessment was 636 s. Users spent the most time, by median, on the Proband Cancer History (124.00 s) and Family Cancer History (119.00 s) subflows. Search-list questions took the longest to complete (median 19.50 s), followed by free-text email input (15.00 s).
CONCLUSION: Knowledge of objective user behaviors at this scale, and of the factors affecting user experience, will help enhance the ItRuns workflow and improve future FHx collection.
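
The abandonment and time-on-step figures above are derived from software telemetry. As a rough illustration of how such metrics can be computed, the Python sketch below calculates per-subflow abandonment rates and median time spent from timestamped events. The event schema (user_id, subflow, timestamp), the sample values, and the subflow ordering are assumptions for demonstration only and are not taken from the actual ItRuns telemetry.

# Illustrative sketch, not the ItRuns pipeline: the event schema and sample
# data below are assumptions for demonstration.
import pandas as pd

# One telemetry event per row: a user entering a subflow at a given time.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "subflow": ["Introduction", "Family Cancer History", "Recommendations",
                "Introduction", "Family Cancer History", "Introduction"],
    "timestamp": pd.to_datetime([
        "2019-11-01 10:00", "2019-11-01 10:02", "2019-11-01 10:08",
        "2019-11-01 11:00", "2019-11-01 11:03", "2019-11-01 12:00",
    ]),
})

order = ["Introduction", "Family Cancer History", "Recommendations"]

# Abandonment rate per subflow: users who reached a step but never the next one.
reached = {s: set(events.loc[events.subflow == s, "user_id"]) for s in order}
for prev, nxt in zip(order, order[1:]):
    dropped = len(reached[prev] - reached[nxt])
    print(f"{prev}: {dropped / len(reached[prev]):.1%} abandoned")

# Median seconds spent per subflow: gap until the same user's next event.
events = events.sort_values(["user_id", "timestamp"])
events["secs_spent"] = (
    events.groupby("user_id")["timestamp"].shift(-1) - events["timestamp"]
).dt.total_seconds()
print(events.groupby("subflow")["secs_spent"].median())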


Subject(s)
Medical History Taking , Humans , Medical History Taking/methods , Medical History Taking/statistics & numerical data , Family Health , Female , Male , Telemetry/methods , Software
2.
Front Digit Health ; 4: 954069, 2022.
Article in English | MEDLINE | ID: mdl-36310920

ABSTRACT

Objective: Virtual conversational agents, or chatbots, have emerged as a novel approach to health data collection. However, research on patient perceptions of chatbots compared with traditional online forms is sparse. This study compared and assessed the experience of completing a health assessment using a chatbot vs. an online form.
Methods: A counterbalanced, within-subject experimental design was used with participants recruited via Amazon Mechanical Turk (mTurk). Participants completed a standardized health assessment using a chatbot (i.e., Dokbot) and an online form (i.e., REDCap), each followed by usability and experience questionnaires. To address poor data quality and preserve the integrity of mTurk responses, we employed a thorough data cleaning process informed by previous literature. Quantitative (descriptive and inferential statistics) and qualitative (thematic analysis and complex coding query) approaches were used for analysis.
Results: A total of 391 participants were recruited, 185 of whom were excluded, resulting in a final sample of 206 individuals. Most participants (69.9%) preferred the chatbot over the online form. The average Net Promoter Score was significantly higher for the chatbot (NPS = 24) than for the online form (NPS = 13). System Usability Scale scores were also higher for the chatbot (69.7 vs. 67.7), but this difference was not statistically significant. The chatbot took longer to complete but was perceived as conversational, interactive, and intuitive. The online form received favorable comments for its familiar survey-like interface.
Conclusion: Our findings show that the chatbot provided superior engagement, intuitiveness, and interactivity despite a longer completion time than the online form. Knowledge of patient preferences and barriers will inform the future design and development of chatbots, along with recommendations and best practices for healthcare data collection.
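
For reference, the Net Promoter Score and System Usability Scale values reported above follow the standard scoring formulas (NPS: percentage of promoters minus percentage of detractors on a 0-10 scale; SUS: ten 1-to-5 items rescaled to 0-100). The Python sketch below illustrates those standard formulas only; the sample responses and function names are hypothetical and are not drawn from the study's data or code.

# Illustrative sketch of the standard NPS and SUS formulas; sample values are made up.

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

def sus_score(item_responses):
    """SUS: 10 items on a 1-5 scale; odd items contribute (x - 1),
    even items contribute (5 - x); the sum is scaled by 2.5 to 0-100."""
    total = 0
    for i, x in enumerate(item_responses, start=1):
        total += (x - 1) if i % 2 == 1 else (5 - x)
    return total * 2.5

chatbot_nps_ratings = [10, 9, 8, 7, 6, 10]      # hypothetical 0-10 ratings
print(net_promoter_score(chatbot_nps_ratings))  # 33.3

chatbot_sus_items = [4, 2, 4, 2, 5, 1, 4, 2, 4, 2]  # hypothetical 1-5 item responses
print(sus_score(chatbot_sus_items))                 # 80.0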
