1.
BMC Med Inform Decis Mak ; 22(1): 240, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36100876

ABSTRACT

BACKGROUND: The goal of the study is to assess the downstream effects of who requests personal information from individuals for artificial intelligence (AI)-based healthcare research purposes, be it a pharmaceutical company (as an example of a for-profit organization) or a university hospital (as an example of a not-for-profit organization), as well as the boundary conditions of these effects on individuals' likelihood of releasing personal information about their health. For the latter, the study considers two dimensions: the tendency to self-disclose (which should be high so that AI applications can reach their full potential) and the tendency to falsify (which should be low so that AI applications are based on both valid and reliable data).

METHODS: Across three experimental studies with Amazon Mechanical Turk workers from the U.S. (n = 204, n = 330, and n = 328, respectively), COVID-19 served as the healthcare research context.

RESULTS: University hospitals (vs. pharmaceutical companies) scored higher on altruism and lower on egoism. Individuals were more willing to disclose data if they perceived that the requesting organization acts on altruistic motives (i.e., the motives function as gate openers). Individuals were more likely to protect their data by intending to provide false information when they perceived egoistic motives to be the main driver of the organization requesting their data (i.e., the motives function as a privacy protection tool). Two moderators, message appeal (Study 2) and message endorser credibility (Study 3), influenced the two indirect pathways to the release of personal information.

CONCLUSION: The findings add to Communication Privacy Management Theory and Attribution Theory by suggesting motive-based pathways to the release of correct personal health data. Compared with not-for-profit organizations, for-profit organizations are particularly advised to match their message appeal with the organization's purpose (to provide personal benefit) and to use high-credibility endorsers in order to reduce inherent disadvantages in motive perceptions.


Subject(s)
Artificial Intelligence, COVID-19, Delivery of Health Care, Humans, Pharmaceutical Preparations, Social Perception
2.
BMC Med Inform Decis Mak ; 21(1): 236, 2021 08 06.
Article in English | MEDLINE | ID: mdl-34362359

ABSTRACT

BACKGROUND: Advanced analytics, such as artificial intelligence (AI), are gaining relevance in medicine. However, patients' responses to the involvement of AI in the care process remain largely unclear. The study explores whether individuals are more likely to follow a recommendation when a physician uses AI in the diagnostic process, considering a highly (vs. less) severe disease, compared with when the physician does not use AI or when AI fully replaces the physician.

METHODS: Participants from the USA (n = 452) were randomly assigned to a hypothetical scenario in which they imagined receiving a treatment recommendation after a skin cancer diagnosis (high vs. low severity) from a physician, a physician using AI, or an automated AI tool. They then indicated their intention to follow the recommendation. Regression analyses were used to test the hypotheses. Beta coefficients (β) describe the nature and strength of relationships between predictors and outcome variables; confidence intervals (CI) excluding zero indicate significant mediation effects.

RESULTS: The total effects reveal the inferiority of automated AI (β = .47, p = .001 vs. physician; β = .49, p = .001 vs. physician using AI). Two pathways increase intention to follow the recommendation. When a physician performs the assessment (vs. automated AI), the perception that the physician is real and present (a concept called social presence) is high, which increases intention to follow the recommendation (β = .22, 95% CI [.09; .39]). When AI performs the assessment (vs. physician only), perceived innovativeness of the method is high, which increases intention to follow the recommendation (β = .15, 95% CI [-.28; -.04]). When physicians use AI, social presence does not decrease and perceived innovativeness increases.

CONCLUSION: In a hypothetical scenario using topical therapy and oral medication as treatment recommendations, pairing AI with a physician in medical diagnosis and treatment leads to a higher intention to follow the recommendation than AI on its own. The findings might help develop practice guidelines for cases where the benefits of AI involvement outweigh the risks, such as using AI in pathology and radiology, to enable augmented human intelligence and inform physicians about diagnoses and treatments.
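The mediation logic described in the abstract (an indirect effect is deemed significant when its bootstrapped 95% confidence interval excludes zero) can be sketched as follows. This is a minimal illustration on simulated data; the variable names, effect sizes, and sample are hypothetical and are not the study's actual dataset or analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data (not the study's dataset): a binary condition
# (0 = automated AI, 1 = physician) affects intention to follow the
# recommendation via the mediator social presence.
n = 300
condition = rng.integers(0, 2, n).astype(float)
social_presence = 0.5 * condition + rng.normal(0, 1, n)
intention = 0.4 * social_presence + 0.2 * condition + rng.normal(0, 1, n)

def path_a(cond, med):
    # a-path: slope of mediator ~ condition (simple regression)
    return np.cov(cond, med, bias=True)[0, 1] / np.var(cond)

def path_b(cond, med, out):
    # b-path: slope of outcome ~ mediator, controlling for condition
    X = np.column_stack([np.ones_like(med), med, cond])
    coef, *_ = np.linalg.lstsq(X, out, rcond=None)
    return coef[1]

def indirect_effect(cond, med, out):
    # Indirect (mediated) effect = a * b
    return path_a(cond, med) * path_b(cond, med, out)

# Percentile bootstrap: resample cases, recompute the indirect effect,
# and take the 2.5th/97.5th percentiles as the 95% CI.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(condition[idx], social_presence[idx], intention[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
ie = indirect_effect(condition, social_presence, intention)
print(f"indirect effect = {ie:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
```

Because the simulated mediation is real by construction, the interval should exclude zero, which is the criterion the abstract applies to its reported pathways.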


Subject(s)
Medicine, Physicians, Artificial Intelligence, Humans, Intelligence, Patient Compliance