1.
Front Artif Intell; 6: 1241290, 2023.
Article in English | MEDLINE | ID: mdl-37854078

ABSTRACT

Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between high-trust and low-trust user groups revealed significant differences in both open-ended and closed-ended answers. While high-trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low-trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and for the trust calibration of users who differ in their initial levels of trust.

2.
Front Psychol; 13: 855091, 2022.
Article in English | MEDLINE | ID: mdl-35774945

ABSTRACT

Artificial Intelligence (AI) is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users' three Basic Psychological Needs (BPNs), namely (i) autonomy, (ii) competence, and (iii) relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention to Use (ITU) an AI assistant for personal banking. In a 2×2 factorial online experiment, 282 participants (154 males, 126 females, two non-binary participants) watched a video of an AI finance coach with a female or male synthetic voice that exhibited either high or low agency (i.e., capacity for self-control). In combination, these factors resulted either in AI assistants conforming to traditional gender stereotypes (e.g., low-agency female) or in non-conforming conditions (e.g., high-agency female). Although the experimental manipulations had no significant influence on participants' relatedness and competence satisfaction, a strong effect on autonomy satisfaction was found. As further analyses revealed, this effect was attributable only to male participants, who felt their autonomy need significantly more satisfied by the low-agency female assistant, consistent with stereotypical images of women, than by the high-agency female assistant. A significant indirect effects model showed that the greater autonomy satisfaction that men, unlike women, experienced from the low-agency female assistant led to higher ITU. The findings are discussed in terms of their practical relevance and the risk of reproducing traditional gender stereotypes through technology design.

3.
Front Psychol; 13: 787499, 2022.
Article in English | MEDLINE | ID: mdl-35645911

ABSTRACT

The growing popularity of speech interfaces goes hand in hand with the creation of synthetic voices that sound ever more human. Previous research has been inconclusive about whether anthropomorphic design features of machines are more likely to be associated with positive user responses or, conversely, with uncanny experiences. To avoid detrimental effects of synthetic voice design, it is therefore crucial to explore what level of human realism human interactors prefer and whether their evaluations may vary across different domains of application. In a randomized laboratory experiment, 165 participants listened to one of five female-sounding robot voices, each with a different degree of human realism. We assessed how much participants anthropomorphized the voice (by subjective human-likeness ratings, a name-giving task and an imagination task), how pleasant and how eerie they found it, and to what extent they would accept its use in various domains. Additionally, participants completed Big Five personality measures and a tolerance of ambiguity scale. Our results indicate a positive relationship between human-likeness and user acceptance, with the most realistic sounding voice scoring highest in pleasantness and lowest in eeriness. Participants were also more likely to assign real human names to the voice (e.g., "Julia" instead of "T380") if it sounded more realistic. In terms of application context, participants overall indicated lower acceptance of the use of speech interfaces in social domains (care, companionship) than in others (e.g., information & navigation), though the most human-like voice was rated significantly more acceptable in social applications than the remaining four. While most personality factors did not prove influential, openness to experience was found to moderate the relationship between voice type and user acceptance such that individuals with higher openness scores rated the most human-like voice even more positively. Study results are discussed in the light of the presented theory and in relation to open research questions in the field of synthetic voice design.

4.
Front Robot AI; 9: 838116, 2022.
Article in English | MEDLINE | ID: mdl-35360497

ABSTRACT

There is a confidence crisis in many scientific disciplines, in particular disciplines researching human behavior, as many effects of original experiments have not been replicated successfully in large-scale replication studies. While human-robot interaction (HRI) is an interdisciplinary research field, the study of human behavior, cognition, and emotion also plays a vital part in HRI. Are HRI user studies facing the same problems as other fields, and if so, what can be done to overcome them? In this article, we first give a short overview of the replicability crisis in behavioral sciences and its causes. In a second step, we estimate the replicability of HRI user studies mainly 1) by structural comparison of HRI research processes and practices with those of other disciplines with replicability issues, 2) by systematically reviewing meta-analyses of HRI user studies to identify parameters that are known to affect replicability, and 3) by summarizing first replication studies in HRI as direct evidence. Our findings suggest that HRI user studies often exhibit the same problems that caused the replicability crisis in many behavioral sciences, such as small sample sizes, lack of theory, or missing information in reported data. In order to improve the stability of future HRI research, we propose some statistical, methodological, and social reforms. This article aims to provide a basis for further discussion and a potential outline for improvements in the field.

6.
Front Psychol; 12: 633178, 2021.
Article in English | MEDLINE | ID: mdl-33935883

ABSTRACT

Humanoid robots (i.e., robots with a human-like body) are projected to be mass marketed in the future in several fields of application. Today, however, user evaluations of humanoid robots are often based on mediated depictions rather than actual observations or interactions with a robot, which holds true not least for scientific user studies. People can be confronted with robots in various modes of presentation, among them (1) 2D videos, (2) 3D (i.e., stereoscopic) videos, (3) immersive Virtual Reality (VR), or (4) live on site. A systematic investigation into how such different presentation modes influence user perceptions of a robot is still lacking. Thus, the current study systematically compares the effects of different presentation modes with varying immersive potential on user evaluations of a humanoid service robot. Participants (N = 120) observed an interaction between a humanoid service robot and an actor either on 2D or 3D video, via a virtual reality headset (VR), or live. We found support for the expected effect of the presentation mode on perceived immediacy. Effects regarding the degree of human likeness that was attributed to the robot were mixed. The presentation mode had no influence on evaluations in terms of eeriness, likability, and purchase intentions. Implications for empirical research on humanoid robots and for practice are discussed.

7.
Wearable Technol; 2: e10, 2021.
Article in English | MEDLINE | ID: mdl-38486624

ABSTRACT

Objective: This field study aimed to explore the effects of exoskeleton use on task-specific self-efficacy beliefs of logistics workers and to relate these effects to usefulness perceptions and technology acceptance.

Background: A growing number of industrial companies have shown interest in having employees wear exoskeletons to support their physical health. However, psychological consequences of exoskeleton use and mechanisms associated with workers' acceptance or rejection of exoskeletons are not yet sufficiently understood.

Methods: A total of 31 logistics workers of a vehicle manufacturing company reported on their work-related self-efficacy, that is, how capable they felt of performing tasks related to their job well, before partaking in half-hour trials of a passive lift-assistive exoskeleton (Laevo V2.5) during their normal work. Afterward, they completed a questionnaire on their exoskeleton-supported self-efficacy and indicated how useful they found the exoskeleton, how much physical relief they felt from wearing it, and how willing they were to continue with its use.

Results: Overall, wearing the exoskeleton did not lead to increased work-specific self-efficacy. However, indications of interaction effects were found between baseline self-efficacy, perceived physical relief, and perceived usefulness, in such a way that workers who experienced the exoskeleton as more strain-relieving or more useful were also more likely to report a post-trial growth in their self-efficacy beliefs. A positive change in self-efficacy, in turn, was associated with a greater willingness to further use the exoskeleton at the workplace.

8.
Front Psychol; 10: 569, 2019.
Article in English | MEDLINE | ID: mdl-30984059

ABSTRACT

Social robots are becoming increasingly prevalent in everyday life, and sex robots are a sub-category of especially high public interest and controversy. Starting from the concept of the otaku, a term from Japanese youth culture that describes secluded persons with a high affinity for fictional manga characters, we examine individual differences behind sex robot appeal (anime and manga fandom, interest in Japanese culture, preference for indoor activities, shyness). In an online experiment, 261 participants read one out of three randomly assigned descriptions of future technologies (sex robot, nursing robot, genetically modified organism) and reported on their overall evaluation, eeriness, and contact/purchase intentions. Higher anime and manga fandom was associated with higher appeal for all three future technologies. For our male subsample, sex robots and GMOs stood out, as shyness yielded a particularly strong relationship to contact/purchase intentions for these new technologies.
