Results 1 - 10 of 10
1.
Sensors (Basel) ; 24(11)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38894462

ABSTRACT

Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled into two dimensions: symbolic communication (information aimed at achieving a particular goal) and spontaneous communication (displaying the speaker's emotional and motivational state). Thus, to enhance human-robot interaction, the expressions that are used have to convey both dimensions. This paper presents a method for modelling a robot's expressiveness as a combination of these two dimensions, where each of them can be generated independently. This is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot's mood and emotions. To validate the performance of the proposed architecture, the last contribution is a series of experiments that study the effect that adding the spontaneous dimension of communication, and fusing it with the symbolic dimension, has on how people perceive a social robot. Our results show that the modulation strategies improve the users' perception and can convey a recognizable affective state.
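
As a rough illustration of modulating a predefined expression with an affective state, the Python sketch below scales hypothetical expression parameters by valence and arousal. The parameter names and scaling factors are illustrative assumptions, not the architecture described in the paper.

```python
# Minimal sketch: modulating a predefined multimodal expression with a mood.
# All names and the modulation rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Expression:             # symbolic dimension: a predefined expression
    speech_pitch: float       # relative pitch (1.0 = neutral)
    speech_rate: float        # relative speaking rate
    gesture_amplitude: float  # 0..1 scale of body movement

@dataclass
class Mood:                   # spontaneous dimension: the affective state
    valence: float            # -1 (negative) .. 1 (positive)
    arousal: float            # -1 (calm) .. 1 (excited)

def modulate(expr: Expression, mood: Mood) -> Expression:
    """Blend the spontaneous dimension into the symbolic expression."""
    return Expression(
        speech_pitch=expr.speech_pitch * (1 + 0.2 * mood.valence),
        speech_rate=expr.speech_rate * (1 + 0.3 * mood.arousal),
        gesture_amplitude=min(1.0, expr.gesture_amplitude * (1 + 0.5 * mood.arousal)),
    )

greeting = Expression(speech_pitch=1.0, speech_rate=1.0, gesture_amplitude=0.6)
print(modulate(greeting, Mood(valence=0.8, arousal=0.5)))
```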

2.
BMC Psychiatry ; 22(1): 760, 2022 12 05.
Article in English | MEDLINE | ID: mdl-36471336

ABSTRACT

BACKGROUND: Social robots have demonstrated promising outcomes in increasing the social health and well-being of people with dementia and mild cognitive impairment. According to the World Health Organization's Monitoring and assessing digital health interventions framework, usability and feasibility studies are crucial before implementing prototype social robots and proving their efficacy and effectiveness. This protocol paper details the plan for conducting the usability and feasibility study of the MINI robot based on evidence-based recommended methodology. METHODS: In this study, an experimental design and mixed methods of data collection will be applied. Twenty participants aged 65 and over with dementia or mild cognitive impairment will be recruited. Eight sessions of interaction with the robot, together with qualitative and quantitative assessments, will be completed. The research will take place in a laboratory, and ethical approvals have been obtained. This research will be valuable for the development of the MINI robot and its practical deployment in the real world, as well as for the methodological evidence base in the field of social robots. DISCUSSION: The findings of this study will be available for dissemination by the winter of 2022-2023. This study will help improve the evidence-based methodology used to study the feasibility and usability of social robots for people with dementia and mild cognitive impairment, as well as what can be learned to advance such study designs in the future.


Subject(s)
Cognitive Dysfunction , Dementia , Robotics , Humans , Dementia/psychology , Feasibility Studies , Social Interaction , Cognitive Dysfunction/psychology
3.
Sensors (Basel) ; 21(23)2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34884147

ABSTRACT

Travellers use the term waymarking to describe the action of posting signs, or waymarks, along a route. These marks are intended to serve as points of reference during navigation through the environment. In this research, we define waymarking as the ability of a robot to place signs in the environment, or to generate information that facilitates localization and navigation, both for its own use and for other robots. We present an automated environment-signaling system based on human-robot interaction (HRI) and radio frequency identification (RFID) technology. The goal is for the robot to obtain information about the environment through HRI and use this information to carry out the signaling, or waymarking, process. HRI plays a key role in the signaling process, since this type of communication makes it possible to exchange more specific and enriching information. The robot uses common phrases such as "Where am I?" and "Where can I go?", just as humans do when asking other people for information about their surroundings. It is also possible to guide the robot and "show" it the environment so it can carry out the task of writing the signs. The robot uses the information received to create, update, or improve the navigation data stored in the RFID tags. This paper describes the signaling process, how the robot acquires the information for the signs, the writing and updating process, and finally the implementation and integration on a real social robot in a real indoor environment.
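
To illustrate the kind of navigation data a waymark might carry, here is a minimal Python sketch of encoding and decoding a tag payload. The record schema (place name, pose, reachable neighbors) is a hypothetical stand-in; the paper does not specify the actual tag format.

```python
# Minimal sketch of a waymark payload a robot might write to an RFID tag.
# The schema and serialization are hypothetical assumptions.
import json

def encode_waymark(place: str, x: float, y: float, neighbors: list[str]) -> bytes:
    """Serialize navigation data ("Where am I?" / "Where can I go?") for a tag."""
    record = {"place": place, "pose": [x, y], "neighbors": neighbors}
    return json.dumps(record).encode("utf-8")

def decode_waymark(payload: bytes) -> dict:
    """Recover the navigation record from a tag's raw payload."""
    return json.loads(payload.decode("utf-8"))

tag = encode_waymark("kitchen", 3.2, 1.7, ["hallway", "living_room"])
print(decode_waymark(tag))  # answers "Where am I?" and "Where can I go?"
```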


Subject(s)
Robotics , Communication , Humans , Motivation , Social Interaction
4.
Sensors (Basel) ; 20(12)2020 Jun 18.
Article in English | MEDLINE | ID: mdl-32570807

ABSTRACT

Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but should also be easy to create, so that the development of new applications gets simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions according to who has the initiative (the robot or the user) and what their intention is. The two possible intentions are either to ask for information or to give information; because we focus on one-to-one interactions, the initiative can only be taken by the robot or by the user. Communicative Acts can be parametrised and combined hierarchically to fulfil the needs of the robot's applications, and they are equipped with built-in functionalities that take charge of low-level communication tasks, including communication error handling, turn-taking, and user disengagement. This system has been integrated in Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human-robot interactions.
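
The sketch below illustrates the core idea of Communicative Acts as parametrisable atomic units, keyed by initiative and intention and composed hierarchically. The class names and composition API are hypothetical, not the authors' implementation.

```python
# Minimal sketch of Communicative Acts as atomic dialogue units.
from enum import Enum

class Initiative(Enum):
    ROBOT = 1
    USER = 2

class Intention(Enum):
    ASK_INFO = 1
    GIVE_INFO = 2

class CommunicativeAct:
    """Atomic unit: who has the initiative and whether info is asked or given."""
    def __init__(self, initiative: Initiative, intention: Intention, content: str):
        self.initiative, self.intention, self.content = initiative, intention, content

    def run(self) -> None:
        # Built-in low-level handling (error recovery, turn-taking,
        # disengagement detection) would live here in a real system.
        print(f"[{self.initiative.name}/{self.intention.name}] {self.content}")

class Sequence:
    """Hierarchical composition: acts combined to build an application dialogue."""
    def __init__(self, *acts):
        self.acts = acts

    def run(self) -> None:
        for act in self.acts:
            act.run()

greet = Sequence(
    CommunicativeAct(Initiative.ROBOT, Intention.ASK_INFO, "What is your name?"),
    CommunicativeAct(Initiative.ROBOT, Intention.GIVE_INFO, "Nice to meet you!"),
)
greet.run()
```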


Subject(s)
Communication , Robotics , Aged , Emotions , Female , Humans , Male , Social Interaction
5.
Sensors (Basel) ; 18(8)2018 Aug 16.
Article in English | MEDLINE | ID: mdl-30115836

ABSTRACT

Nowadays, many robotic applications require robots to make their own decisions and adapt to different conditions and users. This work presents a biologically inspired decision-making system, based on drives, motivations, wellbeing, and self-learning, that governs the behavior of the robot considering both internal and external circumstances. In this paper we describe the biological foundations that drove the design of the system, as well as how it has been implemented in a real robot. Following a homeostatic approach, the ultimate goal of the robot is to keep its wellbeing as high as possible. To achieve this goal, our decision-making system uses learning mechanisms to assess the best action to execute at any moment. Since the proposed system has been implemented in a real social robot, human-robot interaction is of paramount importance, and the learned behaviors of the robot are oriented towards fostering interaction with the user. The operation of the system is shown in a scenario where the robot Mini plays games with a user. In this context, we have included a robust user detection mechanism tailored for short-distance interactions. After the learning phase, the robot has learned how to lead the user to interact with it in a natural way.
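
The following minimal sketch illustrates a homeostatic decision loop of this kind: drives grow over time, wellbeing falls as they do, and the robot picks the action with the best predicted effect on wellbeing. The drives, actions, and effect values are invented for illustration and stand in for what the real system would learn.

```python
# Minimal sketch of a homeostatic decision loop. The drive/action names and
# the effect table (which the real system learns) are hypothetical.
drives = {"social": 0.0, "rest": 0.0}          # 0 = satisfied, 1 = urgent
effects = {"play_game": {"social": -0.6, "rest": +0.2},
           "idle":      {"social": +0.1, "rest": -0.5}}

def wellbeing() -> float:
    """Wellbeing is highest when all drives are satisfied."""
    return 1.0 - sum(drives.values()) / len(drives)

def step() -> str:
    for d in drives:                            # drives build up over time
        drives[d] = min(1.0, drives[d] + 0.1)

    def predicted(action: str) -> float:       # expected wellbeing afterwards
        return 1.0 - sum(max(0.0, min(1.0, drives[d] + effects[action][d]))
                         for d in drives) / len(drives)

    action = max(effects, key=predicted)        # greedy wellbeing maximization
    for d in drives:
        drives[d] = max(0.0, min(1.0, drives[d] + effects[action][d]))
    return action

for _ in range(3):
    print(step(), f"wellbeing={wellbeing():.2f}")
```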


Subject(s)
Decision Making , Motivation , Robotics/methods , Humans , Learning , Perception
6.
J Healthc Eng ; 2018: 7075290, 2018.
Article in English | MEDLINE | ID: mdl-29713440

ABSTRACT

Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to advance robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, by endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. To that end, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot autonomously performs the different steps of the therapy using multimodal interaction.
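
As a rough sketch of such a therapy loop, the code below stubs out the vision-based mouth-pose detector and has the robot prompt the user until the target pose is produced. The pose labels and the detector are placeholders, not the system's actual components.

```python
# Minimal sketch of an exercise loop; detect_mouth_pose is a placeholder for
# the real computer-vision + machine-learning detector.
import random

POSES = ["neutral", "open", "smile", "kiss"]

def detect_mouth_pose() -> str:
    """Stub: a real detector would classify the user's mouth from camera input."""
    return random.choice(POSES)

def run_exercise(target: str, attempts: int = 10) -> bool:
    """Prompt the user until the target mouth pose is detected."""
    for i in range(attempts):
        pose = detect_mouth_pose()
        if pose == target:
            print(f"attempt {i + 1}: '{pose}' - well done!")
            return True
        print(f"attempt {i + 1}: saw '{pose}', try '{target}' again")
    return False

run_exercise("smile")
```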


Subject(s)
Apraxias/rehabilitation , Machine Learning , Robotics , Social Behavior , Speech Therapy/methods , Speech , Humans , Imaging, Three-Dimensional , Internet , Mouth/physiology , Movement , User-Computer Interface
7.
Sensors (Basel) ; 14(2): 2476-88, 2014 Feb 05.
Article in English | MEDLINE | ID: mdl-24504105

ABSTRACT

The automobile industry is becoming increasingly demanding as far as quality is concerned. Among the wide variety of processes in which this quality must be ensured, those regarding the squeezing of the auto bodywork are especially important, because the quality of the resulting product is tested manually by experts, leading to inaccuracies of all types. In this paper, an algorithm is proposed for the automated evaluation of imperfections in the bodywork sheets after the squeezing process. The algorithm processes the profile signals from a retroreflective image and characterizes each imperfection. It is based on a convergence criterion that follows the line of maximum gradient of the imperfection and returns its geometrical characteristics: maximum gradient, length, width, and area.
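
A minimal sketch of this kind of profile analysis is shown below: it thresholds the gradient of a single 1-D profile signal and reports the maximum gradient, extent, depth, and area of the detected region. The threshold and the synthetic profile are illustrative assumptions; the paper's convergence criterion operates on real retroreflective images.

```python
# Minimal sketch: characterising an imperfection in a 1-D profile signal by its
# maximum gradient, extent, and area. The threshold value is an assumption.
import numpy as np

def characterise(profile: np.ndarray, dx: float = 1.0, thresh: float = 0.05):
    grad = np.gradient(profile, dx)
    mask = np.abs(grad) > thresh                 # samples belonging to the defect
    if not mask.any():
        return None                              # no imperfection found
    idx = np.flatnonzero(mask)
    length = (idx[-1] - idx[0] + 1) * dx         # extent along the profile
    depth = profile[idx].max() - profile[idx].min()
    area = np.trapz(np.abs(profile[idx] - profile[idx[0]]), dx=dx)
    return {"max_gradient": float(np.abs(grad).max()),
            "length": float(length), "depth": float(depth), "area": float(area)}

x = np.linspace(0, 10, 200)
dent = -0.2 * np.exp(-((x - 5) ** 2) / 0.5)      # synthetic imperfection
print(characterise(1.0 + dent, dx=x[1] - x[0]))
```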

8.
Sensors (Basel) ; 13(11): 15549-81, 2013 Nov 14.
Article in English | MEDLINE | ID: mdl-24240598

ABSTRACT

In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice analysis and facial expression analysis. To analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, another system, Gender and Emotion Facial Analysis (GEFA), has also been developed. The latter integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) produce their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is passed to the dialog system through communicative acts. Hence, each communicative act conveys, among other things, the detected emotion of the user to the RDS, so it can adapt its strategy to achieve a greater degree of satisfaction during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by the two information channels (audio and visual) separately.
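
As an illustration of a weighted decision rule over the two channels, the sketch below combines hypothetical per-emotion scores from a voice analyzer and a face analyzer. The weights and score format are assumptions; the paper's final rule was set experimentally.

```python
# Minimal sketch of a decision rule fusing voice (GEVA-like) and face
# (GEFA-like) emotion scores; the weighting is an illustrative assumption.
def fuse(voice: dict[str, float], face: dict[str, float],
         w_voice: float = 0.4, w_face: float = 0.6) -> str:
    """Return the emotion with the highest weighted combined score."""
    emotions = set(voice) | set(face)
    scores = {e: w_voice * voice.get(e, 0.0) + w_face * face.get(e, 0.0)
              for e in emotions}
    return max(scores, key=scores.get)

voice_scores = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
face_scores = {"happy": 0.5, "surprised": 0.4, "neutral": 0.1}
print(fuse(voice_scores, face_scores))  # -> "happy"
```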


Subject(s)
Emotions/physiology , Robotics , Facial Expression , Humans
9.
Sensors (Basel) ; 13(9): 12406-30, 2013 Sep 17.
Article in English | MEDLINE | ID: mdl-24048336

ABSTRACT

The main activity of social robots is to interact with people. To do so, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way to people: by hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red, Green, Blue plus Depth) visual system, which gathers the user's examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot learns the poses the teacher shows it while maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems to acquire new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics.
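
The sketch below captures the flavor of this teaching loop with a nearest-centroid classifier: teach() stands in for a labelled example arriving via ASR together with an RGB-D skeleton, and recognise() classifies a new pose. The feature vectors and labels are illustrative stand-ins for real skeleton data.

```python
# Minimal sketch: learning poses from a few teacher-labelled examples with a
# nearest-centroid classifier. Feature vectors here are toy stand-ins.
import numpy as np

class PoseLearner:
    def __init__(self):
        self.examples: dict[str, list[np.ndarray]] = {}

    def teach(self, label: str, skeleton: np.ndarray) -> None:
        """Called when ASR hears e.g. 'this is arms up' while the user poses."""
        self.examples.setdefault(label, []).append(skeleton)

    def recognise(self, skeleton: np.ndarray) -> str:
        """Classify a new skeleton by distance to each pose's mean example."""
        centroids = {l: np.mean(v, axis=0) for l, v in self.examples.items()}
        return min(centroids, key=lambda l: np.linalg.norm(skeleton - centroids[l]))

learner = PoseLearner()
learner.teach("arms_up", np.array([0.0, 1.0, 1.0]))
learner.teach("arms_down", np.array([0.0, -1.0, -1.0]))
print(learner.recognise(np.array([0.1, 0.9, 1.1])))  # -> "arms_up"
```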


Subject(s)
Artificial Intelligence , Computer-Assisted Instruction/methods , Gestures , Interpersonal Relations , Man-Machine Systems , Robotics/methods , Speech Recognition Software , Communication , Cybernetics/methods , Pattern Recognition, Automated/methods , Systems Integration
10.
Sensors (Basel) ; 12(7): 9913-35, 2012.
Article in English | MEDLINE | ID: mdl-23012577

ABSTRACT

This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requirements for natural interaction, whether human-human or human-robot, is an adequate spatial arrangement of the interlocutors; that is, being oriented toward and situated at the right distance from one another during the conversation, so that the communicative process is satisfactory. Our social robot uses a complete multimodal dialog system that manages the user-robot interaction during the communicative process. One of its main components is the user localization system presented here. To determine the most suitable placement of the robot in relation to the user, a proxemic study of human-robot interaction is required, which is also described in this paper. The study was conducted with two groups of users: children aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented.
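
To illustrate audio-visual fusion followed by proxemic placement, the sketch below averages the bearings estimated by vision and by sound with fixed weights, then computes a goal pose at a fixed personal distance. The weights and the 1.2 m distance are assumptions, not values derived from the paper's proxemic study.

```python
# Minimal sketch: fuse a visual and an auditory bearing to the user, then pick
# a goal pose at an assumed proxemic distance. All constants are illustrative.
import math

def fuse_bearing(visual_deg: float, audio_deg: float,
                 w_visual: float = 0.7, w_audio: float = 0.3) -> float:
    """Weighted fusion of the bearings estimated by vision and by sound."""
    return w_visual * visual_deg + w_audio * audio_deg

def approach_goal(user_range_m: float, bearing_deg: float,
                  keep_distance_m: float = 1.2):
    """Goal in the robot frame: stop keep_distance_m short of the user,
    oriented toward them (the proxemic distance is an assumed value)."""
    d = max(0.0, user_range_m - keep_distance_m)
    th = math.radians(bearing_deg)
    return (d * math.cos(th), d * math.sin(th), bearing_deg)

bearing = fuse_bearing(visual_deg=12.0, audio_deg=20.0)
print(approach_goal(user_range_m=3.0, bearing_deg=bearing))
```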


Subject(s)
Robotics , Algorithms , Humans , Software , Sound , User-Computer Interface