ABSTRACT
Social robotics is a branch of human-robot interaction dedicated to developing systems that allow robots to operate in unstructured environments shared with human beings. Social robots must interact with humans by understanding social signals and responding appropriately to them. Most social robots are still pre-programmed and have little ability to learn and respond with adequate actions during an interaction with humans. More elaborate methods have recently used body movements, gaze direction, and body language, but they generally neglect signals that are vital during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide, autonomously, which behaviors to emit as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers social robots a way to learn advanced models of social cognition, following a self-learning paradigm and using features automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment, abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is to develop a DRL system that promotes natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This is a relevant contribution to the area, since social signals must not only be recognized by the robot but also help it take actions appropriate to the situation at hand. Features extracted from people's faces are used to estimate the human emotional state and improve the robot's perception. The development and validation of the system are carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way.
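The abstract above describes SocialDQN only at a high level. The minimal sketch below (not taken from the paper) illustrates the general idea of a Deep Q-Network whose state fuses visual features with facial-emotion scores; the feature sizes, the behavior set, and the network shape are assumptions made here for illustration, and a target network and the SimDRLSR interface are omitted for brevity.

```python
# Illustrative sketch only: EMOTION_DIM, VISUAL_DIM, and ACTIONS are assumed
# values, not the SocialDQN paper's actual specification.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

EMOTION_DIM = 7        # e.g., one score per basic facial emotion (assumed)
VISUAL_DIM = 128       # assumed size of a visual feature embedding
ACTIONS = ["wait", "look", "wave", "approach"]  # hypothetical robot behaviors

class SocialQNet(nn.Module):
    """Q-network over a state that fuses visual features and emotion scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VISUAL_DIM + EMOTION_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, len(ACTIONS)),
        )

    def forward(self, visual, emotion):
        return self.net(torch.cat([visual, emotion], dim=-1))

qnet = SocialQNet()
optimizer = optim.Adam(qnet.parameters(), lr=1e-4)
replay = deque(maxlen=10_000)   # transitions stored as tuples of tensors
gamma, epsilon = 0.99, 0.1

def select_action(visual, emotion):
    # Epsilon-greedy choice over the hypothetical behavior set.
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(visual, emotion).argmax())

def train_step(batch_size=32):
    # One simplified DQN update on sampled transitions (no target network).
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    v, e, a, r, v2, e2 = (torch.stack(x) for x in zip(*batch))
    q = qnet(v, e).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * qnet(v2, e2).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example transition with random placeholder features (illustrative only):
v, e = torch.randn(VISUAL_DIM), torch.rand(EMOTION_DIM)
a = select_action(v, e)
replay.append((v, e, torch.tensor(a), torch.tensor(1.0),
               torch.randn(VISUAL_DIM), torch.rand(EMOTION_DIM)))
train_step(batch_size=1)
```

In the actual system, the reward signal would come from the interaction outcome observed in the simulator, and the visual and emotion features would be produced by the robot's perception pipeline rather than random placeholders.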
ABSTRACT
This research presents the technical considerations for implementing the CeCi (Computer Electronic Communication Interface) social robot. This robot responds to the need for technological development in an emerging country, with the aim of social impact and social interaction. Two problems with the social robots currently on the market are the main focus of this research. First, their cost is not affordable for companies, universities, or individuals in emerging countries. Second, their design is oriented exclusively to the functional part, reflecting the vision of the engineers who create them without considering the vision, preferences, or requirements of the end users, especially for social interaction. This last issue ends up causing an aversion to the use of this type of robot. In response to the issues raised, a low-cost prototype is proposed, starting from a commercial platform for research development and using open-source code. The robot design presented here is centered on the criteria and preferences of the end user, prioritizing acceptability for social interaction. This article details the selection process and hardware capabilities of the robot. Moreover, a programming section introduces the different software packages used and adapted for social interaction, the main functions implemented, and the new and original parts of the proposal. Finally, a list of applications currently developed with the robot and possible applications for future research are discussed.
Subject(s)
Robotics, Engineering, Humans, Social Interaction, Software, User-Computer Interface
ABSTRACT
The lack of interest of children at school is one of the biggest problems that Mexican education faces. Two important factors causing this lack of interest are the predominant methodology used in Mexican schools and technology acting as a barrier to attention. The methodology that institutions have followed has become an issue because of its very traditional approach, with the professor delivering all the theoretical material while the students listen and memorize the contents; in addition, with growing access to technological devices, children carrying a phone are more likely to be distracted. This study aims to integrate technology through assistive robots as a beneficial tool for educators, in order to improve the attention span of students by making the learning process in multiple areas of the Mexican curriculum more dynamic and thereby obtaining better results. To test this, four different approaches were implemented, three in elementary schools and one in higher education: the LEGO® robotic kit and the NAO robot for STEM (science, technology, engineering, and mathematics) teaching, the NAO robot for physical education (PE), and the PhantomX Hexapod, respectively. Each of these technological approaches was applied with both control and experimental groups, in order to compare the data and draw conclusions. Finally, this study shows that attention span is indeed improved by implementing robotic platforms during the teaching process, allowing the children to become more motivated during their PE class and to become more proactive and retain more information during their STEM classes.
Subject(s)
Robotic Surgical Procedures, Robotics, Child, Developed Countries, Humans, Physical Education and Training, Technology
ABSTRACT
INTRODUCTION: We present Lil'Flo, a socially assistive robotic telerehabilitation system for deployment in the community. As shortages of rehabilitation professionals grow, especially in rural areas, there is an increasing need to deliver care in the communities where patients live, work, learn, and play. Traditional telepresence, while useful, fails to deliver the rich interactions and data needed for motor rehabilitation and assessment. METHODS: We designed Lil'Flo for pediatric patients with cerebral palsy and brachial plexus injuries, using results from prior usability studies. The system combines traditional telepresence and computer vision with a humanoid that can play games with patients and guide them in a present and engaging way under the supervision of a remote clinician. We surveyed 13 rehabilitation clinicians in a virtual usability test to evaluate the system. RESULTS: The system is more portable, extensible, and cheaper than our prior iteration, with an expressive humanoid. The virtual usability testing shows that clinicians believe Lil'Flo could be deployed in rural and elder care facilities and is more capable of supporting remote stretching, strength building, and motor assessments than traditional video-only telepresence. CONCLUSIONS: Lil'Flo represents a novel approach to delivering rehabilitation care in the community while maintaining the clinician-patient connection.