ABSTRACT
BACKGROUND: With the fast-paced advancement of robot technology, human-robot interaction (HRI) has become increasingly common and complex, and self-efficacy in HRI has received extensive attention. Despite this popularity, the topic remains understudied in China. OBJECTIVE: To provide a psychometrically sound instrument for use in China, this study aimed to translate and validate the Self-Efficacy in Human-Robot Interaction Scale (SE-HRI) in two Chinese adult samples (N1 = 300, N2 = 500). METHODS: The data were analyzed using SPSS 26.0 and Amos 24.0. Item analysis and exploratory factor analysis were conducted on the Sample 1 data. Confirmatory factor analysis, criterion-related validity analysis, and reliability analysis were then performed on the Sample 2 data. RESULTS: The results revealed that the Chinese SE-HRI scale consists of 13 items in a two-factor model with good model fit. Moreover, general self-efficacy and willingness to accept the use of artificial intelligence (AI) were both positively correlated with self-efficacy in HRI, whereas negative attitudes toward robots showed an inverse correlation, indicating that the Chinese SE-HRI scale has good criterion-related validity. CONCLUSION: The Chinese SE-HRI scale is a reliable tool for assessing self-efficacy in HRI in China. The study discusses implications and limitations and suggests future research directions.
ABSTRACT
A respiratory distress estimation technique for telephony previously proposed by the authors is adapted and evaluated in real static and dynamic HRI scenarios. The system is evaluated with a telephone dataset re-recorded using the robotic platform designed and implemented for this study. In addition, the original telephone training data are modified using an environmental model that incorporates natural robot-generated and external noise sources and reverberant effects using room impulse responses (RIRs). The results indicate that the average accuracy and AUC are only 0.4% lower than those obtained under matched training/testing conditions with simulated data. Surprisingly, there is little difference in accuracy and AUC between the static and dynamic HRI conditions. Moreover, the delay-and-sum and MVDR beamforming methods lead to average improvements in accuracy and AUC of 8% and 2%, respectively, when applied to both training and testing data. Regarding the complementarity of time-dependent and time-independent features, the combination of both types of classifiers provides the best joint accuracy and AUC score.
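For readers unfamiliar with the beamforming step mentioned above, the sketch below illustrates a minimal delay-and-sum beamformer in Python. The array geometry, sampling rate, and integer-sample alignment are illustrative assumptions; the paper's actual beamformer configuration (and its MVDR counterpart) is not reproduced here.

```python
# Minimal delay-and-sum beamformer sketch (assumptions: a linear microphone
# array, a far-field source direction, and integer-sample delays).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def delay_and_sum(signals, mic_positions, source_angle_rad, fs):
    """Align and average multichannel audio toward a given direction.

    signals:          (n_mics, n_samples) array of time-domain signals
    mic_positions:    (n_mics,) positions along the array axis in meters
    source_angle_rad: direction of arrival relative to the array axis
    fs:               sampling rate in Hz
    """
    n_mics, n_samples = signals.shape
    # Far-field model: delay of each microphone relative to the array origin.
    delays = mic_positions * np.cos(source_angle_rad) / SPEED_OF_SOUND
    delays -= delays.min()                       # make all delays non-negative
    sample_shifts = np.round(delays * fs).astype(int)

    aligned = np.zeros((n_mics, n_samples))
    for m in range(n_mics):
        s = sample_shifts[m]
        aligned[m, : n_samples - s] = signals[m, s:]  # advance delayed channels
    return aligned.mean(axis=0)                  # coherent sum boosts the target direction


if __name__ == "__main__":
    fs = 16000
    mics = np.array([0.0, 0.05, 0.10, 0.15])     # hypothetical 4-mic linear array
    noisy = np.random.randn(4, fs)               # placeholder multichannel input
    enhanced = delay_and_sum(noisy, mics, np.pi / 3, fs)
```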
Subject(s)
Robotics, Humans, Dyspnea, Records
ABSTRACT
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity that affects a large number of young people worldwide. Current treatments for children living with ADHD combine different approaches, such as pharmacological, behavioral, cognitive, and psychological treatment. However, the computer science research community has been working on non-pharmacological treatments for ADHD based on novel technologies. For instance, social robots are physically embodied agents with some autonomy and social interaction capabilities; nowadays, these social robots are used in therapy sessions as mediators between therapists and children living with ADHD. Another novel technology for dealing with ADHD is serious video games based on a brain-computer interface (BCI), which can offer cognitive and neurofeedback training to children living with ADHD. This paper presents a systematic review of the current state of the art of these two technologies. As a result of this review, we identified the maturation level of systems based on these technologies and how they have been evaluated. Additionally, we highlight the ethical and technological challenges that must be faced to improve these recently introduced technologies in healthcare.
ABSTRACT
Children with autism spectrum disorder (ASD) have deficits in social interaction and in expressing and understanding emotions. On this basis, robots have been proposed to support children with ASD. However, few studies have addressed how to design a social robot for children with ASD. Non-experimental studies have been carried out to evaluate social robots; however, the general methodology that should be used to design a social robot remains unclear. This study proposes a design path for a social robot for emotional communication for children with ASD, following a user-centered design approach. The design path was applied to a case study and evaluated by a group of experts in psychology, human-robot interaction, and human-computer interaction from Chile and Colombia, as well as by parents of children with ASD. Our results show that following the proposed design path for a social robot to communicate emotions with children with ASD is favorable.
Subject(s)
Autism Spectrum Disorder, Robotics, Social Interaction, Child, Humans, Autism Spectrum Disorder/psychology, Communication, Emotions, Robotics/methods
ABSTRACT
Social robotics is a branch of human-robot interaction dedicated to developing systems that allow robots to operate in unstructured environments shared with human beings. Social robots must interact with humans by understanding social signals and responding to them appropriately. Most social robots are still pre-programmed and have little ability to learn and respond with appropriate actions during an interaction with humans. More recent and elaborate methods use body movements, gaze direction, and body language; however, these methods generally neglect important cues present during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide, autonomously, which behaviors to perform as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers a way for social robots to learn advanced models of social cognition in a self-learning paradigm, using features automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment by abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is the development of a DRL system to promote natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, the Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This is a relevant contribution to the area, since social signals must not only be recognized by the robot but also guide it toward actions appropriate to the situation at hand. Features extracted from people's faces are used to estimate the human emotional state, with the aim of improving the robot's perception. The development and validation of the system are carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way.
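To make the role of the emotional state in the learning loop concrete, the following minimal sketch shows a DQN-style Q-network whose input state concatenates visual features with a one-hot emotion label, in the spirit of SocialDQN. The state layout, action set, network sizes, and reward are illustrative assumptions rather than the architecture described in the paper.

```python
# Sketch of a DQN whose state includes a detected human emotion.
# The emotion set, behaviors, and reward below are illustrative only.
import random
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]  # one-hot part of the state
ACTIONS = ["wait", "look", "wave", "handshake"]               # example social behaviors


class QNet(nn.Module):
    def __init__(self, n_visual_features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_visual_features + len(EMOTIONS), 64),
            nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),
        )

    def forward(self, x):
        return self.net(x)


def make_state(visual_features, emotion):
    one_hot = torch.zeros(len(EMOTIONS))
    one_hot[EMOTIONS.index(emotion)] = 1.0
    return torch.cat([visual_features, one_hot])


def td_update(qnet, optimizer, s, a, r, s_next, gamma=0.99):
    # Standard one-step temporal-difference target for Q-learning.
    q = qnet(s)[a]
    with torch.no_grad():
        target = r + gamma * qnet(s_next).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    qnet = QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    s = make_state(torch.randn(16), "happy")
    s_next = make_state(torch.randn(16), "happy")
    a = random.randrange(len(ACTIONS))
    td_update(qnet, opt, s, a, r=1.0, s_next=s_next)  # reward the engaging behavior
```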
ABSTRACT
This research presents the technical considerations for implementing the CeCi (Computer Electronic Communication Interface) social robot. The robot responds to the need for technological development in an emerging country, with the aim of achieving social impact and social interaction. Two problems with the social robots currently on the market are the main focus of this research. First, their costs are not affordable for companies, universities, or individuals in emerging countries. Second, their design is oriented exclusively toward functionality, reflecting the vision of the engineers who create them without considering the vision, preferences, or requirements of the end users, especially regarding social interaction. This last issue ends up causing aversion to the use of this type of robot. In response to these issues, a low-cost prototype is proposed, starting from a commercial platform for research development and using open source code. The robot design presented here is centered on the criteria and preferences of the end user, prioritizing acceptability for social interaction. This article details the selection process and hardware capabilities of the robot. Moreover, a programming section introduces the different software packages used and adapted for social interaction, the main functions implemented, and the new and original parts of the proposal. Finally, a list of applications currently developed with the robot and possible applications for future research are discussed.
Subject(s)
Robotics, Engineering, Humans, Social Interaction, Software, User-Computer Interface
ABSTRACT
Social robotics is an emerging area that is becoming present in social spaces through the introduction of autonomous social robots. Social robots offer services, perform tasks, and interact with people in such social environments, demanding more efficient and complex Human-Robot Interaction (HRI) designs. One strategy to improve HRI is to provide robots with the capacity to detect the emotions of the people around them in order to plan a trajectory, modify their behaviour, and generate an appropriate interaction based on the analysed information. However, in social environments in which it is common to find groups of people, new approaches are needed to enable robots to recognise groups of people and the emotion of the group, which can also be associated with the scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, these works do not perform the recognition tasks from a robocentric perspective that considers the sensory capacity of robots. In this context, a system is presented that recognises scenes in terms of groups of people and then detects the global (prevailing) emotion in a scene. The proposed approach to visualising and recognising emotions in typical HRI is based on the face size of the people recognised by the robot during its navigation (face sizes decrease as the robot moves away from a group of people). On each frame of the visual sensor's video stream, individual emotions are recognised using the Visual Geometry Group (VGG) neural network pre-trained for face recognition (VGGFace); the individual emotions are then aggregated with a fusion method to determine the emotion of the frame, and the emotions of a scene's constituent frames are aggregated in turn to determine the global (prevalent) emotion of the scene (group of people). Additionally, this work proposes a strategy for creating datasets of images/videos with which to validate the estimation of scene-level and personal emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS) from videos captured by robots through their sensory capabilities. Tests are performed in two simulated environments in ROS/Gazebo: a museum and a cafeteria. Results show that the accuracy of individual emotion detection is 99.79%, and the accuracy of group emotion (scene emotion) detection in each frame is 90.84% and 89.78% in the cafeteria and museum scenarios, respectively.
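The two-level aggregation described above (faces to frame, frames to scene) can be illustrated with the following minimal sketch, which weights each face's predicted emotion by its bounding-box area and then takes a majority over frames. The label set and the size-based weighting are assumptions for illustration; the paper's exact fusion method is not reproduced.

```python
# Per-face emotions are fused into a frame-level emotion (weighted here by
# face size, since larger faces are closer to the robot), and frame-level
# emotions are fused into a scene-level (group) emotion.
from collections import Counter, defaultdict


def frame_emotion(face_predictions):
    """face_predictions: list of (emotion_label, face_area_in_pixels)."""
    scores = defaultdict(float)
    for label, area in face_predictions:
        scores[label] += area  # closer (larger) faces contribute more
    return max(scores, key=scores.get)


def scene_emotion(frame_emotions):
    """Prevailing emotion over all frames of the scene (simple majority)."""
    return Counter(frame_emotions).most_common(1)[0][0]


if __name__ == "__main__":
    frames = [
        [("happy", 5200), ("neutral", 1800), ("happy", 3000)],
        [("happy", 4900), ("sad", 2100)],
        [("neutral", 6000), ("happy", 2500)],
    ]
    per_frame = [frame_emotion(f) for f in frames]  # ['happy', 'happy', 'neutral']
    print(scene_emotion(per_frame))                 # 'happy'
```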
Subject(s)
Robotics, Emotions, Humans, Robotics/methods, Social Interaction, Social Perception
ABSTRACT
Augmented humanity (AH) is a term that has been mentioned in several research papers; however, these papers differ in their definitions of AH. The number of publications dealing with AH has grown steadily over time and includes high-impact scientific contributions, yet the terminology is used without being formally defined. The aim of this paper is to carry out a systematic mapping review of the existing definitions of AH and its possible application areas. Publications from 2009 to 2020 were searched in the Scopus, IEEE, and ACM databases using the search terms "augmented human", "human augmentation", and "human 2.0". Of the 16,914 publications initially obtained, 133 were finally selected. The mapping results show a growing focus on works based on AH, with computer vision being the index term with the highest number of published articles. Other index terms are wearable computing, augmented reality, human-robot interaction, smart devices, and mixed reality. AH is present across several domains, with works in computer science, engineering, robotics, automation and control systems, and telecommunications. This review demonstrates the need to formalize the definition of AH and to identify the areas of work most open to the use of this concept. The following definition is therefore proposed: "Augmented humanity is a human-computer integration technology that proposes to improve capacity and productivity by changing or increasing the normal ranges of human function through the restoration or extension of human physical, intellectual and social capabilities".
Subject(s)
Augmented Reality, Robotics, Automation, Humans
ABSTRACT
Research on affective communication for socially assistive robots has been conducted to enable physical robots to perceive, express, and respond emotionally. However, the use of affective computing in social robots has been limited, especially when social robots are designed for children, and in particular for those with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people following social behaviors and rules. However, interactions between a child and a robot may change or differ from those with an adult, or when the child has an emotional deficit. In this study, we systematically reviewed studies related to computational models of emotions for children with ASD. We used the Scopus, WoS, Springer, and IEEE-Xplore databases to answer research questions related to the definition, interaction, and design of computational models supported by theoretical psychology approaches from 1997 to 2021. Our review found 46 articles; not all of the studies considered children or those with ASD.
Subject(s)
Autism Spectrum Disorder, Robotics, Child, Communication, Emotions, Humans, Social Behavior
ABSTRACT
INTRODUCTION: We present Lil'Flo, a socially assistive robotic telerehabilitation system for deployment in the community. As shortages of rehabilitation professionals increase, especially in rural areas, there is a growing need to deliver care in the communities where patients live, work, learn, and play. Traditional telepresence, while useful, fails to deliver the rich interactions and data needed for motor rehabilitation and assessment. METHODS: We designed Lil'Flo, targeted towards pediatric patients with cerebral palsy and brachial plexus injuries, using results from prior usability studies. The system combines traditional telepresence and computer vision with a humanoid, which can play games with patients and guide them in a physically present and engaging way under the supervision of a remote clinician. We surveyed 13 rehabilitation clinicians in a virtual usability test to evaluate the system. RESULTS: The system is more portable, extensible, and cheaper than our prior iteration, with an expressive humanoid. The virtual usability testing shows that clinicians believe Lil'Flo could be deployed in rural and elder care facilities and is more capable of supporting remote stretching, strength building, and motor assessments than traditional video-only telepresence. CONCLUSIONS: Lil'Flo represents a novel approach to delivering rehabilitation care in the community while maintaining the clinician-patient connection.
ABSTRACT
What are the benefits of using a socially assistive robot for long-term cardiac rehabilitation? To answer this question, we designed and conducted a real-world long-term study, in collaboration with medical specialists at the Fundación Cardioinfantil-Instituto de Cardiología clinic (Bogotá, Colombia), lasting 2.5 years. The study took place within the outpatient phase of the patients' cardiac rehabilitation programme and aimed to compare patient progress and adherence in the conventional cardiac rehabilitation programme (control condition) against rehabilitation supported by a fully autonomous socially assistive robot that continuously monitored the patients during exercise to provide immediate feedback and motivation based on sensory measures (robot condition). The explicit aim of the social robot is to improve patient motivation and increase adherence to the programme to ensure a complete recovery. We recruited 15 patients per condition. The cardiac rehabilitation programme was designed to last 36 sessions (18 weeks) per patient. The findings suggest that the robot increases adherence (by 13.3%) and leads to faster completion of the programme. In addition, the patients assisted by the robot showed more rapid improvement in their recovery heart rate, better physical activity performance, and greater improvement in cardiovascular functioning, all of which indicate a successful cardiac rehabilitation programme. Moreover, the medical staff and the patients acknowledged that the robot improved patient motivation and adherence to the programme, supporting its potential to address major challenges in rehabilitation programmes.
ABSTRACT
The COVID-19 pandemic has affected the population worldwide, evidencing new challenges and opportunities for several kinds of emergent and existing technologies. Socially Assistive Robotics could be a potential tool to support clinical care areas, promoting physical distancing and reducing the contagion rate. In this context, this paper presents a long-term evaluation of a social robotic platform for gait neurorehabilitation. The robot's primary roles are monitoring physiological progress and promoting social interaction with human distancing during the sessions. A clinical validation with ten patients over 15 sessions was conducted in a rehabilitation center located in Colombia. Results showed that the robot's support improves the patients' physiological progress by reducing their time in unhealthy spinal postures, with positive acceptance: 65% of patients described the platform as helpful and secure. Regarding the robot's role within the therapy, the health care staff agreed (>95%) that this tool can promote physical distancing and is highly useful for supporting neurorehabilitation throughout the pandemic. These outcomes suggest the benefits of further deploying this tool during the pandemic.
ABSTRACT
Worldwide demographic projections point to a progressively older population. This fact has fostered research on Ambient Assisted Living, which includes developments in smart homes and social robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data are available. Human activity recognition is one of the most active fields of research within this context. Proposed approaches vary according to the input modality and the environments considered. Unlike others, this paper addresses the problem of recognising heterogeneous activities of daily living centred in home environments by considering data from videos, wearable IMUs, and ambient sensors simultaneously. For this, two contributions are presented. The first is the creation of the Heriot-Watt University/University of Sao Paulo (HWU-USP) activities dataset, which was recorded at the Robotic Assisted Living Testbed at Heriot-Watt University. This dataset differs from other multimodal datasets in that it consists of daily living activities with either periodical patterns or long-term dependencies, captured in a very rich and heterogeneous sensing environment. In particular, it combines data from a humanoid robot's RGBD (RGB + depth) camera with inertial sensors from wearable devices and ambient sensors from a smart home. The second contribution is a Deep Learning (DL) framework that provides multimodal activity recognition based on videos, inertial sensors, and ambient sensors from the smart home, used on their own or fused with each other. The classification framework was also validated on our dataset and on the University of Texas at Dallas Multimodal Human Activities Dataset (UTD-MHAD), a widely used benchmark for activity recognition based on videos and inertial sensors, providing a comparative analysis between the results on the two datasets. Results demonstrate that the introduction of data from ambient sensors substantially improved the accuracy.
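As an illustration of the kind of multimodal fusion discussed above, the sketch below shows a simple late-fusion classifier in which separate encoders for video, IMU, and ambient-sensor features are concatenated before a classification head. Input dimensions, layer sizes, and the number of classes are placeholders, not the paper's actual architecture.

```python
# Minimal late-fusion sketch for three modalities (video features, wearable
# IMU, ambient sensors). All sizes are placeholders.
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim=512, imu_dim=64, ambient_dim=16, n_classes=9):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 128), nn.ReLU())
        self.imu_enc = nn.Sequential(nn.Linear(imu_dim, 32), nn.ReLU())
        self.ambient_enc = nn.Sequential(nn.Linear(ambient_dim, 16), nn.ReLU())
        self.head = nn.Linear(128 + 32 + 16, n_classes)

    def forward(self, video, imu, ambient):
        # Encode each modality separately, then concatenate before classifying.
        fused = torch.cat(
            [self.video_enc(video), self.imu_enc(imu), self.ambient_enc(ambient)],
            dim=-1,
        )
        return self.head(fused)  # class logits over the activity labels


if __name__ == "__main__":
    model = LateFusionClassifier()
    logits = model(torch.randn(4, 512), torch.randn(4, 64), torch.randn(4, 16))
    print(logits.shape)  # torch.Size([4, 9])
```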
Subject(s)
Activities of Daily Living, Wearable Electronic Devices, Algorithms, Ambient Intelligence, Human Activities, Humans
ABSTRACT
This paper presents the development and validation of a polymer optical-fiber strain-gauge sensor, based on the light-coupling principle, to measure the axial deformation of elastic tendons incorporated in soft actuators for wearable assistive robots. An analytical model was proposed and validated with experimental tests, showing a correlation coefficient of R = 0.998 between experimental and theoretical data, a maximum axial displacement range of 15 mm, and no significant hysteresis. Furthermore, experimental tests were carried out with the validated sensor attached to the elastic tendon. Results of three experimental tests show the sensor's capability to measure the tendon's response under tensile axial stress, finding 20.45% hysteresis in the material's response between the stretching and recovery phases. Based on these results, the fiber-optic strain sensor shows potential for future applications in the characterization of such tendons and in the identification of dynamic models that support understanding of the material's response, toward the development of more efficient interaction-control strategies.
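One common way to quantify the hysteresis figure reported above is the area enclosed between the loading (stretching) and unloading (recovery) curves, expressed as a percentage of the area under the loading curve. The sketch below uses that definition with synthetic data; it may differ from the exact metric used by the authors.

```python
# Sketch of a common hysteresis metric; the curves below are synthetic.
import numpy as np


def trapezoid_area(y, x):
    # Trapezoidal rule, written out to avoid version-specific numpy helpers.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))


def hysteresis_percent(displacement, loading, recovery):
    load_area = trapezoid_area(loading, displacement)
    recovery_area = trapezoid_area(recovery, displacement)
    return 100.0 * (load_area - recovery_area) / load_area


if __name__ == "__main__":
    x = np.linspace(0.0, 15.0, 200)                     # displacement in mm (0-15 mm range)
    loading = 0.8 * x                                    # synthetic output while stretching
    recovery = 0.8 * x - 1.5 * np.sin(np.pi * x / 15.0)  # synthetic output while recovering
    print(f"{hysteresis_percent(x, loading, recovery):.2f}% hysteresis")
```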
ABSTRACT
BACKGROUND: Collaborative robots are used in rehabilitation and are designed to interact with the client so as to provide the ability to assist walking therapeutically. One such device is the KineAssist, which was designed to interact, either in a self-driven mode (SDM) or in an assist mode (AM), with neurologically impaired individuals while they are walking on a treadmill surface. To understand the level of transparency (i.e., interference with movement due to the mechanical interface) between human and robot, and to estimate and account for changes in the kinetics and kinematics of the gait pattern, we tested the KineAssist under conditions of self-drive and horizontal push assistance. The aims of this study were to compare the joint kinematics, forces, and moments during walking at a fixed constant treadmill belt speed and constrained walking cadence, with and without the robotic device (OUT), and to compare the biomechanics of the assistive and self-drive modes of the device. METHOD: Twenty non-neurologically impaired adults participated in this study. We evaluated biomechanical parameters of walking at a fixed constant treadmill belt speed (1.0 m/s), with and without the robotic device in assistive mode. We also tested the self-drive condition, which enables the user to drive the speed and direction of the treadmill belt. Hip, knee, and ankle angular displacements, ground reaction forces, hip, knee, and ankle moments, and center of mass displacement were compared "in" vs "out" of the device. A repeated-measures ANOVA was applied with the three-level factor of condition (OUT, AM, and SDM), with each participant serving as their own control. RESULTS: When comparing "in" and "out" of the device, we did not observe any interruptions and/or reversals of direction of the basic gait pattern trajectory, but there were increased ankle and hip angular excursions, vertical ground reaction force, and hip moments, and reduced center of mass displacement during the "in device" condition. Comparing assistive vs self-drive mode in the device, participants had a more flexed posture and accentuated hip moments and propulsive force, but reduced braking force. CONCLUSIONS: Although the magnitudes and/or range of certain gait pattern components were altered by the device, we did not observe any interruption by the mechanical interface of the advancement of the trajectories, nor reversals in direction of movement, which suggests that the KineAssist permits relative transparency (i.e., lack of interference with movement by the device mechanism) of the individual's gait pattern. However, there are interactive forces to take into account, which appear to be overcome by kinematic and kinetic adjustments.
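The statistical design described above (a repeated-measures ANOVA with a three-level condition factor, each participant serving as their own control) can be reproduced with standard tools; the sketch below uses statsmodels' AnovaRM on synthetic long-format data. The column names and values are illustrative only.

```python
# Repeated-measures ANOVA sketch with a three-level condition factor
# (OUT, AM, SDM). The data are synthetic; column names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects, conditions = range(20), ["OUT", "AM", "SDM"]

rows = [
    {"subject": s,
     "condition": c,
     # e.g. peak hip moment for participant s under condition c
     "hip_moment": 1.0 + 0.1 * conditions.index(c) + rng.normal(0, 0.05)}
    for s in subjects
    for c in conditions
]
df = pd.DataFrame(rows)

result = AnovaRM(data=df, depvar="hip_moment", subject="subject",
                 within=["condition"]).fit()
print(result)  # F test for the within-subject effect of condition
```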
Subject(s)
Gait/physiology, Robotics/instrumentation, Adult, Biomechanical Phenomena, Female, Humans, Male, Walking, Young Adult
ABSTRACT
In this work, we present a multiclass hand posture classifier useful for human-robot interaction tasks. The proposed system is based exclusively on visual sensors and achieves real-time performance while detecting and recognizing an alphabet of four hand postures. The approach is based on the real-time deformable detector, a boosting-trained classifier. We describe a methodology for designing the ensemble of real-time deformable detectors (one for each hand posture to be classified). Given the lack of standard procedures for performance evaluation, we also propose the use of full-image evaluation for this purpose. This evaluation methodology provides a more realistic estimation of the method's performance. We measured the performance of the proposed system and compared it to that obtained using only the sampled-window approach, and we present detailed results of these tests on a benchmark dataset. Our results show that the system can operate in real time at a frame rate of about 10 fps.
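The full-image evaluation and the one-detector-per-posture ensemble described above imply a simple decision rule: scan the whole image and report the posture whose detector produces the highest score above a threshold. The sketch below illustrates that rule with placeholder scoring functions; the actual real-time deformable detectors are not reproduced.

```python
# Ensemble decision rule sketch: one scorer per hand posture scans the full
# image and the highest-scoring posture above a threshold wins. The
# `score_window` callables are placeholders, not the paper's detectors.
import numpy as np

POSTURES = ["fist", "palm", "point", "ok"]  # example four-posture alphabet


def sliding_windows(image, size=64, stride=32):
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (x, y), image[y:y + size, x:x + size]


def classify_full_image(image, detectors, threshold=0.3):
    """detectors: dict posture -> callable(window) -> score in [0, 1]."""
    best = ("none", threshold, None)
    for pos, window in sliding_windows(image):
        for posture, score_window in detectors.items():
            s = score_window(window)
            if s > best[1]:
                best = (posture, s, pos)
    return best  # (posture, score, window position)


if __name__ == "__main__":
    # Placeholder scorers based on window brightness, just to exercise the loop.
    detectors = {p: (lambda win, i=i: float(win.mean() / 255.0) * (0.6 + 0.1 * i))
                 for i, p in enumerate(POSTURES)}
    frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
    print(classify_full_image(frame, detectors))
```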