Results 1 - 10 of 10
1.
Comput Intell Neurosci ; 2023: 1394882, 2023.
Article in English | MEDLINE | ID: mdl-37954097

ABSTRACT

Facial expression is the best evidence of our emotions. Its automatic detection and recognition are key for robotics, medicine, healthcare, education, psychology, sociology, marketing, security, entertainment, and many other areas. Experiments in laboratory environments achieve high performance; however, real-world scenarios remain challenging. Deep learning techniques based on convolutional neural networks (CNNs) have shown great potential. Most of the research is exclusively model-centric, searching for better algorithms to improve recognition; however, progress is insufficient. Although datasets are the main resource for automatic learning, few works focus on improving their quality. We propose a novel data-centric method to tackle misclassification, a problem commonly encountered in facial image datasets. The strategy is to progressively refine the dataset by successively training a fixed CNN model. Each training uses the facial images corresponding to the correct predictions of the previous training, allowing the model to capture more distinctive features of each class of facial expression. After the last training, the model performs an automatic reclassification of the whole dataset. Unlike other similar work, our method avoids modifying, deleting, or augmenting facial images. Experimental results on three representative datasets prove the effectiveness of the proposed method, improving the validation accuracy by 20.45%, 14.47%, and 39.66% for FER2013, NHFI, and AffectNet, respectively. The recognition rates on the reclassified versions of these datasets are 86.71%, 70.44%, and 89.17%, respectively, which constitute state-of-the-art performance.
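A minimal sketch of the progressive-refinement idea described above, assuming a small Keras CNN as a stand-in for the paper's fixed architecture; the input shape, number of rounds, and training settings are illustrative placeholders, not the authors' configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(48, 48, 1), n_classes=7):
    # Small CNN used only as a stand-in for the fixed architecture in the paper.
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def progressive_refinement(x, y, n_rounds=3, epochs=10):
    """Retrain on the subset the model predicted correctly, then relabel everything."""
    keep = np.arange(len(x))                      # start with the full dataset
    model = None
    for _ in range(n_rounds):
        if len(keep) == 0:
            break                                 # nothing left to train on
        model = build_model()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x[keep], y[keep], epochs=epochs, verbose=0)
        pred = np.argmax(model.predict(x[keep], verbose=0), axis=1)
        keep = keep[pred == y[keep]]              # retain correct predictions only
    # Final step: reclassify (relabel) every image with the last model.
    return np.argmax(model.predict(x, verbose=0), axis=1)
```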


Subjects
Facial Recognition, Robotics, Neural Networks (Computer), Algorithms, Face, Facial Expression
2.
Comput Intell Neurosci ; 2021: 5532580, 2021.
Article in English | MEDLINE | ID: mdl-34220998

ABSTRACT

Around 5% of the world population suffers from hearing impairment. One of its main barriers is communication with others, which can lead to social exclusion and frustration. To overcome this issue, this paper presents a system to interpret the Spanish sign language alphabet, making communication possible in those cases where it is necessary to sign proper nouns such as names, streets, or trademarks. To this end, we first generated an image dataset of the 30 signed letters composing the Spanish alphabet. Then, given that there are static and in-motion letters, two different kinds of neural networks were tested and compared: convolutional neural networks (CNNs) and recurrent neural networks (RNNs). A comparative analysis of the experimental results highlights the importance of the spatial dimension with respect to the temporal dimension in sign interpretation: CNNs obtain much better accuracy, with a maximum value of 96.42%.
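A compact sketch contrasting the two model families compared above: a CNN that classifies a single frame (static letters) and an RNN that classifies a short frame sequence (in-motion letters). Input shapes, layer sizes, and the sequence length are assumptions for illustration, not the authors' settings.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_LETTERS = 30  # signed letters composing the Spanish alphabet

def frame_cnn(h=64, w=64):
    # One image in -> one letter out (static letters).
    return keras.Sequential([
        layers.Input(shape=(h, w, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(N_LETTERS, activation="softmax"),
    ])

def sequence_rnn(timesteps=16, h=64, w=64):
    # A short clip in -> one letter out; per-frame features feed an LSTM (in-motion letters).
    return keras.Sequential([
        layers.Input(shape=(timesteps, h, w, 3)),
        layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
        layers.TimeDistributed(layers.GlobalAveragePooling2D()),
        layers.LSTM(64),
        layers.Dense(N_LETTERS, activation="softmax"),
    ])
```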


Subjects
Deep Learning, Sign Language, Humans, Language, Motion (Physics), Neural Networks (Computer)
3.
Sensors (Basel) ; 21(4)2021 Feb 04.
Article in English | MEDLINE | ID: mdl-33557363

ABSTRACT

Over time, the field of robotics has provided solutions to automate routine tasks in different scenarios. In particular, libraries are awakening great interest in automated tasks since they are semi-structured environments where machines coexist with humans and several repetitive operations could be performed automatically. In addition, multirotor aerial vehicles have become very popular in many applications over the past decade; however, autonomous flight in confined spaces still presents a number of challenges, and the use of small drones as automated inventory devices within libraries has not been reported. This paper presents the UJI aerial librarian robot, which leverages computer vision techniques to autonomously self-localize and navigate in a library for automated inventory and book localization. A control strategy to navigate along the library bookcases is presented, using visual markers for self-localization during a visual inspection of bookshelves. An image-based book recognition technique is described that combines computer vision techniques to detect the tags on the book spines, followed by an optical character recognizer (OCR) to convert the book code on the tags into text. These data can be used for library inventory. Misplaced books can be automatically detected, and a particular book can be located within the library. Our quadrotor robot was tested in a real library with promising results. The problems encountered and the limitations of the system are discussed, along with its relation to similar applications, such as automated inventory in warehouses.
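An illustrative sketch of the book-identification step only: crop candidate tag regions from a shelf image and convert their printed codes to text with Tesseract OCR. The naive threshold-and-contour tag localisation, image path, and size filter below are assumptions, not the paper's detection pipeline.

```python
import cv2
import pytesseract

def read_spine_tags(image_path, min_area=500):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Tags are assumed to be bright labels on darker book spines.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    codes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue  # skip small specks
        tag = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(tag, config="--psm 7").strip()
        if text:
            codes.append(text)
    return codes  # book codes to compare against the inventory database
```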

4.
Sensors (Basel) ; 19(20)2019 Oct 18.
Article in English | MEDLINE | ID: mdl-31635278

ABSTRACT

There are great physical and cognitive benefits for older adults who are engaged in active aging, a process that should involve daily exercise. In our previous work on the PHysical Assistant RObot System (PHAROS), we developed a system that proposed and monitored physical activities. The system used a social robot to analyse, by means of computer vision, the exercise a person was doing. Then, a recommender system analysed the exercise performed and indicated which exercise to perform next. However, the system needed certain improvements. On the one hand, the vision system captured the movement of the person and indicated only whether the exercise had been done correctly or not. On the other hand, the recommender system was based purely on a ranking that did not take into account temporal evolution or preferences. In this work, we propose an evolution of PHAROS, PHAROS 2.0, incorporating improvements in both of these aspects. Regarding motion capture, we are now able to indicate the degree of completeness of each exercise, identify the part that has not been done correctly, and provide real-time performance correction. In this way, the recommender system receives a greater amount of information and can more accurately indicate the exercise to be performed. Regarding the recommender system, an algorithm was developed to weigh performance, temporal evolution, and preferences, providing a more accurate recommendation, as well as expanding the recommendation to a batch of exercises instead of just one.
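A minimal sketch of the kind of weighted scoring the recommender describes: each candidate exercise is ranked by a combination of recent performance, its trend over time, and the user's preference. The weights, field names, and the choice to recommend the lowest-scoring exercises first are illustrative assumptions, not the PHAROS 2.0 formula.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExerciseHistory:
    name: str
    completeness: List[float]  # 0..1 per past session, oldest first
    preference: float          # 0..1, user-stated liking

def score(ex: ExerciseHistory, w_perf=0.5, w_trend=0.3, w_pref=0.2) -> float:
    recent = ex.completeness[-1] if ex.completeness else 0.0
    trend = 0.0
    if len(ex.completeness) >= 2:
        trend = ex.completeness[-1] - ex.completeness[0]  # crude temporal evolution
    return w_perf * recent + w_trend * trend + w_pref * ex.preference

def recommend_batch(history: List[ExerciseHistory], k=3) -> List[str]:
    # Toy design choice: prioritise exercises with room for improvement (low score first).
    ranked = sorted(history, key=score)
    return [ex.name for ex in ranked[:k]]
```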

5.
Comput Intell Neurosci ; 2019: 1431509, 2019.
Article in English | MEDLINE | ID: mdl-31281333

ABSTRACT

Rehabilitation is essential for disabled people to achieve the highest level of functional independence, reducing or preventing impairments. Nonetheless, this process can be long and expensive. This fact, together with the ageing phenomenon, has become a critical issue for both clinicians and patients. In this sense, technological solutions may be beneficial since they reduce costs and increase the number of patients per caregiver, which makes them more accessible. In addition, they provide access to rehabilitation services for those facing physical, financial, and/or attitudinal barriers. This paper presents the state of the art of assistive rehabilitation technologies for different recovery methods, ranging from in-person sessions to complementary at-home activities.


Assuntos
Pessoas com Deficiência/reabilitação , Recuperação de Função Fisiológica , Reabilitação , Tecnologia Assistiva , Humanos , Sala de Recuperação , Reabilitação/instrumentação , Reabilitação/métodos
6.
Sensors (Basel) ; 19(7)2019 Apr 06.
Article in English | MEDLINE | ID: mdl-30959920

ABSTRACT

Advances in robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation, where the detect-approach-grasp loop requires a robust recovery stage, especially when the held object slides. Several proprioceptive sensors that can be used for that purpose, such as tactile sensors or contact switches, have been developed in recent decades; nevertheless, their implementation may considerably restrict the gripper's flexibility and functionality, increasing its cost and complexity. Alternatively, vision, and in particular depth vision sensors, can be used, since it is an undoubtedly rich source of information. We present an approach based on depth cameras to robustly evaluate manipulation success, continuously reporting any object loss and, consequently, allowing the robot to robustly recover from this situation. For that, a Lab-colour segmentation allows the robot to identify potential robot manipulators in the image. Then, the depth information is used to detect any edge resulting from two-object contact. The combination of these techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach.
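An illustrative sketch of the two cues combined above: a Lab-colour segmentation to find the gripper in the RGB frame, and a depth-discontinuity check inside that region to decide whether an object is still in contact. The colour bounds and thresholds are placeholders, not the calibrated values of the system.

```python
import cv2
import numpy as np

def gripper_mask(bgr, lab_lo=(0, 110, 110), lab_hi=(255, 145, 145)):
    # Segment the (assumed) neutral-coloured gripper in CIELab space.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    return cv2.inRange(lab, np.array(lab_lo, np.uint8), np.array(lab_hi, np.uint8))

def contact_detected(depth_m, mask, edge_thresh=0.01, min_edge_pixels=50):
    # Depth gradients inside the gripper region reveal a held object: two touching
    # surfaces produce a step in depth along their shared edge.
    d = depth_m.astype(np.float32)
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)
    edges = (np.hypot(gx, gy) > edge_thresh) & (mask > 0)
    return int(edges.sum()) >= min_edge_pixels  # True -> object still held
```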

7.
Comput Intell Neurosci ; 2018: 9179462, 2018.
Article in English | MEDLINE | ID: mdl-30210534

ABSTRACT

To build autonomous service robots, reasoning, perception, and action should be properly integrated. In this paper, the depth cue is analysed as an early stage, given its importance for robotic tasks. Drawing on neuroscience findings, a hierarchical four-level dorsal architecture has been designed and implemented. Mainly, from a stereo image pair, a set of complex Gabor filters is applied to estimate an egocentric quantitative disparity map. This map leads to a quantitative depth scene representation that provides the raw input for a qualitative approach. The reasoning method then infers the data required to make the right decision at any time. As will be shown, the experimental results highlight the robust performance of the biologically inspired approach presented in this paper.
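A compact sketch of phase-based disparity from a single complex Gabor channel: filter the left and right images with a horizontal quadrature (complex) Gabor kernel, take the phase difference, and divide by the filter's spatial frequency. Real systems, including the architecture described above, pool several orientations and scales; one channel is used here only to show the principle, and the parameter values are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def complex_gabor_1d(sigma=4.0, freq=0.25, size=21):
    # Gaussian envelope times a complex exponential (quadrature pair in one kernel).
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)

def disparity_map(left, right, sigma=4.0, freq=0.25):
    # Filter along rows only, since binocular disparity is horizontal.
    k = complex_gabor_1d(sigma, freq)[np.newaxis, :]
    rl = fftconvolve(left.astype(float), k, mode="same")
    rr = fftconvolve(right.astype(float), k, mode="same")
    # Phase difference wrapped to (-pi, pi], converted to pixels of disparity.
    dphi = np.angle(rl * np.conj(rr))
    return dphi / (2 * np.pi * freq)
```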


Subjects
Depth Perception, Computer-Assisted Image Processing/methods, Automated Pattern Recognition/methods, Robotics, Humans, Biological Models, Proof of Concept Study
8.
Sensors (Basel) ; 18(8)2018 Aug 11.
Article in English | MEDLINE | ID: mdl-30103492

ABSTRACT

The great demographic change leading to an ageing society demands technological solutions to satisfy the increasingly varied needs of the elderly. This paper presents PHAROS, an interactive robot system that recommends and monitors physical exercises designed for the elderly. The aim of PHAROS is to be a friendly elderly companion that periodically suggests personalised physical activities, promoting healthy living and active ageing. Here, the PHAROS architecture, components, and experimental results are presented. The architecture has three main strands: a Pepper robot, which interacts with the users and records their exercise performance; the Human Exercise Recognition, which uses the information recorded by Pepper to classify the performed exercise using deep learning methods; and the Recommender, a smart decision-maker that periodically schedules personalised physical exercises in the users' agenda. The experimental results show a high accuracy in detecting and classifying the physical exercises (97.35%) performed by 7 persons. Furthermore, we have implemented a novel procedure for rating exercises in the recommendation algorithm. It closely follows the users' health status (poor performance may reveal health problems) and adapts the suggestions to it. The history may be used to assess the physical condition of the user, revealing underlying problems that may be impossible to see otherwise.
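An illustrative sketch of the rating idea described above: store each session's exercise score and flag a sustained drop, since declining performance may indicate an underlying health problem. The rolling-window size and alert threshold are assumptions for illustration, not PHAROS parameters.

```python
from collections import defaultdict, deque

class ExerciseRatings:
    def __init__(self, window=5, drop_alert=0.2):
        self.window = window
        self.drop_alert = drop_alert
        self.history = defaultdict(lambda: deque(maxlen=window))

    def add_session(self, exercise: str, score: float) -> bool:
        """Record a 0..1 performance score; return True if a decline alert fires."""
        h = self.history[exercise]
        h.append(score)
        if len(h) < self.window:
            return False  # not enough sessions yet to judge a trend
        # Compare the oldest and newest scores in the rolling window.
        return (h[0] - h[-1]) >= self.drop_alert
```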

9.
Comput Intell Neurosci ; 2018: 4350272, 2018.
Article in English | MEDLINE | ID: mdl-30687398

ABSTRACT

The accelerated growth in the percentage of elderly people and of persons with brain injury-related conditions or intellectual disabilities is one of the main concerns of developed countries. These persons often require special care and even almost permanent caregivers who help them carry out daily tasks. With this issue in mind, we propose an automated schedule system deployed on a social robot. The robot keeps track of the tasks that the patient has to fulfill on a daily basis. When a task is triggered, the robot guides the patient through its completion. The system is also able to detect whether the steps are being properly carried out, issuing alerts when they are not. To do so, an ensemble of deep learning techniques is used. The schedule is customizable by the carers and authorized relatives. Our system could enhance the quality of life of the patients and improve their self-autonomy. The experimentation, which was supervised by the ADACEA foundation, validates the achievement of these goals.
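A minimal sketch of the scheduling and monitoring loop described above: a daily task is a list of steps, each checked by a pluggable verifier (in the paper, an ensemble of deep-learning detectors), and failed steps raise an alert. The verifier here is a stub and all names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    description: str
    verify: Callable[[], bool]   # e.g. a vision model checking the step was done

@dataclass
class Task:
    name: str
    time: str                    # "HH:MM", when the robot should trigger the task
    steps: List[Step]

def run_task(task: Task, alert: Callable[[str], None]) -> None:
    print(f"Starting task: {task.name}")
    for step in task.steps:
        print(f"Guiding patient: {step.description}")
        if not step.verify():
            alert(f"Step not completed correctly: {step.description}")

# Example with stub verifiers that always succeed.
if __name__ == "__main__":
    brush = Task("Brush teeth", "08:00",
                 [Step("Pick up toothbrush", lambda: True),
                  Step("Apply toothpaste", lambda: True)])
    run_task(brush, alert=lambda msg: print("ALERT:", msg))
```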


Subjects
Brain Injuries/physiopathology, Cognitive Dysfunction/physiopathology, Intelligence/physiology, Robotics, Aging/physiology, Brain/physiology, Humans, Quality of Life
10.
ScientificWorldJournal ; 2014: 179391, 2014.
Article in English | MEDLINE | ID: mdl-24672295

ABSTRACT

Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
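A sketch of the coordinate chain described above: image (egocentric) measurements give disparity, which is triangulated back into egocentric 3D coordinates that a reaching controller can use. A simple parallel-camera pinhole model with illustrative intrinsics is assumed here, whereas the paper works with vergent cameras.

```python
import numpy as np

def disparity_to_xyz(u, v, disparity, fx=500.0, fy=500.0,
                     cx=320.0, cy=240.0, baseline=0.06):
    """Pixel (u, v) plus disparity (pixels, nonzero) -> 3D point in the left-camera frame (metres)."""
    z = fx * baseline / disparity          # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a target with 30 px of disparity near the image centre.
target = disparity_to_xyz(330.0, 250.0, 30.0)
```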


Subjects
Ocular Fixation, Binocular Vision, Robotics