Results 1 - 4 of 4
1.
IEEE Int Conf Rehabil Robot; 2022: 1-6, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36176074

ABSTRACT

Recent decades have seen great innovation in computer vision, and the field has become fundamental to the development of autonomous navigation systems. Modern assistive technologies, such as smart wheelchairs, could employ autonomous navigation to assist users during operation. A prerequisite for such systems is recognising the navigable space in real time. The present research features an off-the-shelf powered wheelchair customised into an intelligent robot that perceives its environment using Point Cloud Semantic Segmentation (PCSS). The implemented algorithm distinguishes, in real time, between two labelled classes: traversable and non-traversable space. Detection accuracy was 99.64% for traversable space and 91.79% for non-traversable space. The method's performance was invariant to changes in wheelchair velocity, indicating that the latency of the algorithm is within tolerable limits for real-time operation.
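[Editor's note] The paper's code is not reproduced here. As an illustration of the two-class formulation the abstract describes, the following minimal Python sketch assigns traversable/non-traversable labels from per-point scores; the `segmentation_model` callable and the 0.5 threshold are assumptions standing in for the authors' PCSS network, not their implementation.

```python
# Hypothetical sketch of the two-class traversability labelling described
# in the abstract. `segmentation_model` stands in for any point cloud
# semantic segmentation (PCSS) network; it is NOT the authors' model.
import numpy as np

TRAVERSABLE, NON_TRAVERSABLE = 0, 1

def label_point_cloud(points: np.ndarray, segmentation_model) -> np.ndarray:
    """Assign one of the two classes to every 3D point.

    points: (N, 3) array of x, y, z coordinates from the wheelchair's sensor.
    segmentation_model: callable returning an (N,) array of traversability
    scores in [0, 1] (an assumption; the paper's model is not public).
    """
    scores = segmentation_model(points)  # per-point probability of being traversable
    return np.where(scores >= 0.5, TRAVERSABLE, NON_TRAVERSABLE)

if __name__ == "__main__":
    # Dummy model: treat everything below 0.1 m in z as traversable ground.
    cloud = np.random.rand(1000, 3)
    dummy_model = lambda pts: (pts[:, 2] < 0.1).astype(float)
    labels = label_point_cloud(cloud, dummy_model)
    print(np.bincount(labels, minlength=2))  # counts per class
```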


Subject(s)
Self-Help Devices , Wheelchairs , Algorithms , Equipment Design , Humans , Semantics
2.
J Neuroeng Rehabil; 19(1): 69, 2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35790978

ABSTRACT

BACKGROUND: Brain-computer interfaces (BCIs) are systems capable of translating human brain patterns, measured through electroencephalography (EEG), into commands for an external device. Despite great advances in machine learning solutions to enhance the performance of BCI decoders, the translational impact of this technology remains elusive. The reliability of BCIs is often unsatisfactory for end-users, limiting their application outside a laboratory environment. METHODS: We present an analysis of the data acquired from an end-user during the preparation for two Cybathlon competitions, where our pilot won the gold medal twice in a row. These data are of particular interest given the mutual learning approach adopted during the longitudinal training phase (8 months), the long training break between the two events (1 year), and the demanding evaluation scenario. We propose a multifaceted perspective on long-term user learning: the information gathered through conventional metrics (e.g., accuracy, application performance) is enriched by investigating novel neural correlates of learning in different neural domains. RESULTS: First, we showed that by focusing the training on user learning, the pilot was able to significantly improve his performance over time, even with infrequent decoder re-calibrations. Second, we revealed that analysing the within-class modifications of the pilot's neural patterns in the Riemannian domain is more effective than conventional metrics in tracking the acquisition and stabilization of BCI skills, especially after the 1-year break. These results further confirmed the central role of mutual learning in the acquisition of BCI skills and highlighted user learning as key to enhancing BCI reliability. CONCLUSION: We firmly believe that this work may open new perspectives and fuel discussion in the BCI field, shifting the focus of future research not only to the machine learning of the decoder but also to novel training procedures that boost user learning and the long-term stability of BCI skills. To this end, the proposed analyses and metrics could be used to monitor user learning during training and to provide a marker guiding decoder re-calibration, maximizing the mutual adaptation between the user and the BCI system.
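[Editor's note] Riemannian-domain analyses of EEG typically rest on the affine-invariant distance between spatial covariance matrices of trials. The sketch below shows that standard metric, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F; it illustrates the general technique, not necessarily the authors' exact pipeline.

```python
# Sketch of comparing EEG trials in the Riemannian domain via the standard
# affine-invariant distance between symmetric positive-definite (SPD)
# spatial covariance matrices. Illustrative only; not the paper's pipeline.
import numpy as np
from scipy.linalg import sqrtm, logm

def trial_covariance(trial: np.ndarray) -> np.ndarray:
    """Spatial covariance of one EEG trial (channels x samples)."""
    trial = trial - trial.mean(axis=1, keepdims=True)
    return trial @ trial.T / (trial.shape[1] - 1)

def riemannian_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Affine-invariant distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    a_inv_sqrt = np.real(np.linalg.inv(sqrtm(a)))   # real part: SPD input
    middle = a_inv_sqrt @ b @ a_inv_sqrt
    return float(np.linalg.norm(np.real(logm(middle)), "fro"))

# Example: distance between covariances of two simulated 16-channel trials.
rng = np.random.default_rng(0)
t1 = rng.standard_normal((16, 512))
t2 = rng.standard_normal((16, 512))
print(riemannian_distance(trial_covariance(t1), trial_covariance(t2)))
```

Tracking this distance within a class over training sessions is one way such a marker of skill acquisition could be computed.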


Subject(s)
Brain-Computer Interfaces , Brain , Electroencephalography/methods , Humans , Machine Learning , Reproducibility of Results
3.
Front Neurorobot; 16: 886050, 2022.
Article in English | MEDLINE | ID: mdl-35619967

ABSTRACT

The growing interest in neurorobotics has led to a proliferation of heterogeneous neurophysiology-based applications controlling a variety of robotic devices. Although recent years have seen great advances in this technology, the integration between human neural interfaces and robotics is still limited, making evident the need for a standardized research framework that bridges the gap between neuroscience and robotics. This perspective paper presents ROS-Neuro, an open-source framework for neurorobotic applications based on the Robot Operating System (ROS). ROS-Neuro aims to facilitate software distribution and the repeatability of experimental results, and to support the birth of a new community focused on neuro-driven robotics. In addition, exploiting the ROS infrastructure guarantees stability, reliability, and robustness, fundamental aspects for enhancing the translational impact of this technology. We suggest that ROS-Neuro might become the development platform for a new generation of neurorobots that promote the rehabilitation, inclusion, and independence of people with disabilities in their everyday life.
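[Editor's note] ROS-Neuro defines its own nodes and message types, which are not reproduced here. As a rough illustration of the ROS publish/subscribe pattern such a framework builds on, here is a plain-rospy sketch; the topic name, frame rate, and payload layout are hypothetical examples.

```python
# Illustrative sketch of the ROS publish/subscribe pattern that a
# neurorobotics framework like ROS-Neuro builds on. The topic name and
# payload layout are hypothetical, not ROS-Neuro's actual definitions.
import rospy
from std_msgs.msg import Float32MultiArray

def acquire_frame():
    """Stand-in for reading one buffer of EEG samples from the amplifier."""
    return [0.0] * 16

def stream_neural_data():
    rospy.init_node("neuro_acquisition_sketch")
    # Hypothetical topic: a real ROS-Neuro setup publishes framed neural
    # data with dedicated message types rather than a generic float array.
    pub = rospy.Publisher("/neurodata", Float32MultiArray, queue_size=10)
    rate = rospy.Rate(16)  # e.g., one frame every 62.5 ms
    while not rospy.is_shutdown():
        msg = Float32MultiArray()
        msg.data = acquire_frame()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        stream_neural_data()
    except rospy.ROSInterruptException:
        pass
```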

4.
Sensors (Basel); 22(9), 2022 Apr 28.
Article in English | MEDLINE | ID: mdl-35591057

ABSTRACT

The development of a Social Intelligence System based on artificial intelligence is one of the cutting-edge technologies in Assistive Robotics. Such systems need to create an empathic interaction with their users; they therefore must include an Emotion Recognition (ER) framework that runs in near real time alongside several other intelligent services. Most low-cost commercial robots, however, although more accessible to users and healthcare facilities, have to balance cost and effectiveness, resulting in under-performing hardware in terms of memory and processing power. This makes system design challenging, requiring a trade-off between the accuracy and the complexity of the adopted models. This paper proposes a compact and robust service for Assistive Robotics, called Lightweight EMotion recognitiON (LEMON), which uses image processing, Computer Vision, and Deep Learning (DL) algorithms to recognize facial expressions. Specifically, the proposed DL model is based on Residual Convolutional Neural Networks combining Dilated and Standard Convolution Layers. A first notable result is the small number of parameters (1.6 million) characterizing our model. In addition, Dilated Convolutions expand receptive fields exponentially while preserving resolution and reducing computation and memory cost, allowing the model to distinguish among facial expressions by capturing the displacement of pixels. Finally, to mitigate the dying-ReLU problem and improve the stability of the model, we apply an Exponential Linear Unit (ELU) activation function in the initial layers. We trained and evaluated the model (via one- and five-fold cross-validation) on five datasets available in the community and on one mixed dataset created by taking samples from all of them. Compared with other approaches, our model achieves comparable results with a significant reduction in the number of parameters.
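[Editor's note] The paper's exact architecture is not reproduced here. The PyTorch sketch below shows the kind of building block the abstract describes: a residual block combining a standard and a dilated convolution, with ELU activation. Channel counts, dilation rate, and input size are illustrative assumptions.

```python
# Sketch of the building block the abstract describes: a residual block
# combining a standard and a dilated convolution, with ELU activation to
# mitigate dying ReLUs. Sizes are illustrative, not the paper's config.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.standard = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Dilated convolution: padding = dilation keeps the spatial size,
        # while the receptive field grows with the dilation rate.
        self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ELU()  # ELU in early layers, per the abstract

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.standard(x)))
        out = self.bn2(self.dilated(out))
        return self.act(out + x)  # residual connection

# Example: one block applied to a batch of 48x48 feature maps.
block = DilatedResidualBlock(channels=32)
x = torch.randn(8, 32, 48, 48)
print(block(x).shape)  # torch.Size([8, 32, 48, 48])
```

Stacking a handful of such blocks with modest channel widths is one plausible way to stay near the 1.6-million-parameter budget the abstract reports.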


Subject(s)
Facial Recognition , Robotics , Artificial Intelligence , Facial Expression , Neural Networks, Computer