Results 1 - 4 of 4
1.
Sensors (Basel) ; 23(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420731

ABSTRACT

In orientation and mobility (O&M) rehabilitation for visually impaired people (VIP), the measurement of spatio-temporal gait and postural parameters is of particular interest to rehabilitators for assessing performance and improvement in independent mobility. In current rehabilitation practice worldwide, this assessment is carried out by visual estimation. The objective of this research was to propose a simple architecture based on wearable inertial sensors for quantitative estimation of distance traveled, step detection, gait velocity, step length, and postural stability. These parameters were calculated from absolute orientation angles. Two different sensing architectures were tested for gait according to a selected biomechanical model. The validation tests comprised five different walking tasks. Nine visually impaired volunteers took part in real-time acquisitions, walking indoor and outdoor distances at different gait velocities in their residences. The ground-truth gait characteristics of the volunteers in the five walking tasks and an assessment of their natural posture during the tasks are also presented in this article. One of the proposed methods was selected for presenting the lowest absolute error of the calculated parameters across all traveling experiments: 45 walking tasks between 7 and 45 m, representing a total of 1039 m walked and 2068 steps. The absolute error in step length was 4.6 ± 6.7 cm against a mean step length of 56 cm (SD 11.59), with a relative error of 1.5 ± 1.6 in step count; these errors affected the distance-traveled and gait-velocity measurements, which presented absolute errors of 1.78 ± 1.80 m and 7.1 ± 7.2 cm/s, respectively.
The results suggest that the proposed method and its architecture could serve as an assistive-technology tool for O&M training, assessing gait parameters and/or navigation, and that a sensor placed on the dorsal area is sufficient to detect noticeable postural changes that compromise heading, inclination, and balance during walking tasks.
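The gait pipeline summarized above (detect steps, then derive distance and velocity from them) can be illustrated with a generic threshold-crossing step detector on the acceleration magnitude. This is only a sketch under assumed parameters (threshold, refractory period, sampling rate); the authors' actual method is based on absolute orientation angles.

```python
import numpy as np

def count_steps(acc_magnitude, fs, threshold=1.2, min_interval=0.35):
    """Count steps as upward threshold crossings of the acceleration
    magnitude (in g), with a refractory period so a single heel strike
    is not counted twice."""
    min_gap = int(min_interval * fs)  # refractory period in samples
    count, last = 0, -10**9
    for i in range(1, len(acc_magnitude)):
        crossed = acc_magnitude[i - 1] < threshold <= acc_magnitude[i]
        if crossed and i - last >= min_gap:
            count += 1
            last = i
    return count

# Synthetic walk: 1 step/s oscillation on top of gravity (1 g), 100 Hz
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
mag = 1.0 + 0.5 * np.sin(2.0 * np.pi * 1.0 * t)
steps = count_steps(mag, fs)
print(steps)  # 10 steps over 10 s
```

Given a step count and a mean step length, distance traveled and gait velocity follow directly (distance = steps x step length; velocity = distance / duration), which is why step-count and step-length errors propagate into those two measurements in the abstract.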


Subjects
Gait, Wearable Electronic Devices, Humans, Walking, Volunteers, Posture
2.
PeerJ Comput Sci ; 8: e1052, 2022.
Article in English | MEDLINE | ID: mdl-36091986

ABSTRACT

Deep learning (DL) models are very useful for human activity recognition (HAR); among other advantages, they achieve better accuracy for HAR than traditional methods. DL learns from unlabeled data and extracts features from raw data, as in the case of time-series acceleration. Sliding windowing is a feature extraction technique; when used to preprocess time-series data, it improves accuracy, latency, and processing cost. The time and cost savings of preprocessing are greatest when the window size is small, but how small can the window be while maintaining good accuracy? The objective of this research was to analyze the performance of four DL models: a simple deep neural network (DNN), a convolutional neural network (CNN), a long short-term memory network (LSTM), and a hybrid model (CNN-LSTM), while varying the sliding window size using fixed overlapped windows, in order to identify an optimal window size for HAR. We compare the effects on two acceleration sources: wearable inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. Short sliding windows of 5, 10, 15, 20, and 25 frames were compared to long ones of 50, 75, 100, and 200 frames. The models were fed raw acceleration data acquired under experimental conditions for three activities: walking, sit-to-stand, and squatting. Results show that the optimal window is 20-25 frames (0.20-0.25 s) for both sources, providing an accuracy of 99.07% and an F1-score of 87.08% with the CNN-LSTM on the wearable sensor data, and an accuracy of 98.8% and an F1-score of 82.80% on the MOCAP data; similarly accurate results were obtained with the LSTM model. There is almost no difference in accuracy for larger windows (100, 200 frames), whereas smaller windows show a decrease in F1-score. Regarding inference time, data with a sliding window of 20 frames can be preprocessed around 4x (LSTM) and 2x (CNN-LSTM) faster than data using 100 frames.
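The fixed-overlap sliding-window segmentation described above can be sketched as follows; the window size and overlap values here are illustrative (the study varies the window from 5 to 200 frames), and the 50% overlap is an assumption.

```python
import numpy as np

def sliding_windows(signal, window_size, overlap=0.5):
    """Segment a (n_samples, n_channels) time series into fixed-size
    overlapped windows, as used to feed HAR models."""
    step = max(1, int(window_size * (1.0 - overlap)))
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

# 1000 frames of 3-axis acceleration, 20-frame windows (0.20 s at 100 Hz)
acc = np.random.randn(1000, 3)
w = sliding_windows(acc, window_size=20, overlap=0.5)
print(w.shape)  # (99, 20, 3): 99 windows of 20 frames x 3 axes
```

Each window then becomes one input example for the DNN/CNN/LSTM models; a smaller window means less data per inference, which is the source of the preprocessing speedups reported above.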

3.
Entropy (Basel) ; 23(7)2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34356390

ABSTRACT

The rehabilitation of a visually impaired person (VIP) is a systematic process in which the person is provided with tools for coping with the impairment and achieving personal autonomy and independence, such as training in the use of the long cane for orientation and mobility (O&M). This process must be conducted in person by specialists, which strains human, technological, and structural resources in some regions, especially those in economically constrained circumstances. A system was developed to obtain information about the motion of the long cane and the leg using low-cost inertial sensors, providing an overview of quantitative parameters such as sweeping coverage and gait analysis that are currently assessed visually during rehabilitation. The system was tested with 10 blindfolded volunteers under laboratory conditions following the constant-contact, two-point-touch, and three-point-touch travel techniques. The results indicate that the quantification system is reliable for measuring grip rotation, safety zone, sweeping amplitude, and hand position from orientation angles, with an accuracy of around 97.62%. However, a new method or improved hardware must be developed to improve gait-parameter measurements, since step length was measured with a mean accuracy of only 94.62%. The system requires further development before it can be used as an aid in the VIP rehabilitation process; at present, it is a simple, low-cost technological aid with the potential to improve current O&M practice.
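As an illustration of how a sweeping amplitude could be derived from orientation angles, the sketch below measures the angular span between consecutive turning points of a yaw trace. The synthetic signal, sampling rate, and extrema detection are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sweep_amplitudes(yaw_deg):
    """Angular span (degrees) between consecutive turning points
    (local extrema) of a long-cane yaw-angle trace."""
    d = np.diff(yaw_deg)
    # turning points: samples where the sweep direction reverses
    turns = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
    return np.abs(np.diff(yaw_deg[turns]))

# Synthetic sweep: +/-30 degrees at 1 Hz, sampled at 40 Hz for 4 s
t = np.arange(0.0, 4.0, 0.025)
yaw = 30.0 * np.sin(2.0 * np.pi * t)
amps = sweep_amplitudes(yaw)
print(round(float(amps.mean()), 1))  # 60.0: full left-to-right span
```

Comparing each measured span against the shoulder width (the safety zone) is one way such a system can flag sweeps that leave part of the walking path uncovered.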

4.
Sensors (Basel) ; 21(14)2021 Jul 13.
Article in English | MEDLINE | ID: mdl-34300507

ABSTRACT

A diverse array of assistive technologies has been developed to help visually impaired people (VIP) face many basic daily autonomy challenges. Inertial measurement unit (IMU) sensors, in turn, have been used for navigation, guidance, and localization, and especially for full-body motion tracking, since their low cost and miniaturization have enabled the estimation of kinematic parameters and biomechanical analysis in different fields of application. The aim of this work was to present a comprehensive review of assistive technologies for VIP that include inertial sensors as input, reporting on the technical characteristics of the inertial sensors, the methodologies applied, and their specific role in each system. The results show that there are only a few inertial-sensor-based systems; however, these sensors provide essential information when combined with optical sensors and radio signals for navigation and special application fields. The discussion covers new avenues of research, missing elements, and usability analysis, since one limitation evidenced in the selected articles is the lack of user-centered designs. Finally, regarding application fields, a gap exists in the literature on aids for rehabilitation and biomechanical analysis of VIP: most findings focus on navigation and obstacle detection, which should be considered for future applications.


Subjects
Assistive Technology, Visually Impaired Persons, Biomechanical Phenomena, Humans, Motion (Physics)