Results 1 - 4 of 4
1.
Biomed Eng Online; 23(1): 20, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38360664

ABSTRACT

Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale dataset with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet dataset. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as fast as 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded slower inference times of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of the numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform for developing and studying new deep learning models for visual perception of human-robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies.
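To make the design space above concrete, here is a minimal Keras sketch of a hybrid CNN + LSTM classifier of the kind compared in this work. It is illustrative only, not the released StairNet code: the MobileNetV2 backbone, the sequence length, and the four-class label set are assumptions made for the example.

```python
# Illustrative sketch only (not the StairNet release): a hybrid CNN + LSTM
# video classifier. Backbone, sequence length, and class count are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 8        # hypothetical number of consecutive frames per sample
NUM_CLASSES = 4    # hypothetical stair/transition label set

# Frozen ImageNet-pretrained CNN extracts one feature vector per frame.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet",
    input_shape=(224, 224, 3))
backbone.trainable = False

inputs = layers.Input(shape=(SEQ_LEN, 224, 224, 3))
x = layers.TimeDistributed(backbone)(inputs)   # (batch, SEQ_LEN, features)
x = layers.LSTM(128)(x)                        # temporal context across frames
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For the mobile deployments mentioned above, a model like this would typically be converted to TensorFlow Lite (tf.lite.TFLiteConverter.from_keras_model) before running on a phone's GPU or NPU accelerator.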


Subjects
Robotics, Humans, Locomotion, Walking, Algorithms, Leg
2.
IEEE Int Conf Rehabil Robot; 2022: 1-6, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36176138

ABSTRACT

Computer vision can be used in robotic exoskeleton control to improve transitions between different locomotion modes through the prediction of future environmental states. Here we present the development of a large-scale automated stair recognition system powered by convolutional neural networks to recognize indoor and outdoor real-world stair environments. Building on the ExoNet database, the largest and most diverse open-source dataset of wearable camera images of walking environments, we designed a new computer vision dataset, called StairNet, specifically for stair recognition, with over 515,000 images. We then developed and optimized an efficient deep learning model for automatic feature engineering and image classification. Our system accurately predicted complex stair environments with 98.4% classification accuracy. These promising results present an opportunity to increase the autonomy and safety of human-exoskeleton locomotion for real-world community mobility. Future work will explore mobile deployment of our automated stair recognition system for onboard real-time inference.
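As a hedged illustration of the training pipeline such a system implies (not the published StairNet configuration), the sketch below fine-tunes an ImageNet-pretrained backbone on a directory of labeled images. The directory layout, backbone choice, and hyperparameters are assumptions for this example.

```python
# Illustrative sketch only: transfer learning for single-image stair
# recognition. Assumes images sorted into one sub-directory per class,
# e.g. data/stairs/..., data/level_ground/... (hypothetical layout).
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42)
num_classes = len(train_ds.class_names)

base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet",
    input_shape=IMG_SIZE + (3,))
base.trainable = False  # train only the new classification head first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```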


Subjects
Deep Learning, Powered Exoskeleton, Computers, Humans, Neural Networks (Computer), Walking
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 4631-4635, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892246

ABSTRACT

Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
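The following sketch shows how EfficientNetB0, the architecture named above, can be instantiated and fine-tuned for environment classification with Keras. It is a sketch under stated assumptions, not the paper's training setup: the class count is a placeholder for the flattened hierarchical ExoNet labels, and the hyperparameters are illustrative.

```python
# Illustrative sketch only: fine-tuning EfficientNetB0 for environment
# classification. NUM_CLASSES is a placeholder, not the ExoNet label count.
import tensorflow as tf

NUM_CLASSES = 12  # placeholder for the flattened hierarchical label set

base = tf.keras.applications.EfficientNetB0(
    include_top=False, pooling="avg", weights="imagenet",
    input_shape=(224, 224, 3))

inputs = tf.keras.Input(shape=(224, 224, 3))
# Keras EfficientNet models normalize inputs internally, so raw
# [0, 255] pixel values can be fed directly.
x = base(inputs)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```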


Subjects
Deep Learning, Powered Exoskeleton, Robotic Surgical Procedures, Computers, Humans, Neural Networks (Computer)
4.
Front Neurorobot; 15: 730965, 2021.
Article in English | MEDLINE | ID: mdl-35185507

ABSTRACT

Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, thereby allowing for more accurate and robust high-level control decisions. In this study, we first reviewed the development of our "ExoNet" database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labeling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for image classification and automatic feature engineering, including EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Finally, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called "NetScore," which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference with mobile computing devices). Our comparative analyses showed that EfficientNetB0 achieved the highest test accuracy, VGG16 the fastest inference time, and MobileNetV2 the best NetScore; these results can inform architecture design or selection depending on the desired performance trade-offs. Overall, this study provides a large-scale benchmark and reference for next-generation environment classification systems for robotic leg prostheses and exoskeletons.
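NetScore is due to Wong (2018). Below is a small illustrative implementation: the exponents are the defaults proposed in that paper, while the unit conventions noted in the comments are common usage and should be checked against the original definition. The example values are hypothetical, not the paper's measured results.

```python
# Illustrative NetScore implementation (Wong, 2018): higher is better.
# Commonly reported with accuracy in percent, parameters in millions,
# and multiply-accumulate operations in billions; verify units against
# the original paper before comparing published scores.
import math

def netscore(accuracy_pct: float,
             params_millions: float,
             macs_billions: float,
             alpha: float = 2.0,
             beta: float = 0.5,
             gamma: float = 0.5) -> float:
    """Rewards accuracy; penalizes parameter count and compute."""
    return 20.0 * math.log10(
        accuracy_pct ** alpha
        / (params_millions ** beta * macs_billions ** gamma))

# Hypothetical example values for a small CNN:
print(netscore(accuracy_pct=73.0, params_millions=5.3, macs_billions=0.39))
```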
