1. Article in English | MEDLINE | ID: mdl-38015662

ABSTRACT

Virtual environments provide a safe and accessible way to test innovative technologies for controlling wearable robotic devices. However, to simulate devices that support walking, such as powered prosthetic legs, it is not enough to model the hardware without its user. Predictive locomotion synthesizers can generate the movements of a virtual user, with whom the simulated device can be trained or evaluated. We implemented a Deep Reinforcement Learning-based motion controller in the MuJoCo physics engine, in which autonomy over the humanoid model was shared between the simulated user and the control policy of an active prosthesis. Although the controller was not optimised to match experimental dynamics, the agent produced realistic torque profiles and ground reaction force curves. A data-driven, continuous representation of user intent was used to simulate a Human-Machine Interface that controlled a transtibial prosthesis in a non-steady-state walking setting. The continuous intent representation was shown to mitigate the need for compensatory gait patterns from the virtual user and to halve the rate of tripping. Co-adaptation was identified as a potential challenge for training human-in-the-loop prosthesis control policies. The proposed framework outlines a way to explore the complex design space of robot-assisted gait, promoting the transfer of the next generation of intent-driven controllers from the lab to real-life scenarios.
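
As a rough illustration of the shared-autonomy idea summarized above, the sketch below splits MuJoCo's actuator vector between a virtual-user policy and a prosthesis policy using the standard MuJoCo Python bindings. The model file name, actuator layout, intent signal, and both policy stubs are hypothetical placeholders, not the authors' implementation.

import numpy as np
import mujoco

# Hypothetical humanoid-plus-prosthesis model file (not provided by the paper).
model = mujoco.MjModel.from_xml_path("humanoid_with_prosthesis.xml")
data = mujoco.MjData(model)

# Assume the last actuator drives the prosthetic ankle; the rest belong to the virtual user.
PROSTHESIS_ACT = [model.nu - 1]
USER_ACT = [i for i in range(model.nu) if i not in PROSTHESIS_ACT]

def user_policy(obs):
    # Stand-in for a trained DRL locomotion policy that animates the virtual user.
    return np.zeros(len(USER_ACT))

def prosthesis_policy(intent, ankle_pos, ankle_vel):
    # Stand-in for an intent-driven prosthesis controller.
    return np.zeros(len(PROSTHESIS_ACT))

for _ in range(1000):
    obs = np.concatenate([data.qpos, data.qvel])
    intent = obs[:3]  # placeholder for a continuous, data-driven intent signal
    data.ctrl[USER_ACT] = user_policy(obs)  # virtual user actuates the biological joints
    data.ctrl[PROSTHESIS_ACT] = prosthesis_policy(intent, data.qpos[-1], data.qvel[-1])  # device actuates the ankle
    mujoco.mj_step(model, data)  # advance the physics simulation one timestep

The split of data.ctrl is only meant to convey how control authority over a single humanoid model can be shared between two policies; the paper's actual training setup, observation space, and reward design are not reproduced here.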


Subjects
Ankle, Artificial Limbs, Humans, Ankle Joint, Locomotion, Walking, Gait, Biomechanical Phenomena