Results 1 - 2 of 2
1.
J Neural Eng; 16(2): 026039, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30864550

ABSTRACT

OBJECTIVE: Currently, some 95,000 people in Europe suffer from upper-limb impairment. Rehabilitation should begin immediately after the impairment occurs and should be performed regularly thereafter. Moreover, the rehabilitation process should be tailored specifically to both the patient and the impairment. APPROACH: To address this, we have developed a low-cost solution that integrates an off-the-shelf virtual reality (VR) setup with our in-house arm/hand intent detection system. The resulting system, called VITA, enables an upper-limb disabled person to interact in a virtual world as if her impaired limb were still functional. VITA provides two features we deem essential: proportional force control and interactivity between the user and the intent detection core. The use of relatively cheap commercial components allows VITA to be deployed in rehabilitation centers, hospitals, or even at home. Applications of VITA range from rehabilitation of patients with musculodegenerative conditions (e.g. ALS) to treating phantom-limb pain in people with limb loss and prosthetic training. MAIN RESULTS: We present a multifunctional system for upper-limb rehabilitation in VR. We tested the system using a VR implementation of a standard hand assessment tool, the Box and Block Test, and performed a user study on this test with both intact subjects and a prosthetic user. Furthermore, we present additional applications that show the versatility of the system. SIGNIFICANCE: VITA demonstrates how our experience in intent detection can be combined with a state-of-the-art VR system for rehabilitation. With VITA, we have an easily adaptable experimental tool that allows us to quickly and realistically simulate all kinds of real-world problems and rehabilitation exercises for upper-limb impaired patients. Additionally, other scenarios, such as prosthesis simulations and control modes, can be quickly implemented and tested.
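The abstract highlights proportional force control as one of VITA's essential features. A minimal sketch of what such a mapping could look like is shown below; the function name, the dead-zone threshold, and the maximum-force value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def proportional_grip_force(intent, f_max=40.0, threshold=0.1):
    """Map a normalized intent signal in [0, 1] to a virtual grip force (N).

    Below `threshold` the hand is treated as at rest (dead zone to reject
    sensor noise); above it, force scales linearly up to `f_max`. All
    parameter values here are hypothetical placeholders.
    """
    intent = float(np.clip(intent, 0.0, 1.0))
    if intent < threshold:
        return 0.0
    return f_max * (intent - threshold) / (1.0 - threshold)
```

A dead zone plus linear scaling is a common pattern in proportional myocontrol; the actual VITA control law may differ.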


Subjects
Amputees/rehabilitation, Forearm/physiology, Neurological Rehabilitation/methods, Prostheses and Implants, Virtual Reality Exposure Therapy/methods, Adult, Electromyography/methods, Female, Humans, Male, Neurological Rehabilitation/instrumentation, Phantom Limb/physiopathology, Phantom Limb/rehabilitation, Recovery of Function/physiology, Stroke Rehabilitation/methods, Upper Extremity/physiology, Virtual Reality Exposure Therapy/instrumentation
2.
Front Neurorobot; 10: 3, 2016.
Article in English | MEDLINE | ID: mdl-27148039

ABSTRACT

One of the crucial problems in the assistive/rehabilitation robotics community today is that of automatically detecting what a disabled subject (for instance, a hand amputee) wants to do, exactly when she wants to do it, and strictly for as long as she wants to do it. This problem, commonly called "intent detection," has traditionally been tackled using surface electromyography, a technique that suffers from a number of drawbacks, including changes in the signal induced by sweat and muscle fatigue. With the advent of realistic, physically plausible augmented- and virtual-reality environments for rehabilitation, this approach no longer suffices. In this paper, we explore a novel method to solve the problem, which we call Optical Myography (OMG). The idea is to visually inspect the human forearm (or stump) to reconstruct which fingers are moving and to what extent. In a psychophysical experiment involving ten intact subjects, we used visual fiducial markers (AprilTags) and a standard web camera to track the deformations of the surface of the forearm, which were then mapped to the intended finger motions. A visual stimulus served as ground truth, avoiding the need for finger sensors (force/position sensors, data gloves, etc.). Two machine-learning approaches, a linear and a non-linear one, were comparatively tested in settings of increasing realism. The results indicate an average error in the range of 0.05-0.22 (root-mean-square error normalized over the signal range), in line with results obtained with more mature techniques such as electromyography. If further validated at scale, this approach could lead to vision-based intent detection for amputees, with the main application of letting such disabled persons interact dexterously and reliably in an augmented-/virtual-reality setup.
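The pipeline described above (marker displacements regressed onto finger motions, evaluated with range-normalized RMSE) can be sketched with a linear least-squares fit on synthetic data. Everything below is an illustrative assumption: the marker count, the target dimensionality, and the random data all stand in for the paper's actual recordings; only the error metric mirrors the one quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for tracked AprilTag data: each sample is a flattened
# vector of 2-D marker displacements on the forearm surface (8 markers x 2
# coordinates); each target is a vector of intended motions for 5 fingers.
n_samples, n_features, n_fingers = 200, 16, 5
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_fingers))
Y = X @ W_true + rng.normal(scale=0.05, size=(n_samples, n_fingers))

# Linear least-squares map from marker displacements to finger motions,
# with a bias column appended to the inputs.
X_b = np.hstack([X, np.ones((n_samples, 1))])
W, *_ = np.linalg.lstsq(X_b, Y, rcond=None)
Y_hat = X_b @ W

# Root-mean-square error normalized over the signal range -- the error
# metric reported in the abstract (0.05-0.22 for the real experiment).
rmse = np.sqrt(np.mean((Y - Y_hat) ** 2, axis=0))
nrmse = rmse / (Y.max(axis=0) - Y.min(axis=0))
```

On this noiseless-up-to-Gaussian synthetic data the linear fit is nearly exact, so the NRMSE here is far below the paper's range; the point is the shape of the computation, not the numbers.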
