Results 1 - 5 of 5
1.
Front Robot AI; 11: 1312554, 2024.
Article in English | MEDLINE | ID: mdl-38476118

ABSTRACT

Objective: For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to yielding poor inference outcomes due to motion artifacts, muscle fatigue, and other factors. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, etc. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities. Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and forearm EMG, each processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components. Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8%, relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually, resulting in an overall fusion accuracy of 95.3%. Conclusion: Our experimental data analyses demonstrate that EMG and visual evidence have complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.
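
A minimal sketch, in Python, of the kind of Bayesian evidence fusion this abstract describes: per-modality grasp-type posteriors from EMG and vision are combined under a conditional-independence assumption. The grasp label set, the uniform prior, and the example posteriors are illustrative assumptions, not the authors' actual models.

```python
# Sketch: fuse per-modality grasp-type posteriors p(grasp | evidence),
# assuming the modalities are conditionally independent given the grasp class.
import numpy as np

GRASP_TYPES = ["power", "precision", "lateral", "tripod", "spherical"]  # assumed label set

def fuse_posteriors(emg_post, vision_post, prior=None):
    """p(c | e_emg, e_vis) proportional to p(c | e_emg) * p(c | e_vis) / p(c)."""
    prior = np.full(len(GRASP_TYPES), 1.0 / len(GRASP_TYPES)) if prior is None else prior
    # Work in log space for numerical stability.
    log_fused = np.log(emg_post) + np.log(vision_post) - np.log(prior)
    log_fused -= log_fused.max()
    fused = np.exp(log_fused)
    return fused / fused.sum()

# Example: vision is confident, EMG is ambiguous; fusion follows the stronger evidence.
emg_post = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
vision_post = np.array([0.70, 0.10, 0.10, 0.05, 0.05])
print(dict(zip(GRASP_TYPES, fuse_posteriors(emg_post, vision_post).round(3))))
```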

2.
Front Neurosci; 16: 849991, 2022.
Article in English | MEDLINE | ID: mdl-35720725

ABSTRACT

Electromyography (EMG) data have been extensively adopted as an intuitive interface for instructing human-robot collaboration. A major challenge to the real-time detection of human grasp intent is the identification of dynamic EMG from hand movements. Previous studies predominantly implemented steady-state EMG classification with a small number of grasp patterns in dynamic situations, which is insufficient to generate differentiated control under the variation of muscular activity encountered in practice. In order to better detect dynamic movements, more EMG variability could be integrated into the model. However, only limited research has been conducted on the detection of dynamic grasp motions, and most existing assessments of non-static EMG classification either require supervised ground-truth timestamps of the movement status or contain only limited kinematic variation. In this study, we propose a framework for classifying dynamic EMG signals into gestures and examine the impact of different movement phases, using an unsupervised method to segment and label the action transitions. We collected and used data from large gesture vocabularies with multiple dynamic actions to encode the transitions from one grasp intent to another, based on natural sequences of human grasp movements. A classifier for identifying the gesture label was then constructed from the dynamic EMG signal, with no supervised annotation of kinematic movements required. Finally, we evaluated the performance of several training strategies using EMG data from different movement phases and explored the information revealed by each phase. All experiments were evaluated in a real-time manner, with performance transitions presented over time.
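
A minimal sketch of how a dynamic EMG stream could be segmented into movement phases without supervised timestamps, in the spirit of the unsupervised segmentation this abstract describes. The sliding-window RMS features, window sizes, channel count, and two-phase k-means clustering are illustrative assumptions, not the paper's actual method.

```python
# Sketch: unsupervised segmentation of dynamic EMG into "hold" vs. "transition" phases.
import numpy as np
from sklearn.cluster import KMeans

def window_rms(emg, win=200, step=50):
    """Sliding-window RMS per channel; emg has shape (samples, channels)."""
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        seg = emg[start:start + win]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.asarray(feats)

def segment_phases(emg, n_phases=2):
    """Cluster windows into phases without any supervised movement timestamps."""
    feats = window_rms(emg)
    labels = KMeans(n_clusters=n_phases, n_init=10, random_state=0).fit_predict(feats)
    return feats, labels

# Example with synthetic two-channel EMG: low activity, then a burst (a "transition").
rng = np.random.default_rng(0)
emg = rng.normal(0, 0.05, size=(4000, 2))
emg[1500:2500] += rng.normal(0, 0.5, size=(1000, 2))
_, phase_labels = segment_phases(emg)
print(phase_labels)
```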

3.
Intell Serv Robot; 13(1): 179-185, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33312264

ABSTRACT

Upper limb and hand functionality is critical to many activities of daily living, and its amputation can lead to significant loss of functionality for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, providing the robotic hand with perception of the operational context. Such multimodal sensor data may include environment sensors such as vision, as well as human physiology and behavior sensors such as electromyography (EMG) and inertial measurement units (IMUs). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to support prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, where computer vision methods will need to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from the human eye-view and the hand-view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU data from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and object on a table, multiple humans were asked to sort, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images in order to predict the eye-view labels assigned by humans.
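
A minimal sketch, in PyTorch, of a small convolutional network that maps hand-view images to human-assigned grasp-type labels, illustrating the kind of model the abstract's demonstration uses. The architecture, input resolution, and five-class output are illustrative assumptions, not the authors' actual network.

```python
# Sketch: CNN that predicts a grasp-type label from a hand-view image.
import torch
import torch.nn as nn

class HandViewGraspNet(nn.Module):
    def __init__(self, n_grasp_types=5):  # five grasp types, per the trial design
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_grasp_types)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a batch of hypothetical 128x128 hand-view images.
model = HandViewGraspNet()
logits = model(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 5])
```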

4.
Int IEEE EMBS Conf Neural Eng; 2019: 1097-1100, 2019 Mar.
Article in English | MEDLINE | ID: mdl-32818047

ABSTRACT

This study proposes a novel approach for evaluating the task invariance of muscle synergies, which is important for potential use in improving prosthetic hand control. We do this by using a transfer learning paradigm to test for invariance across a relatively small set of hand/forearm muscle synergies, derived from electromyographic (EMG) activation patterns during voluntary behaviors such as finger spelling, grasp-mimicking postures, and unconstrained exploration. EMG data for each task were decomposed using non-negative matrix factorization (NMF) into synergy and weight matrices, and cross-task weights for each task were then reconstructed using the basis matrices from the other tasks. Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers were used to classify the resulting weights in order to compare their performance, as well as their behavior as a function of synergy rank. Both algorithms showed robust and significantly higher performance, compared to two distinct randomized controls, with lower-rank EMG representations, both within and between tasks/postures, supporting the hypothesis of functional invariance of multi-muscle synergies. Our results suggest that this invariance could be leveraged to efficiently calibrate postures for prosthetic hand control by transferring learned EMG patterns from unconstrained movements to other tasks.
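
A minimal sketch of the synergy-transfer pipeline described above: decompose EMG from one task into synergies with NMF, re-express EMG from another task in that synergy basis, and classify the resulting weight vectors with an SVM. The data shapes, rank, labels, and the non-negative least-squares re-encoding are illustrative assumptions, not the study's actual protocol.

```python
# Sketch: NMF synergy extraction, cross-task weight reconstruction, SVM classification.
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_channels, rank, n_per_gesture = 8, 4, 100

# Synthetic non-negative EMG envelopes for two gestures in a "source" task.
gesture_templates = rng.random((2, n_channels))
source = np.vstack([g + 0.1 * rng.random((n_per_gesture, n_channels)) for g in gesture_templates])
labels = np.r_[np.zeros(n_per_gesture), np.ones(n_per_gesture)]

# Synergy basis (rank x channels) extracted from the source task.
nmf = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
nmf.fit(source)
synergies = nmf.components_

# EMG from a "target" task (same gestures, different noise), re-encoded in the
# source task's synergy basis via non-negative least squares.
target = np.vstack([g + 0.1 * rng.random((n_per_gesture, n_channels)) for g in gesture_templates])
cross_task_weights = np.array([nnls(synergies.T, x)[0] for x in target])

# Classify gesture labels from the cross-task weights; good accuracy would be
# consistent with the task-invariance hypothesis (toy data only).
source_weights = np.array([nnls(synergies.T, x)[0] for x in source])
clf = SVC().fit(source_weights, labels)
print("cross-task accuracy:", clf.score(cross_task_weights, labels))
```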

5.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 1964-1967, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440783

ABSTRACT

We present a novel context-aware hybrid brain-machine interface (hBMI) based on a hierarchical graphical model, using probabilistic fusion of electroencephalographic (EEG) and electromyographic (EMG) activity. Based on experimental data collected during stationary executions, and subsequent imageries, of five different hand gestures with both limbs, we demonstrate the feasibility of the proposed hBMI system through within-session and online across-session classification analyses. Furthermore, we investigate the extent of the model's context awareness with a simulated probabilistic approach and highlight potential implications of our work for the field of neurophysiologically driven robotic hand prosthetics.
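
A minimal sketch of context-aware probabilistic fusion of EEG and EMG gesture posteriors, in the spirit of the hierarchical model this abstract describes. The gesture set, the conditional-independence assumption, and the context prior are illustrative assumptions, not the authors' actual graphical model.

```python
# Sketch: combine EEG and EMG gesture posteriors under a context-dependent prior.
import numpy as np

GESTURES = ["rest", "fist", "open", "pinch", "point"]  # assumed label set

def context_aware_fusion(eeg_post, emg_post, context_prior):
    """Fuse posteriors assuming EEG and EMG are conditionally independent given
    the gesture, with the context acting as a prior over gestures."""
    fused = context_prior * eeg_post * emg_post
    return fused / fused.sum()

# Example: context (e.g., an object in view) favors a pinch; EEG and EMG are noisy.
context_prior = np.array([0.05, 0.15, 0.15, 0.50, 0.15])
eeg_post = np.array([0.20, 0.20, 0.20, 0.25, 0.15])
emg_post = np.array([0.10, 0.25, 0.20, 0.30, 0.15])
print(dict(zip(GESTURES, context_aware_fusion(eeg_post, emg_post, context_prior).round(3))))
```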


Subjects
Awareness, Brain-Computer Interfaces, Electroencephalography, Gestures, Robotics