Results 1 - 5 of 5
1.
Front Robot AI ; 11: 1312554, 2024.
Article in English | MEDLINE | ID: mdl-38476118

ABSTRACT

Objective: For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to poor inference outcomes due to motion artifacts, muscle fatigue, and other confounds. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is also susceptible to its own artifacts, most often due to object occlusion and lighting changes. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities. Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and forearm EMG processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we also developed novel data processing and augmentation techniques to train the neural network components. Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8%, relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually, resulting in an overall fusion accuracy of 95.3%. Conclusion: Our experimental data analyses demonstrate that EMG and visual evidence have complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.
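As a rough illustration of how per-modality classifier outputs can be combined under a Bayesian evidence-fusion scheme like the one described above, the following Python sketch fuses EMG and vision posterior probabilities under a conditional-independence assumption with a uniform prior. The grasp vocabulary, probability values, and the function name fuse_posteriors are illustrative stand-ins, not the paper's actual models or data.

import numpy as np

# Hypothetical grasp vocabulary (illustrative only).
GRASPS = ["power", "precision", "lateral", "tripod", "spherical"]

def fuse_posteriors(p_emg, p_vision, prior=None):
    """Naive-Bayes style fusion of two classifiers' posteriors.

    Assumes the EMG and vision evidence are conditionally independent
    given the grasp class; each argument is a probability vector over
    the same grasp vocabulary.
    """
    p_emg = np.asarray(p_emg, dtype=float)
    p_vision = np.asarray(p_vision, dtype=float)
    if prior is None:
        prior = np.full_like(p_emg, 1.0 / p_emg.size)
    # Posterior is proportional to prior * likelihood_emg * likelihood_vision;
    # dividing each per-modality posterior by the prior recovers scaled likelihoods.
    fused = prior * (p_emg / prior) * (p_vision / prior)
    return fused / fused.sum()

# Example: vision is confident but EMG mildly disagrees.
p_emg = [0.50, 0.20, 0.10, 0.10, 0.10]
p_vision = [0.15, 0.60, 0.10, 0.10, 0.05]
fused = fuse_posteriors(p_emg, p_vision)
print(GRASPS[int(np.argmax(fused))], np.round(fused, 3))

In this toy setup the fused vector simply rebalances the two modalities; in a time-resolved system the same fusion step would be applied at every instant of the reach.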

2.
Front Neurosci ; 16: 849991, 2022.
Article in English | MEDLINE | ID: mdl-35720725

ABSTRACT

Electromyography (EMG) data have been extensively adopted as an intuitive interface for instructing human-robot collaboration. A major challenge to the real-time detection of human grasp intent is the identification of dynamic EMG from hand movements. Previous studies predominantly performed steady-state EMG classification with a small number of grasp patterns in dynamic situations, which is insufficient to generate differentiated control under the variation of muscular activity encountered in practice. To better detect dynamic movements, more EMG variability could be integrated into the model. However, only limited research has been conducted on such detection of dynamic grasp motions, and most existing assessments of non-static EMG classification either require supervised ground-truth timestamps of the movement status or contain only limited kinematic variation. In this study, we propose a framework for classifying dynamic EMG signals into gestures and examine the impact of different movement phases, using an unsupervised method to segment and label the action transitions. We collected and used data from large gesture vocabularies with multiple dynamic actions to encode the transitions from one grasp intent to another based on natural sequences of human grasp movements. A classifier for identifying the gesture label was then constructed from the dynamic EMG signal, with no supervised annotation of kinematic movements required. Finally, we evaluated the performance of several training strategies using EMG data from different movement phases and explored the information revealed by each phase. All experiments were evaluated in a real-time style, with the performance transitions presented over time.
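The sketch below illustrates one simple way an unsupervised segmentation step of the kind described above could look in practice: a sliding-window RMS envelope is computed from raw EMG, windows are split into quiet and active phases with a tiny two-cluster assignment, and transitions are located without any ground-truth timestamps. The synthetic signal, window sizes, and helper names are assumptions chosen for illustration only, not the authors' implementation.

import numpy as np

def rms_envelope(emg, fs=2000, win_ms=150, step_ms=50):
    """Sliding-window RMS envelope of a single-channel EMG signal."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    starts = range(0, len(emg) - win + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2)) for s in starts])

def two_means_1d(x, iters=50):
    """Tiny 2-means on a 1-D feature; returns a 0/1 label per window."""
    lo, hi = x.min(), x.max()
    for _ in range(iters):
        labels = (np.abs(x - hi) < np.abs(x - lo)).astype(int)
        lo, hi = x[labels == 0].mean(), x[labels == 1].mean()
    return labels

# Synthetic trial: rest -> movement -> rest (illustrative signal only).
fs = 2000
n = 3 * fs
emg = 0.05 * np.random.randn(n)
emg[fs:2 * fs] += 0.5 * np.random.randn(fs)    # burst of muscular activity
env = rms_envelope(emg, fs=fs)
phase = two_means_1d(env)                      # 0 = quiet, 1 = active
transitions = np.flatnonzero(np.diff(phase))   # window indices where the phase flips
print(phase, transitions)

The detected transition windows can then serve as unsupervised phase labels for downstream gesture classification.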

3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 359-364, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891309

ABSTRACT

Electromyography (EMG) signals have been widely used in human-robot interaction for extracting user hand/arm motion instructions. A major challenge of online interaction with robots is reliable EMG recognition from real-time data. However, previous studies mainly focused on using steady-state EMG signals with a small number of grasp patterns to implement classification algorithms, which is insufficient to generate robust control under the dynamic variation of muscular activity encountered in practice. Introducing more EMG variability during training and validation could enable better dynamic-motion detection, but only limited research has focused on such grasp-movement identification, and those assessments of non-static EMG classification all require supervised ground-truth labels of the movement status. In this study, we propose a framework for classifying EMG signals generated from continuous grasp movements with variations in dynamic arm/hand postures, using an unsupervised motion-status segmentation method. We collected data from large gesture vocabularies with multiple dynamic motion phases to encode the transitions from one intent to another based on common sequences of grasp movements. Two classifiers were constructed to identify the motion-phase label and the grasp-type label, where the dynamic motion phases were segmented and labeled in an unsupervised manner. The proposed framework was evaluated in real time, with the accuracy variation over time presented, and was shown to be efficient given the high degree of freedom of the EMG data.


Subject(s)
Gestures, Hand Strength, Electromyography, Humans, Motion, Movement
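To make the two-classifier structure from the abstract above concrete, here is a minimal per-window sketch in which one classifier emits a motion-phase label and a second emits a grasp-type label for each incoming feature window. The random stand-in features, the three-phase and six-gesture vocabularies, and the use of linear discriminant analysis are assumptions chosen for brevity, not the authors' actual feature set or model.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic windowed features: 8-channel RMS per 150 ms window (illustrative).
n_windows, n_channels = 600, 8
X = rng.normal(size=(n_windows, n_channels))
phase_y = rng.integers(0, 3, size=n_windows)   # e.g. 0 = rest, 1 = transition, 2 = steady
grasp_y = rng.integers(0, 6, size=n_windows)   # hypothetical 6-gesture vocabulary

# Two independent per-window classifiers, one per label type.
phase_clf = LinearDiscriminantAnalysis().fit(X[:500], phase_y[:500])
grasp_clf = LinearDiscriminantAnalysis().fit(X[:500], grasp_y[:500])

# Real-time style evaluation: emit both labels window by window.
for x in X[500:505]:
    p = phase_clf.predict(x[None, :])[0]
    g = grasp_clf.predict(x[None, :])[0]
    print(f"phase={p} grasp={g}")

With random stand-in features the predictions are at chance level; the sketch only shows the pipeline shape, where phase labels from the unsupervised segmentation train the first classifier and gesture labels train the second.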
4.
Intell Serv Robot ; 13(1): 179-185, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33312264

ABSTRACT

Upper limb and hand functionality is critical to many activities of daily living, and the amputation of one can lead to significant loss of function for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, providing the robotic hand with perception of its operational context. Such multimodal sensor data may include environment sensors such as vision, as well as human physiology and behavior sensors such as electromyography (EMG) and inertial measurement units (IMUs). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to support prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation that cameras will be built into prosthetic hands and that computer vision methods will need to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from the human eye-view and the hand-view of various objects placed at different orientations were captured at the initial state of each grasping trial, followed by paired video, EMG, and IMU data from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and object on a table, multiple humans were asked to rank, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images and predict the eye-view labels assigned by humans.
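A minimal sketch of the kind of hand-view-to-label training step suggested by the last sentence of the abstract: a small convolutional network maps hand-view image tensors to one of five grasp labels, standing in for the top-ranked human-assigned eye-view label. The network architecture, image size, label count, and the name HandViewNet are illustrative assumptions, not the dataset's or paper's actual specification.

import torch
import torch.nn as nn

class HandViewNet(nn.Module):
    """Tiny stand-in CNN: hand-view RGB crops -> one of n_grasps labels."""
    def __init__(self, n_grasps=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_grasps)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = HandViewNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 hand-view images; labels play the role of the
# top-ranked grasp chosen by humans from the paired eye-view image.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 5, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
print(float(loss))

In an actual experiment the random tensors would be replaced by the dataset's paired hand-view images and human-assigned grasp rankings.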

5.
IEEE Trans Biomed Circuits Syst ; 10(2): 339-51, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25974946

ABSTRACT

New medical procedures promise continuous patient monitoring and drug delivery through implanted sensors and actuators. When over-the-air wireless radio frequency (OTA-RF) links are used for intra-body implant communication, the network incurs heavy energy costs owing to absorption within the human tissue. With this motivation, we explore an alternative form of intra-body communication that relies on weak electrical signals instead of OTA-RF. To demonstrate the feasibility of this new paradigm for enabling communication between sensors and actuators embedded within the tissue, or placed on the surface of the skin, we develop a rigorous analytical model based on galvanic coupling of low-energy signals. The main contributions of this paper are: (i) developing a suite of analytical expressions for modeling the resulting communication channel for weak electrical signals in a three-dimensional multi-layered tissue structure, (ii) validating and verifying the model through extensive finite-element simulations, published measurements in the existing literature, and experiments conducted with porcine tissue, and (iii) designing the communication framework with safety considerations and analyzing the influence of different network and hardware parameters, such as transmission frequency and electrode placement. Our results reveal close agreement between theory, simulation, literature, and experimental findings, pointing to the suitability of the model for quick and accurate channel characterization and parameter estimation for networked and implanted sensors.


Subject(s)
Monitoring, Ambulatory/methods, Telemetry/methods, Wireless Technology/instrumentation, Animals, Computer Simulation, Equipment Design, Humans, Models, Theoretical, Swine, Telemetry/instrumentation
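The paper above develops a full analytical model of galvanic coupling in three-dimensional multi-layered tissue; the toy sketch below only illustrates the general idea of computing a channel gain versus frequency from complex impedances, treating the tissue path as a single lumped parallel-RC block in series with a receiver load. All component values and the circuit topology are invented for illustration and do not come from the paper.

import numpy as np

def parallel(z1, z2):
    """Impedance of two elements in parallel."""
    return z1 * z2 / (z1 + z2)

def channel_gain_db(freqs_hz, r_tissue=5e3, c_tissue=20e-9, r_load=1e3):
    """Toy galvanic-coupling channel: tissue modeled as a parallel RC block
    in series with the receiver load; gain = voltage divider across the load."""
    w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
    z_c = 1.0 / (1j * w * c_tissue)           # capacitive branch of the tissue
    z_tissue = parallel(r_tissue, z_c)        # lumped tissue impedance
    gain = r_load / (z_tissue + r_load)       # received / injected voltage
    return 20 * np.log10(np.abs(gain))

freqs = np.logspace(3, 6, 7)                  # 1 kHz .. 1 MHz
for f, g in zip(freqs, channel_gain_db(freqs)):
    print(f"{f:>10.0f} Hz  {g:6.1f} dB")

Sweeping frequency in this way is how one would read off an operating band from any channel model, whether the lumped toy circuit here or the multi-layered analytical expressions the paper derives.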