Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-35442889

ABSTRACT

Predicting the user's intended locomotion mode is critical for wearable robot control to assist the user's seamless transitions when walking on changing terrains. Although machine vision has recently proven to be a promising tool for identifying upcoming terrains in the travel path, existing approaches are limited to environment perception rather than the human intent recognition that is essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses human gaze (representing user intent) and machine vision (capturing environmental information) to accurately predict the user's locomotion mode. The system processes multimodal visual information and recognizes the user's locomotion intent in complex scenes where multiple terrains are present. Additionally, based on the dynamic time warping algorithm, a fusion strategy was developed to align temporal predictions from the individual modalities while producing flexible decisions on the timing of locomotion mode transitions for wearable robot control. System performance was validated using experimental data collected from five participants, showing high intent recognition accuracy (over 96% on average) and reliable decision-making on locomotion transitions with adjustable lead time. These promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition in lower limb wearable robots.
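
The fusion step hinges on aligning two asynchronous per-frame prediction streams. Below is a minimal dynamic-time-warping sketch in Python, assuming mode labels from a gaze channel and a vision channel; the agreement-based transition rule at the end is illustrative, not the authors' published fusion strategy.

import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between two 1-D label/score sequences.
    Returns the optimal alignment path as (i, j) index pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    i, j, path = n, m, []
    while i > 0 and j > 0:  # backtrack along the cheapest predecessors
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Hypothetical per-frame mode predictions (0 = level ground, 1 = stairs):
gaze = np.array([0, 0, 0, 1, 1, 1, 1])     # gaze shifts to the stairs early
vision = np.array([0, 0, 0, 0, 0, 1, 1])   # terrain enters the camera view later
aligned = dtw_path(gaze, vision)
# A transition can be declared where the aligned streams agree; the
# alignment offset between the two indices gives an adjustable lead time.
agree = [(i, j) for i, j in aligned if gaze[i] == vision[j] == 1]
print(agree[0] if agree else "no agreed transition")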


Subject(s)
Locomotion , Walking , Algorithms , Humans , Intention , Lower Extremity
2.
IEEE Trans Cybern ; 52(3): 1750-1762, 2022 Mar.
Article in English | MEDLINE | ID: mdl-32520717

ABSTRACT

Computer vision has shown promising potential in wearable robotics applications (e.g., predicting human grasping targets and understanding context). However, in practice, the performance of computer vision algorithms is challenged by insufficient or biased training, observation noise, cluttered backgrounds, etc. By leveraging Bayesian deep learning (BDL), we have developed a novel, reliable vision-based framework to assist upper limb prosthesis grasping during arm reaching. This framework can measure different types of uncertainty from the model and the data for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as predicting the grasping target during arm reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several challenging scenarios common in practice. In addition, we integrated our approach into a shared control framework for a prosthetic arm and demonstrated its potential for assisting human participants with fluent target reaching and grasping tasks.
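
The abstract does not say which BDL approximation the authors used; Monte Carlo dropout is one standard way to obtain the model-versus-data uncertainty split it describes, so the sketch below should be read as a generic stand-in, with the calibration network omitted. The class count and feature dimension are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspClassifier(nn.Module):
    """Toy classifier over image features; dropout stays active at test
    time so repeated forward passes sample from an approximate posterior
    (Monte Carlo dropout)."""
    def __init__(self, in_dim=128, n_classes=5, p=0.3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.fc2 = nn.Linear(64, n_classes)
        self.p = p

    def forward(self, x):
        h = F.relu(self.fc1(x))
        h = F.dropout(h, p=self.p, training=True)  # always stochastic
        return self.fc2(h)

@torch.no_grad()
def mc_predict(model, x, n_samples=30):
    probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    mean = probs.mean(0)                           # probability to be calibrated
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(-1)  # total uncertainty
    model_unc = probs.var(0).sum(-1)               # disagreement across passes
    return mean, entropy, model_unc

model = GraspClassifier()
mean, entropy, model_unc = mc_predict(model, torch.randn(1, 128))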


Subject(s)
Artificial Limbs , Robotics , Arm , Bayes Theorem , Hand Strength , Humans , Upper Extremity
3.
Accid Anal Prev ; 137: 105432, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32004860

ABSTRACT

Driving distraction is a leading cause of fatal car accidents; almost nine people are killed in the US each day because of distracting activities. Reducing the number of distraction-affected traffic accidents therefore remains an imperative issue. This manuscript proposes a novel algorithm for detecting drivers' manual distraction. The detection algorithm consists of two modules. The first module predicts the bounding boxes of the driver's right hand and right ear from RGB images. The second module takes the bounding boxes as input and predicts the type of distraction. 106,677 frames extracted from videos of twenty participants in a driving simulator were used for training (50%) and testing (50%). For distraction classification, the results indicated that the proposed framework could detect normal driving, touchscreen use, and talking on a phone with F1-scores of 0.84, 0.69, and 0.82, respectively. For overall distraction detection, it achieved an F1-score of 0.74. The whole framework ran at 28 frames per second. The algorithm achieved overall accuracy comparable to similar research and was more efficient than other methods. A demo video of the algorithm can be found at https://youtu.be/NKclK1bHRd4.
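
A rough sketch of how the second module could consume the first module's output, assuming the stage-one detector (not shown) yields pixel boxes for the right hand and right ear. The normalized-box features and the linear softmax classifier are placeholders; the subject terms suggest the paper itself uses a neural network.

import numpy as np

def box_features(hand_box, ear_box, img_w, img_h):
    """Stage-two input: right-hand and right-ear boxes from stage one,
    each (x_min, y_min, x_max, y_max) in pixels, normalized to [0, 1]."""
    scale = np.array([img_w, img_h, img_w, img_h], dtype=float)
    return np.concatenate([np.asarray(hand_box) / scale,
                           np.asarray(ear_box) / scale])

def classify(features, W, b):
    """Linear softmax over the three reported classes:
    0 = normal driving, 1 = touchscreen use, 2 = talking on a phone."""
    logits = W @ features + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical trained parameters (8 features -> 3 classes):
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), np.zeros(3)
p = classify(box_features((300, 400, 420, 520), (500, 180, 560, 260), 640, 480), W, b)
print(p.argmax(), p)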


Subject(s)
Accidents, Traffic/prevention & control , Distracted Driving , Pattern Recognition, Automated/methods , Adult , Algorithms , Data Collection , Ear/physiology , Female , Hand/physiology , Humans , Male , Neural Networks, Computer
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3163-3166, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946559

ABSTRACT

This paper investigates the visual strategy of transtibial amputees as they approach transitions between level ground and stairs and compares it with that of able-bodied individuals. To this end, we conducted a pilot study in which two transtibial amputee subjects and two able-bodied subjects transitioned from level ground to stairs and vice versa while wearing eye tracking glasses that recorded gaze fixations. To investigate how vision serves both populations in preparing for locomotion on new terrain, gaze fixation behavior before reaching the new terrain was analyzed and compared between the two populations across all transition cases in the study. Our results showed that, unlike the able-bodied population, amputees directed most of their fixations at the transition region before reaching the new terrain. Furthermore, amputees showed a greater need for visual information in transition regions before navigating onto stairs than before navigating onto level ground. The insights into amputees' visual behavior gained from this study may guide the future development of technologies for intention prediction and locomotion recognition for amputees.
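
One plausible way to quantify the region-directed fixation behavior the study reports: the share of fixations that land inside a labeled transition region of the scene-camera image. The region bounds and coordinates below are illustrative, not taken from the paper.

import numpy as np

def fixation_share(fixations, region):
    """Fraction of gaze fixations inside a transition region.
    fixations: (N, 2) array of (x, y); region: (x_min, y_min, x_max, y_max)."""
    f = np.asarray(fixations, dtype=float)
    inside = ((f[:, 0] >= region[0]) & (f[:, 0] <= region[2]) &
              (f[:, 1] >= region[1]) & (f[:, 1] <= region[3]))
    return inside.mean()

# Hypothetical fixations in scene-camera pixel coordinates:
fix = [(310, 420), (330, 415), (500, 100), (320, 430)]
print(fixation_share(fix, (280, 380, 400, 480)))  # 0.75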


Subject(s)
Amputees , Artificial Limbs , Eye Movement Measurements/instrumentation , Fixation, Ocular , Gait , Biomechanical Phenomena , Humans , Pilot Projects
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 4623-4626, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441382

ABSTRACT

Physiological responses are essential for health monitoring. However, modeling the complex interactions between them across activity and environmental factors can be challenging. In this paper, we introduce a framework that identifies the state of an individual based on their activity, trains predictive models for their physiological response within these states, and jointly optimizes the states and the models. We apply this framework to respiratory rate prediction based on heart rate and physical activity, and test it on a dataset of 9 individuals performing various activities of daily life.
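
A minimal sketch of the state-then-model structure, consistent with the linear models named in the subject terms: cluster activity features into states, then fit one respiratory rate predictor per state. The paper optimizes states and models jointly; the single clustering pass below is a simplification (an alternating refit loop would approximate the joint scheme).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_states_and_models(activity, heart_rate, resp_rate, n_states=3):
    """activity: (N, d) activity features; heart_rate, resp_rate: (N,)."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(activity)
    models = {}
    for s in range(n_states):
        mask = km.labels_ == s
        X = np.column_stack([heart_rate[mask], activity[mask]])
        models[s] = LinearRegression().fit(X, resp_rate[mask])
    return km, models

def predict_resp_rate(km, models, activity, heart_rate):
    states = km.predict(activity)                 # assign each sample a state
    X = np.column_stack([heart_rate, activity])
    return np.array([models[s].predict(x[None])[0] for s, x in zip(states, X)])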


Subject(s)
Activities of Daily Living , Exercise , Heart Rate , Respiratory Rate , Humans , Linear Models
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 1817-1820, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440748

ABSTRACT

Lower-limb robotic prostheses can benefit from context awareness to provide comfort and safety to the amputee. In this work, we developed a terrain identification and surface inclination estimation system for a prosthetic leg using visual and inertial sensors. We built a dataset from which high-sharpness images are selected using the IMU signal. The images are used for terrain identification, and the surface inclination is computed simultaneously. With such information, the control of a robotic prosthetic leg can be adapted to changes in its surroundings.
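
The abstract does not spell out the sharpness-selection rule; one common heuristic, sketched below under that assumption, is to keep frames captured while the gyroscope reports low angular speed (less motion blur expected), with a quasi-static accelerometer pitch as a crude incline estimate. Thresholds and axis conventions are placeholders.

import numpy as np

def select_sharp_frames(gyro, gyro_t, frame_t, max_rate=0.3):
    """gyro: (N, 3) angular rates in rad/s; gyro_t, frame_t: timestamps.
    Returns indices of frames whose nearest IMU sample has low angular speed."""
    speed = np.linalg.norm(gyro, axis=1)
    nearest = np.clip(np.searchsorted(gyro_t, frame_t), 0, len(speed) - 1)
    return np.flatnonzero(speed[nearest] < max_rate)

def incline_deg(accel):
    """Pitch of the gravity vector from a (3,) accelerometer sample,
    assuming a quasi-static sensor with x pointing forward."""
    ax, ay, az = accel
    return np.degrees(np.arctan2(ax, np.hypot(ay, az)))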


Subject(s)
Amputees , Artificial Limbs , Humans , Locomotion , Lower Extremity
7.
IEEE Trans Cybern ; 47(11): 3706-3718, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28113386

ABSTRACT

Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although robustness to local corruptions has improved, prevailing trackers are still sensitive to large-scale corruptions such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique that relies on a subspace-learning-based appearance model. Our contributions are twofold. First, mask templates produced by frame differencing are introduced into our template dictionary. Since the mask templates contain abundant structural information about corruptions, the model can encode corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by adopting system dynamics, which account for the moving tendency of the object. Second, we provide a theoretical guarantee that, by adopting the modulated template dictionary system, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficiently as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially under challenges such as pose variation, occlusion, and illumination changes.
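
Sparse-representation trackers of this family code a candidate patch over a template dictionary with an L1 penalty, which the accelerated proximal gradient (FISTA) method solves efficiently. The sketch below is that generic solver only; the paper's modulated mask-template dictionary and system-dynamics term are not reproduced here.

import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def apg_sparse_code(D, y, lam=0.05, n_iter=200):
    """FISTA for the generic sparse coding problem
    min_c 0.5*||D c - y||^2 + lam*||c||_1,
    with D a template dictionary and y the candidate patch."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1]); z = c.copy(); t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        c_next = soft_threshold(z - grad / L, lam / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = c_next + ((t - 1.0) / t_next) * (c_next - c)
        c, t = c_next, t_next
    return c

# Toy dictionary of vectorized templates (columns) and a candidate patch:
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 12)); D /= np.linalg.norm(D, axis=0)
y = D[:, 3] + 0.01 * rng.normal(size=64)
print(np.round(apg_sparse_code(D, y), 2))  # coefficient mass concentrates on atom 3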
