1.
J Biomech; 155: 111617, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37220709

ABSTRACT

Inertial sensing and computer vision are promising alternatives to traditional optical motion tracking, but until now these data sources have been explored either in isolation or fused via unconstrained optimization, which may not take full advantage of their complementary strengths. By adding physiological plausibility and dynamical robustness to a proposed solution, biomechanical modeling may enable better fusion than unconstrained optimization. To test this hypothesis, we fused video and inertial sensing data via dynamic optimization with a nine degree-of-freedom model and investigated when this approach outperforms video-only, inertial-sensing-only, and unconstrained-fusion methods. We used both experimental and synthetic data that mimicked different ranges of video and inertial measurement unit (IMU) data noise. Fusion with a dynamically constrained model significantly improved estimation of lower-extremity kinematics over the video-only approach and estimation of joint centers over the IMU-only approach. It consistently outperformed single-modality approaches across different noise profiles. When the quality of video data was high and that of inertial data was low, dynamically constrained fusion improved estimation of joint kinematics and joint centers over unconstrained fusion, while unconstrained fusion was advantageous in the opposite scenario. These findings indicate that complementary modalities and techniques can improve motion tracking by clinically meaningful margins and that data quality and computational complexity must be considered when selecting the most appropriate method for a particular application.
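
To make the fusion idea concrete, here is a minimal sketch of trajectory optimization that blends video joint centers with gyroscope-like rates for a toy 2-DOF planar limb. The study used a nine degree-of-freedom model with full dynamics; here a smoothness penalty stands in for the dynamical constraint, and all segment lengths, weights, and signal names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fusing video joint centers and IMU angular rates for a toy
# 2-DOF planar limb via trajectory optimization. A smoothness penalty stands
# in for the paper's dynamics constraints; weights are illustrative.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.4, 0.4          # assumed segment lengths (m)
T, dt = 50, 0.01           # frames and timestep

def forward_kinematics(q):
    """Knee and ankle centers (hip at origin) for hip/knee angles q[:, 0:2]."""
    knee = np.stack([L1 * np.sin(q[:, 0]), -L1 * np.cos(q[:, 0])], axis=1)
    ankle = knee + np.stack([L2 * np.sin(q[:, 0] + q[:, 1]),
                             -L2 * np.cos(q[:, 0] + q[:, 1])], axis=1)
    return knee, ankle

def cost(x, video_knee, video_ankle, imu_rate,
         w_video=1.0, w_imu=0.1, w_smooth=1e-3):
    q = x.reshape(T, 2)
    knee, ankle = forward_kinematics(q)
    qdot = np.diff(q, axis=0) / dt                   # finite-difference rates
    c_video = np.sum((knee - video_knee) ** 2) + np.sum((ankle - video_ankle) ** 2)
    c_imu = np.sum((qdot - imu_rate) ** 2)           # gyro-like residual
    c_smooth = np.sum(np.diff(qdot, axis=0) ** 2)    # stand-in for dynamics
    return w_video * c_video + w_imu * c_imu + w_smooth * c_smooth

# Synthetic measurements for demonstration only.
t = np.linspace(0, (T - 1) * dt, T)
q_true = np.stack([0.5 * np.sin(2 * np.pi * t), 0.3 * np.cos(2 * np.pi * t)], axis=1)
knee_m, ankle_m = forward_kinematics(q_true)
rate_m = np.diff(q_true, axis=0) / dt
res = minimize(cost, np.zeros(T * 2), args=(knee_m, ankle_m, rate_m),
               method="L-BFGS-B")
print("final cost:", res.fun)
```

Raising w_imu relative to w_video mimics the paper's finding that the best weighting depends on which modality is noisier.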


Subject(s)
Lower Extremity , Vision, Ocular , Motion , Biomechanical Phenomena , Information Sources
2.
IEEE Trans Biomed Eng; 70(11): 3082-3092, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37171931

ABSTRACT

OBJECTIVE: Marker-based motion capture, considered the gold standard in human motion analysis, is expensive and requires trained personnel. Advances in inertial sensing and computer vision offer new opportunities to obtain research-grade assessments in clinics and natural environments. A challenge that discourages clinical adoption, however, is the need for careful sensor-to-body alignment, which slows the data collection process in clinics and is prone to errors when patients take the sensors home. METHODS: We propose deep learning models to estimate human movement with noisy data from videos (VideoNet), inertial sensors (IMUNet), and a combination of the two (FusionNet), obviating the need for careful calibration. The video and inertial sensing data used to train the models were generated synthetically from a marker-based motion capture dataset of a broad range of activities and augmented to account for sensor-misplacement and camera-occlusion errors. The models were tested using real data that included walking, jogging, squatting, sit-to-stand, and other activities. RESULTS: On calibrated data, IMUNet was as accurate as state-of-the-art models, while VideoNet and FusionNet reduced mean ± std root-mean-squared errors by 7.6 ± 5.4° and 5.9 ± 3.3°, respectively. Importantly, all the newly proposed models were less sensitive to noise than existing approaches, reducing errors by up to 14.0 ± 5.3° for sensor-misplacement errors of up to 30.0 ± 13.7° and by up to 7.4 ± 5.5° for joint-center-estimation errors of up to 101.1 ± 11.2 mm, across joints. CONCLUSION: These tools offer clinicians and patients the opportunity to estimate movement with research-grade accuracy, without the need for time-consuming calibration steps or the high costs associated with commercial products such as Theia3D or Xsens, helping democratize the diagnosis, prognosis, and treatment of neuromusculoskeletal conditions.
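
The sensor-misplacement augmentation described in METHODS can be sketched as applying a random mounting rotation to each synthetic IMU trial, so a network such as IMUNet learns calibration invariance. The rotation magnitude, signal shapes, and function names below are assumptions, not the authors' exact recipe.

```python
# Hedged sketch of IMU sensor-misplacement augmentation: perturb each trial's
# synthetic gyroscope and accelerometer signals with one random mounting
# rotation. The 30-degree cap echoes the misplacement range tested above.
import numpy as np

def random_rotation(max_deg=30.0, rng=None):
    """Rotation matrix for a random axis and angle up to max_deg (Rodrigues)."""
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def augment_misplacement(gyro, accel, max_deg=30.0):
    """Apply one fixed (per-trial) mounting offset to (T, 3) gyro/accel arrays."""
    R = random_rotation(max_deg)
    return gyro @ R.T, accel @ R.T

# Example: augment a 100-sample synthetic trial.
gyro = np.random.randn(100, 3)
accel = np.random.randn(100, 3)
gyro_aug, accel_aug = augment_misplacement(gyro, accel)
```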

3.
J Biomech; 129: 110650, 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34644610

ABSTRACT

The field of biomechanics is at a turning point, with marker-based motion capture set to be replaced by portable and inexpensive hardware, rapidly improving markerless tracking algorithms, and open datasets that will turn these new technologies into field-wide team projects. Despite progress, several challenges inhibit both inertial and vision-based motion tracking from reaching the high accuracies that many biomechanics applications require. Their complementary strengths, however, could be harnessed toward better solutions than those offered by either modality alone. The drift from inertial measurement units (IMUs) could be corrected by video data, while occlusions in videos could be corrected by inertial data. To expedite progress in this direction, we have collected the CMU Panoptic Dataset 2.0, which contains 86 subjects captured with 140 VGA cameras, 31 HD cameras, and 15 IMUs, performing on average 6.5 min of activities, including range of motion activities and tasks of daily living. To estimate ground-truth kinematics, we imposed simultaneous consistency with the video and IMU data. Three-dimensional joint centers were first computed by geometrically triangulating proposals from a convolutional neural network applied to each video independently. A statistical meshed model parametrized in terms of body shape and pose was then fit through a top-down optimization approach that enforced consistency with both the video-based joint centers and IMU data. As proof of concept, we used this dataset to benchmark pose estimation from a sparse set of sensors, showing that incorporation of complementary modalities is a promising frontier that can be further strengthened through physics-informed frameworks.
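
The triangulation step described above is the standard linear (DLT) method: each camera's 2D detection contributes two linear constraints on the homogeneous 3D point. The projection matrices and detections below are toy values; the dataset's calibration format and the CNN detector are not reproduced here.

```python
# Hedged sketch of multi-camera joint-center triangulation via linear DLT.
import numpy as np

def triangulate(P_list, uv_list):
    """Linear triangulation from >= 2 views.

    P_list:  list of 3x4 camera projection matrices.
    uv_list: list of (u, v) pixel detections, one per camera.
    Returns the 3D point minimizing the algebraic error.
    """
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])   # u * (p3 . X) = p1 . X
        A.append(v * P[2] - P[1])   # v * (p3 . X) = p2 . X
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null-space direction
    return X[:3] / X[3]             # dehomogenize

# Two toy cameras observing a point at (0.1, 0.2, 3.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 3.0, 1.0])
uv = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], uv))    # ~ [0.1, 0.2, 3.0]
```

With 140 VGA and 31 HD cameras, the same least-squares system simply gains more rows, which is what makes the geometric estimates accurate enough to serve as ground truth.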


Subject(s)
Awards and Prizes , Gait Analysis , Biomechanical Phenomena , Gait , Humans , Motion , Neural Networks, Computer
4.
J Biomech; 116: 110229, 2021 Feb 12.
Article in English | MEDLINE | ID: mdl-33485143

ABSTRACT

The difficulty of estimating joint kinematics remains a critical barrier to widespread use of inertial measurement units in biomechanics. Traditional sensor-fusion filters are largely reliant on magnetometer readings, which may be disturbed in uncontrolled environments. Careful sensor-to-segment alignment and calibration strategies are also necessary, which may burden users and lead to further error in uncontrolled settings. We introduce a new framework that combines deep learning and top-down optimization to accurately predict lower extremity joint angles directly from inertial data, without relying on magnetometer readings. We trained deep neural networks on a large set of synthetic inertial data derived from a clinical marker-based motion-tracking database of hundreds of subjects. We used data augmentation techniques and an automated calibration approach to reduce error due to variability in sensor placement and limb alignment. On left-out subjects, lower extremity kinematics could be predicted with a mean (±STD) root mean squared error of less than 1.27° (±0.38°) in flexion/extension, less than 2.52° (±0.98°) in ad/abduction, and less than 3.34° (±1.02°) in internal/external rotation, across walking and running trials. Errors decreased exponentially with the amount of training data, confirming the need for large datasets when training deep neural networks. While this framework remains to be validated with true inertial measurement unit data, the results presented here are a promising advance toward convenient estimation of gait kinematics in natural environments. Progress in this direction could enable large-scale studies and offer new insight into disease progression, patient recovery, and sports biomechanics.
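
The mocap-to-IMU conversion this pipeline relies on can be sketched by deriving a body-frame gyroscope signal from a segment's orientation time series, using the identity that R^T dR/dt is skew-symmetric with the angular velocity on its off-diagonals. The axis conventions and finite-difference scheme below are assumptions; the authors' exact simulation is not reproduced.

```python
# Hedged sketch: synthesize a gyroscope signal from (T, 3, 3) segment
# rotation matrices sampled from marker-based motion capture.
import numpy as np

def synthetic_gyro(R_seq, dt):
    """Body-frame angular velocity from a rotation-matrix sequence.

    Uses omega_skew = R^T @ dR/dt, reading omega off the skew part.
    Returns a (T-1, 3) array in rad/s.
    """
    omegas = []
    for k in range(len(R_seq) - 1):
        dR = (R_seq[k + 1] - R_seq[k]) / dt
        S = R_seq[k].T @ dR                  # approximately skew-symmetric
        S = 0.5 * (S - S.T)                  # project onto the skew part
        omegas.append([S[2, 1], S[0, 2], S[1, 0]])
    return np.asarray(omegas)

# Example: constant rotation about z at 1 rad/s.
dt, T = 0.01, 100
angles = np.arange(T) * dt
R_seq = np.stack([np.array([[np.cos(a), -np.sin(a), 0.0],
                            [np.sin(a),  np.cos(a), 0.0],
                            [0.0, 0.0, 1.0]]) for a in angles])
print(synthetic_gyro(R_seq, dt)[0])          # ~ [0, 0, 1]
```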


Subject(s)
Deep Learning , Biomechanical Phenomena , Gait , Humans , Range of Motion, Articular , Walking