Results 1 - 9 of 9
1.
Med Biol Eng Comput ; 59(10): 2127-2137, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34467447

ABSTRACT

A human motion capture system using an RGB-D camera could be a good option for understanding trunk limitations in spondyloarthritis. The aim of this study is to validate a human motion capture system using an RGB-D camera to analyse trunk movement limitations in spondyloarthritis patients. A cross-sectional study was performed in which spondyloarthritis patients were diagnosed by a rheumatologist. The RGB-D camera analysed the kinematics of each participant during seven functional tasks based on rheumatologic assessment. The OpenNI2 library collected the depth data, the NiTE2 middleware detected a virtual skeleton and the MRPT library recorded the trunk positions. The gold standard was registered using an inertial measurement unit. The outcome variables were angular displacement, angular velocity and linear acceleration of the trunk. Criterion validity and reliability were calculated. Seventeen subjects (54.35 (11.75) years) were measured. The Bending task obtained moderate validity (r = 0.55-0.62) and good reliability (ICC = 0.80-0.88), while the validity and reliability of the angular kinematic results in the Chair task were moderate (r = 0.60-0.74, ICC = 0.61-0.72). The kinematic results in the Timed Up and Go test were less consistent. The RGB-D camera proved to be a reliable tool to assess movement limitations in spondyloarthritis depending on the functional task: the Bending task was validated, the Chair task needs further research, and the TUG analysis was not validated. Comparison of both systems, required software for camera analysis, outcomes, and the final validity and reliability results of each test.
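The validity and reliability figures reported above (Pearson's r and the ICC) can be sketched as follows. This is a minimal illustration in NumPy, not the authors' analysis code; the ICC(2,1) two-way random, absolute-agreement form is an assumption, since the abstract does not state which ICC variant was used.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two measurement series (e.g. camera vs. IMU angles)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    ratings: (n_subjects x k_raters) matrix, e.g. one column per measurement system."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between-raters
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))
```

With perfectly agreeing raters both statistics reach 1.0; values of 0.80-0.88, as in the Bending task, indicate good but imperfect agreement.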


Subject(s)
Movement , Postural Balance , Spondylarthritis , Biomechanical Phenomena , Cross-Sectional Studies , Humans , Reproducibility of Results , Spondylarthritis/physiopathology , Time and Motion Studies
2.
Sensors (Basel) ; 21(7)2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33918493

ABSTRACT

This paper addresses appearance-based robot localization in 2D with a sparse, lightweight map of the environment composed of descriptor-pose image pairs. Based on previous research in the field, we assume that image descriptors are samples of a low-dimensional Descriptor Manifold that is locally articulated by the camera pose. We propose a piecewise approximation of the geometry of such Descriptor Manifold through a tessellation of so-called Patches of Smooth Appearance Change (PSACs), which defines our appearance map. Upon this map, the presented robot localization method applies both a Gaussian Process Particle Filter (GPPF) to perform camera tracking and a Place Recognition (PR) technique for relocalization within the most likely PSACs according to the observed descriptor. A specific Gaussian Process (GP) is trained for each PSAC to regress a Gaussian distribution over the descriptor for any particle pose lying within that PSAC. The evaluation of the observed descriptor in this distribution gives us a likelihood, which is used as the weight for the particle. In addition, we model the impact of appearance variations on image descriptors as a white noise distribution within the GP formulation, ensuring adequate operation under lighting and scene appearance changes with respect to the conditions in which the map was constructed. A series of experiments with both real and synthetic images shows that our method outperforms state-of-the-art appearance-based localization methods in terms of robustness and accuracy, with median errors below 0.3 m and 6°.
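The particle-weighting step described above (evaluating the observed descriptor under each particle's GP-predicted Gaussian, with an added white-noise variance to absorb appearance change) can be sketched as follows. The diagonal-covariance assumption and all names here are illustrative, not taken from the paper.

```python
import numpy as np

def particle_weights(mu, var, obs, noise_var=0.01):
    """Normalized particle weights from GP descriptor predictions.

    mu:  (P, D) predicted descriptor means, one row per particle pose
    var: (P, D) GP predictive variances (diagonal covariance assumed)
    obs: (D,)   descriptor observed in the current image
    noise_var: white-noise variance modelling appearance variations
    """
    mu, var = np.asarray(mu, float), np.asarray(var, float)
    obs = np.asarray(obs, float)
    total_var = var + noise_var
    # Gaussian log-likelihood of the observation, per particle
    log_lik = -0.5 * (((obs - mu) ** 2) / total_var
                      + np.log(2 * np.pi * total_var)).sum(axis=1)
    w = np.exp(log_lik - log_lik.max())   # subtract max for numerical stability
    return w / w.sum()
```

Particles whose predicted descriptor matches the observation receive higher weight, which drives the resampling of the particle filter.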

3.
J Biomech ; 116: 110212, 2021 02 12.
Article in English | MEDLINE | ID: mdl-33401131

ABSTRACT

Low back pain (LBP) can lead to motor control disturbances, which can be one of the causes of recurrence of the complaint. It is important to improve our knowledge of movement-related disturbances during assessment in LBP and to classify patients according to severity. The aim of this study is to present differences in kinematic variables obtained with an RGB-D camera in order to classify LBP patients of different severity. A cross-sectional study was carried out. Subjects with non-specific subacute and chronic LBP were screened 6 weeks following an episode. The functional tests were the bending trunk test, the sock test and the sit-to-stand test. Participants performed as many repetitions as possible during 30 s for each functional test. Angular displacement, velocity and acceleration, linear acceleration, time and repetitions were analysed. Participants were divided into two groups of different LBP severity using k-means clustering on the scores of the Roland Morris questionnaire (RMQ). Comparing the severity groups based on RMQ score (high impact = 17.15, low impact = 7.47), the bending trunk test showed significant differences in linear acceleration (p = 0.002-0.01). The differences in total linear acceleration during the sit-to-stand test were also significant (p = 0.004-0.02). The sock test showed no significant differences between groups (p > 0.05). Linear acceleration variables during the sit-to-stand and bending trunk tests differed significantly between the severity groups. The RGB-D camera system and functional tests can detect kinematic differences between different types of LBP according to functionality. Trial registration: ClinicalTrials.gov NCT03293095 "Functional Task Kinematic in Musculoskeletal Pathology", September 26, 2017.
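The severity grouping (k-means with k = 2 on RMQ scores) can be sketched with a simple one-dimensional k-means. This is an illustrative reimplementation under assumed variable names, not the study's statistical code.

```python
import numpy as np

def kmeans_1d(scores, k=2, iters=50, seed=0):
    """Split patients into k severity groups by their (scalar) RMQ score."""
    x = np.asarray(scores, float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)   # initialize from the data
    for _ in range(iters):
        # assign each score to its nearest cluster center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

On well-separated scores such as those reported (cluster means near 7.47 and 17.15), the two groups recovered correspond to the low- and high-impact patients.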


Subject(s)
Low Back Pain , Biomechanical Phenomena , Cross-Sectional Studies , Humans , Low Back Pain/diagnosis , Movement , Range of Motion, Articular
4.
Sensors (Basel) ; 20(3)2020 Jan 27.
Article in English | MEDLINE | ID: mdl-32012763

ABSTRACT

BACKGROUND: The RGB-D camera is an alternative for assessing kinematics in order to obtain objective measurements of functional limitations. The aim of this study is to analyze the validity, reliability, and responsiveness of the motion capture depth camera in sub-acute and chronic low back pain patients. METHODS: Thirty subjects (18-65 years) with non-specific lumbar pain were screened 6 weeks following an episode. RGB-D camera measurements were compared with an inertial measurement unit. Functional tests included climbing stairs, bending, reaching for a sock, lie-to-sit, sit-to-stand, and timed up-and-go. Subjects performed the maximum number of repetitions during 30 s. Validity was analyzed using Spearman's correlation, reliability of repetitions was calculated with the intraclass correlation coefficient (ICC) and the standard error of measurement, and receiver operating characteristic curves were computed to assess responsiveness. RESULTS: The kinematic analysis yielded variable results depending on the test. The time variable showed good validity and reliability in all tests (r = 0.93-1.00, ICC = 0.62-0.93). Regarding kinematics, the best results were obtained in the bending, sock, and sit-to-stand tests (r = 0.53-0.80, ICC = 0.64-0.83, area under the curve (AUC) = 0.55-0.84). CONCLUSION: Functional tasks such as bending, sit-to-stand, reaching, and putting on a sock, assessed with the RGB-D camera, showed acceptable validity, reliability, and responsiveness in the assessment of patients with low back pain (LBP). TRIAL REGISTRATION: ClinicalTrials.gov NCT03293095 "Functional Task Kinematic in Musculoskeletal Pathology", September 26, 2017.
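The responsiveness metric above, the area under the ROC curve, is equivalent to the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (illustrative, not the study's code):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney rank statistic.

    scores_pos: test scores of subjects with the condition
    scores_neg: test scores of subjects without it
    """
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 means the measure cannot separate the groups, while values toward the upper end of the reported 0.55-0.84 range indicate good discrimination.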


Subject(s)
Chronic Pain/diagnostic imaging , Low Back Pain/diagnostic imaging , Range of Motion, Articular/physiology , Video Recording/methods , Adolescent , Adult , Aged , Biomechanical Phenomena , Chronic Pain/diagnosis , Chronic Pain/physiopathology , Disability Evaluation , Female , Humans , Low Back Pain/diagnosis , Low Back Pain/physiopathology , Male , Middle Aged , Movement , Pain Measurement , Posture/physiology , Young Adult
5.
Sensors (Basel) ; 19(22)2019 Nov 13.
Article in English | MEDLINE | ID: mdl-31766197

ABSTRACT

Human-Robot interaction represents a cornerstone of mobile robotics, especially within the field of social robots. In this context, user localization becomes of crucial importance for the interaction. This work investigates the capabilities of wide field-of-view RGB cameras to estimate the 3D position and orientation (i.e., the pose) of a user in the environment. For that, we employ a social robot endowed with a fish-eye camera hosted in a tilting head and develop two complementary approaches: (1) a fast method relying on a single image that estimates the user pose from the detection of their feet and does not require either the robot or the user to remain static during the reconstruction; and (2) a method that takes several views of the scene while the camera is being tilted and does not need the feet to be visible. Due to the particular setup of the tilting camera, special equations for 3D reconstruction have been developed. In both approaches, a CNN-based skeleton detector (OpenPose) is employed to identify humans within the image. A set of experiments with real data validates our two proposed methods, yielding results similar to those of commercial RGB-D cameras while surpassing them in terms of coverage of the scene (wider FoV and longer range) and robustness to light conditions.
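The first approach (user position from detected feet) rests on intersecting the back-projected pixel ray with the ground plane. The sketch below uses a plain pinhole model with a camera at known height tilted down by a known angle; the paper derives dedicated equations for its fish-eye tilting setup, which this simplification does not reproduce, and all parameter names are assumptions.

```python
import math

def user_position_from_feet(uv, K, cam_height, tilt_deg):
    """Ground position (lateral, forward) in metres of a user from the pixel
    (u, v) of their detected feet, for a pinhole camera at height cam_height
    tilted downward by tilt_deg. Returns None if the ray misses the ground."""
    u, v = uv
    fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
    # back-projected viewing ray in the camera frame (x right, y down, z forward)
    xc, yc, zc = (u - cx) / fx, (v - cy) / fy, 1.0
    t = math.radians(tilt_deg)
    down = yc * math.cos(t) + zc * math.sin(t)   # downward component in the world
    if down <= 0:
        return None                              # ray points at or above the horizon
    s = cam_height / down                        # scale at which the ray hits z = 0
    return (s * xc, s * (zc * math.cos(t) - yc * math.sin(t)))
```

As a sanity check, a camera 1 m high tilted 45° down sees the ground point 1 m ahead exactly at the image center.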

6.
Sensors (Basel) ; 19(16)2019 Aug 09.
Article in English | MEDLINE | ID: mdl-31404963

ABSTRACT

Olfaction is a valuable source of information about the environment that has not yet been sufficiently exploited in mobile robotics. Certainly, odor information can complement other sensing modalities, e.g., vision, to accomplish high-level robot activities, such as task planning or execution in human environments. This paper organizes and brings together the developments and experiences on combining olfaction and vision in robotics applications, as the result of our five-year project IRO: Improvement of the sensory and autonomous capability of Robots through Olfaction. In particular, it investigates mechanisms to exploit odor information (usually coming in the form of the type of volatile and its concentration) in problems such as object recognition and scene-activity understanding. A distinctive aspect of this research is the special attention paid to the role of semantics within the robot perception and decision-making processes. The obtained results have improved the robot capabilities in terms of efficiency, autonomy, and usefulness, as reported in our publications.

7.
Sensors (Basel) ; 20(1)2019 Dec 31.
Article in English | MEDLINE | ID: mdl-31906184

ABSTRACT

In domestic robotics, passing through narrow areas becomes critical for safe and effective robot navigation. Due to factors like sensor noise or miscalibration, even if the free space is sufficient for the robot to pass through, it may not see enough clearance to navigate, hence limiting its operational space. One approach to this problem is to insert strategically placed waypoints within the problematic areas of the map; these are considered by the robot planner when generating a trajectory and help it to traverse such areas successfully. This is typically carried out by a human operator, either relying on experience or by trial-and-error. In this paper, we present an automatic procedure to perform this task that: (i) detects problematic areas in the map and (ii) generates a set of auxiliary navigation waypoints from which more suitable trajectories can be generated by the robot planner. Our proposal, fully compatible with the Robot Operating System (ROS), has been successfully applied to robots deployed in different houses within the H2020 MoveCare project. Moreover, we have performed extensive simulations with four state-of-the-art robots operating within real maps. The results reveal significant improvements in the number of successful navigations for the evaluated scenarios, demonstrating its efficacy in realistic situations.
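Step (i), detecting problematic areas, can be sketched by checking the clearance of every free cell in an occupancy grid against the robot radius. This brute-force version is purely illustrative (a real implementation would use a distance transform); the grid encoding is an assumption.

```python
import math

def narrow_cells(grid, robot_radius_cells):
    """Free cells whose clearance (distance to the nearest occupied cell, in
    cell units) is below the robot radius: candidate spots for auxiliary
    waypoints. grid: 2-D list where 1 = occupied, 0 = free."""
    obstacles = [(r, c) for r, row in enumerate(grid)
                 for c, v in enumerate(row) if v]
    flagged = []
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v == 0:
                # clearance = Euclidean distance to the closest obstacle cell
                d = min(math.hypot(r - orow, c - ocol) for orow, ocol in obstacles)
                if d < robot_radius_cells:
                    flagged.append((r, c))
    return flagged
```

Clusters of flagged cells mark doorways and corridors where auxiliary waypoints would be inserted for the planner.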

8.
Sensors (Basel) ; 18(12)2018 Nov 28.
Article in English | MEDLINE | ID: mdl-30487414

ABSTRACT

This paper addresses the localization of a gas emission source within a real-world human environment with a mobile robot. Our approach is based on an efficient and coherent system that fuses different sensor modalities (i.e., vision and chemical sensing) to exploit, for the first time, the semantic relationships among the detected gases and the objects visually recognized in the environment. This novel approach allows the robot to focus the search on a finite set of potential gas source candidates (dynamically updated as the robot operates), while accounting for the non-negligible uncertainties in the object recognition and gas classification tasks involved in the process. This approach is particularly interesting for structured indoor environments containing multiple obstacles and objects, enabling the inference of the relations between objects and between objects and gases. A probabilistic Bayesian framework is proposed to handle all these uncertainties and semantic relations, providing an ordered list of candidates to be the source. This candidate list is updated dynamically upon new sensor measurements to account for objects not previously considered in the search process. The exploitation of such probabilities together with information such as the locations of the objects, or the time needed to validate whether a given candidate is truly releasing gases, is delegated to a path planning algorithm based on Markov decision processes to minimize the search time. The system was tested in an office-like scenario, both with simulated and real experiments, to enable the comparison of different path planning strategies and to validate its efficiency under real-world conditions.
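The dynamically updated, ordered candidate list can be sketched as a Bayesian posterior over potential source objects, where the likelihood encodes the semantic compatibility between the classified gas and each object class. The dict-based structure and all names below are illustrative assumptions, not the paper's implementation.

```python
def update_candidates(prior, likelihood):
    """One Bayesian update of the gas-source candidate list.

    prior:      dict object -> P(object is the source), before the measurement
    likelihood: dict object -> P(observed gas class | object is the source),
                e.g. derived from semantic object-gas relations
    Returns the posterior, normalized and sorted most-likely first.
    """
    post = {o: p * likelihood.get(o, 0.0) for o, p in prior.items()}
    z = sum(post.values())
    if z == 0:
        return dict(prior)            # uninformative observation: keep the prior
    post = {o: p / z for o, p in post.items()}
    return dict(sorted(post.items(), key=lambda kv: -kv[1]))
```

Repeated calls with new gas classifications concentrate probability on the true source, and the ordering can feed a search-time-minimizing path planner such as the MDP-based one described above.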


Subject(s)
Algorithms , Robotics , Artificial Intelligence , Bayes Theorem , Machine Learning , Pattern Recognition, Automated
9.
Sensors (Basel) ; 17(2)2017 Feb 22.
Article in English | MEDLINE | ID: mdl-28241455

ABSTRACT

In clinical practice, patients' balance can be assessed using standard scales. Two of the most validated clinical tests for measuring balance are the Timed Up and Go (TUG) test and the MultiDirectional Reach Test (MDRT). Nowadays, inertial sensors (IS) are employed for kinematic analysis of functional tests in the clinical setting, and have become an alternative to expensive 3D optical motion capture systems. In daily clinical practice, however, IS-based setups are still cumbersome and inconvenient to apply. Current depth cameras have the potential for such application, presenting many advantages, for instance being portable, low-cost and minimally invasive. This paper aims at experimentally validating to what extent this technology can substitute IS for the parameterization and kinematic analysis of the TUG and the MDRT tests. Twenty healthy young adults were recruited as participants to perform five different balance tests while kinematic data from their movements were measured by both a depth camera and an inertial sensor placed on their trunk. The reliability of the camera's measurements is examined through the Intraclass Correlation Coefficient (ICC), whilst the Pearson Correlation Coefficient (r) is computed to evaluate the correlation between the two sensors' measurements, revealing excellent reliability and strong correlations in most cases.


Subject(s)
Movement , Biomechanical Phenomena , Humans , Postural Balance , Reproducibility of Results