1.
MethodsX ; 12: 102556, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38283760

ABSTRACT

The integration of alternative data extraction approaches for multimodal data can significantly reduce modeling difficulties in automatic location assessment. We develop a method for assessing the quality of the immediate living environment by incorporating human judgments as ground truth into a neural network, generating new synthetic data and testing the effects in surrogate hedonic models. We expect the data to be less biased if the annotation is performed by multiple independent persons in repeated trials, which should reduce the overall error variance and lead to more robust results. Experimental results show that linking repeated subjective judgements with deep learning can reliably determine quality scores and thus expand the range of information available for quality assessment. The presented method is not computationally intensive, can be performed repeatedly, and can easily be adapted to machine learning approaches in a broader sense or transferred to other use cases. The following aspects are essential for implementing the method:
•A sufficient amount of representative data for human assessment.
•Repeated assessment trials by individuals.
•Confident derivation of the effect of human judgments on property price as validation for further generation of synthetic data.

2.
PLoS One ; 18(8): e0288555, 2023.
Article in English | MEDLINE | ID: mdl-37566568

ABSTRACT

The correct estimation of gait events is essential for the interpretation and calculation of 3D gait analysis (3DGA) data. Depending on the severity of the underlying pathology and the availability of force plates, gait events can be set either manually by trained clinicians or detected by automated event detection algorithms. The downside of manually estimated events is the tedious and time-intensive work, which leads to subjective assessments. For automated event detection algorithms, the drawback is that no standardized method is available. Algorithms show varying robustness and accuracy across different pathologies and are often dependent on setup- or pathology-specific thresholds. In this paper, we aim to close this gap by introducing a novel deep learning-based gait event detection algorithm called IntellEvent, which proves to be accurate and robust across multiple pathologies. For this study, we utilized a retrospective clinical 3DGA dataset of 1211 patients with four different pathologies (malrotation deformities of the lower limbs, club foot, infantile cerebral palsy (ICP), and ICP with only drop-foot characteristics) and 61 healthy controls. We propose a recurrent neural network architecture based on long short-term memory (LSTM), trained with 3D position and velocity information to predict initial contact (IC) and foot off (FO) events. We compared IntellEvent to a state-of-the-art heuristic approach and a machine learning method called DeepEvent. IntellEvent outperforms both methods, detecting IC events on average within 5.4 ms and FO events within 11.3 ms with detection rates of ≥ 99% and ≥ 95%, respectively. Our investigation of generalizability across laboratories suggests that models trained on data from a different laboratory need to be applied with care due to setup variations or differences in capture frequencies.


Subject(s)
Cerebral Palsy , Deep Learning , Humans , Retrospective Studies , Biomechanical Phenomena , Gait , Algorithms
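The force-plate route mentioned in the abstract above is conventionally implemented as a threshold crossing on the vertical ground reaction force. As a minimal sketch of that baseline (not the IntellEvent LSTM itself; the 20 N threshold is a common convention, not a value from the paper):

```python
import numpy as np

def detect_events(vgrf, threshold=20.0, fs=1000):
    """Detect initial contact (IC) and foot off (FO) from a vertical
    GRF signal by threshold crossing (conventional force-plate method).

    vgrf: 1-D array of vertical ground reaction force in newtons.
    threshold: force level (N) separating stance from swing.
    fs: sampling frequency in Hz.
    Returns arrays of IC and FO times in milliseconds.
    """
    loaded = vgrf > threshold
    crossings = np.diff(loaded.astype(int))
    ic_idx = np.where(crossings == 1)[0] + 1   # swing -> stance transition
    fo_idx = np.where(crossings == -1)[0] + 1  # stance -> swing transition
    to_ms = 1000.0 / fs
    return ic_idx * to_ms, fo_idx * to_ms

# Synthetic trial: 200 ms swing, 600 ms stance at 700 N, 200 ms swing.
vgrf = np.concatenate([np.zeros(200), 700 * np.ones(600), np.zeros(200)])
ic, fo = detect_events(vgrf, threshold=20.0, fs=1000)  # IC at 200 ms, FO at 800 ms
```

This is exactly the kind of setup-dependent heuristic the paper seeks to replace: the threshold must be tuned per laboratory, and it fails entirely when no force plate is hit.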
3.
Comput Struct Biotechnol J ; 21: 3414-3423, 2023.
Article in English | MEDLINE | ID: mdl-37416082

ABSTRACT

Human gait is a complex and unique biological process that can offer valuable insights into an individual's health and well-being. In this work, we leverage a machine learning-based approach to model individual gait signatures and identify factors contributing to inter-individual variability in gait patterns. We provide a comprehensive analysis of gait individuality by (1) demonstrating the uniqueness of gait signatures in a large-scale dataset and (2) highlighting the gait characteristics that are most distinctive to each individual. We utilized the data from three publicly available datasets comprising 5368 bilateral ground reaction force recordings during level overground walking from 671 distinct healthy individuals. Our results show that individuals can be identified with a prediction accuracy of 99.3% by using the bilateral signals of all three ground reaction force components, with only 10 out of 1342 recordings in our test data being misclassified. This indicates that the combination of bilateral ground reaction force signals with all three components provides a more comprehensive and accurate representation of an individual's gait signature. The highest accuracy was achieved by (linear) Support Vector Machines (99.3%), followed by Random Forests (98.7%), Convolutional Neural Networks (95.8%), and Decision Trees (82.8%). The proposed approach provides a powerful tool to better understand biological individuality and has potential applications in personalized healthcare, clinical diagnosis, and therapeutic interventions.
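The identification task described above can be illustrated with a toy stand-in: synthetic per-subject GRF "signatures" plus trial noise, classified with a nearest-centroid rule as a simplified linear proxy for the paper's linear SVM (all data and dimensions here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each subject gets a characteristic template (stand-in for the
# concatenated bilateral 3-component GRF features) plus trial noise.
n_subjects, n_trials, n_features = 20, 8, 60
templates = rng.normal(size=(n_subjects, n_features))
X = np.repeat(templates, n_trials, axis=0)
X = X + 0.1 * rng.normal(size=(n_subjects * n_trials, n_features))
y = np.repeat(np.arange(n_subjects), n_trials)

# Nearest-centroid classification: assign each trial to the subject
# whose mean signature is closest in Euclidean distance.
centroids = np.array([X[y == s].mean(axis=0) for s in range(n_subjects)])
dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```

When within-subject variability is small relative to between-subject differences, as the paper's 99.3% result suggests for real GRF data, even this simple linear rule identifies subjects nearly perfectly.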

4.
Mach Learn ; 111(9): 3203-3226, 2022.
Article in English | MEDLINE | ID: mdl-36124289

ABSTRACT

Recently enacted legislation grants individuals certain rights to decide in what fashion their personal data may be used and in particular a "right to be forgotten". This poses a challenge to machine learning: how to proceed when an individual retracts permission to use data which has been part of the training process of a model? From this question emerges the field of machine unlearning, which could be broadly described as the investigation of how to "delete training data from models". Our work complements this direction of research for the specific setting of class-wide deletion requests for classification models (e.g. deep neural networks). As a first step, we propose linear filtration as an intuitive, computationally efficient sanitization method. Our experiments demonstrate benefits in an adversarial setting over naive deletion schemes.
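One way to picture such output sanitization: apply a linear map to the classifier's predicted probabilities that removes the deleted class, then renormalize. This is an illustrative reading of the idea, not the paper's exact filtration matrix:

```python
import numpy as np

def filter_predictions(probs, deleted):
    """Sanitize softmax outputs after a class-wide deletion request.

    A linear filter F zeroes the deleted class's output; the remaining
    probabilities are renormalized to sum to one. The precise
    construction in the paper may differ.
    """
    k = probs.shape[1]
    F = np.eye(k)
    F[deleted, deleted] = 0.0           # drop the deleted class's mass
    filtered = probs @ F.T
    return filtered / filtered.sum(axis=1, keepdims=True)

probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
sanitized = filter_predictions(probs, deleted=2)  # class 2 gets probability 0
```

The appeal of such a scheme, as the abstract notes, is computational: it touches only the output layer, so no retraining is needed, while naive schemes (e.g. simply refusing to predict the class) leak more information to an adversary.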

5.
Brain Sci ; 12(5)2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35624953

ABSTRACT

Interdisciplinary research into the underlying neural processes of music therapy (MT) and the subjective experiences of patients and therapists is largely lacking. The aim of the current study was to assess the feasibility of newly developed procedures (including electroencephalography/electrocardiography hyperscanning, synchronous audio-video monitoring, and qualitative interviews) to study the personal experiences and neuronal dynamics of moments of interest during MT with stroke survivors. The feasibility of our mobile setup and procedures, as well as their clinical implementation in a rehabilitation centre and an acute hospital ward, was tested with four phase-C patients. Protocols and interviews were used for the documentation and analysis of feasibility. Recruiting patients for MT sessions was feasible, although data collection on three consecutive weeks was not always possible due to organisational constraints, especially in the hospital with its acute ward routines. The research procedures were successfully implemented, and according to the interviews, none of the patients reported any burden, tiredness, or increased stress due to the procedures, which lasted approx. 3 h (ranging from 135 min to 209 min) for each patient. Implementing the research procedures in a rehabilitation unit with stroke patients was feasible, and only small adaptations were made for further research.

6.
Sci Data ; 7(1): 143, 2020 05 12.
Article in English | MEDLINE | ID: mdl-32398644

ABSTRACT

The quantification of ground reaction forces (GRF) is a standard tool for clinicians to quantify and analyze human locomotion. Such recordings produce a vast amount of complex data and variables, which makes interpretation challenging. Machine learning approaches seem to be promising tools to support clinicians in identifying and categorizing specific gait patterns. However, the quality of such approaches strongly depends on the amount of annotated data available to train the underlying models. Therefore, we present GAITREC, a comprehensive and completely annotated large-scale dataset containing bilateral GRF walking trials of 2,084 patients with various musculoskeletal impairments and data from 211 healthy controls. The dataset comprises data of patients after joint replacement, fractures, ligament ruptures, and related disorders at the hip, knee, ankle, or calcaneus during their entire stay(s) at a rehabilitation center. The data sum up to a total of 75,732 bilateral walking trials and enable researchers to classify gait patterns at large scale as well as to analyze the entire recovery process of patients.


Subject(s)
Gait Analysis/instrumentation , Musculoskeletal System/physiopathology , Humans
7.
Gait Posture ; 76: 198-203, 2020 02.
Article in English | MEDLINE | ID: mdl-31862670

ABSTRACT

BACKGROUND: Quantitative gait analysis produces a vast amount of data, which can be difficult to analyze. Automated gait classification based on machine learning techniques bears the potential to support clinicians in comprehending these complex data. Even though these techniques are already frequently used in the scientific community, there is no clear consensus on how the data need to be preprocessed and arranged to ensure optimal classification accuracy. RESEARCH QUESTION: Is there an optimal data aggregation and preprocessing workflow to optimize classification accuracy? METHODS: Based on our previous work on automated classification of ground reaction force (GRF) data, a sequential setup was followed: first, several aggregation methods - early fusion and late fusion - were compared; second, based on the best aggregation method identified, the expressiveness of different combinations of signal representations was investigated. The employed dataset included data from 910 subjects, with four gait disorder classes and one healthy control group. The machine learning pipeline comprised principal component analysis (PCA), z-standardization, and a support vector machine (SVM). RESULTS: The late fusion aggregation, i.e., utilizing majority voting on the classifier's predictions, performed best. In addition, the use of derived signal representations (relative changes and signal differences) appears to be advantageous as well. SIGNIFICANCE: Our results indicate that great caution is needed when data preprocessing and aggregation methods are selected, as these can have an impact on classification accuracies. These results can serve as a guideline for future studies in choosing data aggregation and preprocessing techniques.


Subject(s)
Gait Analysis/methods , Gait Disorders, Neurologic/diagnosis , Gait/physiology , Support Vector Machine , Gait Disorders, Neurologic/physiopathology , Humans , Principal Component Analysis , Young Adult
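The late-fusion scheme found best in the study above amounts to letting each signal representation vote and taking the majority. A minimal sketch (the class labels and signal roles below are illustrative):

```python
import numpy as np

def late_fusion_majority(per_signal_preds):
    """Late fusion by majority voting: each GRF signal representation
    yields its own class prediction per trial; the fused label is the
    most frequent vote (ties resolve to the lowest label).

    per_signal_preds: (n_signals, n_trials) array of integer labels.
    """
    n_signals, n_trials = per_signal_preds.shape
    fused = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        votes = np.bincount(per_signal_preds[:, t])
        fused[t] = votes.argmax()
    return fused

# Three per-signal classifiers (e.g. vertical, anterior-posterior, and
# medio-lateral GRF components) voting on four trials:
preds = np.array([[0, 1, 2, 4],
                  [0, 1, 3, 4],
                  [1, 1, 3, 0]])
fused = late_fusion_majority(preds)  # -> [0, 1, 3, 4]
```

Early fusion, by contrast, would concatenate the signals into one feature vector before a single classifier is trained; the study's result favors deferring the combination to the decision level.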
8.
IEEE Trans Vis Comput Graph ; 25(3): 1528-1542, 2019 03.
Article in English | MEDLINE | ID: mdl-29993807

ABSTRACT

In 2014, more than 10 million people in the US were affected by an ambulatory disability. Thus, gait rehabilitation is a crucial part of health care systems. The quantification of human locomotion enables clinicians to describe and analyze a patient's gait performance in detail and allows them to base clinical decisions on objective data. These assessments generate a vast amount of complex data which need to be interpreted in a short time period. We conducted a design study in cooperation with gait analysis experts to develop a novel Knowledge-Assisted Visual Analytics solution for clinical Gait analysis (KAVAGait). KAVAGait allows the clinician to store and inspect complex data derived during clinical gait analysis. The system incorporates innovative and interactive visual interface concepts, which were developed based on the needs of clinicians. Additionally, an explicit knowledge store (EKS) allows externalization and storage of implicit knowledge from clinicians. It makes this information available for others, supporting the process of data inspection and clinical decision making. We validated our system by conducting expert reviews, a user study, and a case study. Results suggest that KAVAGait is able to support a clinician during clinical practice by visualizing complex gait data and providing knowledge of other clinicians.


Subject(s)
Gait Analysis/methods , Adult , Algorithms , Female , Gait Analysis/instrumentation , Humans , Image Processing, Computer-Assisted , Machine Learning , Male , Middle Aged , Signal Processing, Computer-Assisted , Walking/physiology , Young Adult
9.
IEEE J Biomed Health Inform ; 22(5): 1653-1661, 2018 09.
Article in English | MEDLINE | ID: mdl-29990052

ABSTRACT

This paper proposes a comprehensive investigation of the automatic classification of functional gait disorders (GDs) based solely on ground reaction force (GRF) measurements. The aim of this study is twofold: first, to investigate the suitability of the state-of-the-art GRF parameterization techniques (representations) for the discrimination of functional GDs; and second, to provide a first performance baseline for the automated classification of functional GDs for a large-scale dataset. The utilized database comprises GRF measurements from 279 patients with GDs and data from 161 healthy controls (N). Patients were manually classified into four classes with different functional impairments associated with the "hip", "knee", "ankle", and "calcaneus". Different parameterizations are investigated: GRF parameters, global principal component analysis (PCA) based representations, and a combined representation applying PCA on GRF parameters. The discriminative power of each parameterization for different classes is investigated by linear discriminant analysis. Based on this analysis, two classification experiments are pursued: distinction between healthy and impaired gait (N versus GD) and multiclass classification between healthy gait and all four GD classes. Experiments show promising results and reveal among others that several factors, such as imbalanced class cardinalities and varying numbers of measurement sessions per patient, have a strong impact on the classification accuracy and therefore need to be taken into account. The results represent a promising first step toward the automated classification of GDs and a first performance baseline for future developments in this direction.


Subject(s)
Gait Disorders, Neurologic/diagnosis , Gait Disorders, Neurologic/physiopathology , Signal Processing, Computer-Assisted , Adult , Case-Control Studies , Databases, Factual , Foot/physiology , Gait/physiology , Humans , Machine Learning , Middle Aged , Principal Component Analysis , Young Adult
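The global PCA-based representation investigated above can be sketched in a few lines via SVD. This is a generic minimal PCA, not the paper's exact pipeline (its normalization and parameter handling may differ), with random data standing in for GRF measurements:

```python
import numpy as np

def pca_representation(X, n_components):
    """Project measurement vectors onto their leading principal
    components.

    X: (n_samples, n_features) matrix, one row per GRF measurement.
    Returns the (n_samples, n_components) score matrix.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T         # scores on the leading PCs

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 100))              # 50 stand-in GRF curves
scores = pca_representation(X, n_components=5)
```

The resulting low-dimensional scores can then feed a discriminant analysis or classifier, as in the paper's N-versus-GD and multiclass experiments.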
10.
IEEE Trans Vis Comput Graph ; 24(1): 298-308, 2018 01.
Article in English | MEDLINE | ID: mdl-28866560

ABSTRACT

Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (and in particular active learning) follows a rather model-centered approach and visual analytics employs rather user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct an experiment with three parts to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time, and quantify the impact on efficiency. We systematically compare the performance of visual interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, given the condition that dimension reduction separates well the class distributions. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model turns out to improve the performance of visual-interactive labeling.

11.
BMC Res Notes ; 8: 409, 2015 Sep 04.
Article in English | MEDLINE | ID: mdl-26338528

ABSTRACT

BACKGROUND: The decline of habitat for elephants due to expanding human activity is a serious conservation problem. This has continuously escalated the human-elephant conflict in Africa and Asia. Elephants make extensive use of powerful infrasonic calls (rumbles) that travel distances of up to several kilometers. This makes elephants well-suited for acoustic monitoring because it enables detecting elephants even if they are out of sight. In sight, their distinct visual appearance makes them a good candidate for visual monitoring. We provide an integrated overview of our interdisciplinary project that established the scientific fundamentals for a future early warning and monitoring system for humans who regularly experience serious conflict with elephants. We first draw the big picture of an early warning and monitoring system, then review the developed solutions for automatic acoustic and visual detection, discuss specific challenges and present open future work necessary to build a robust and reliable early warning and monitoring system that is able to operate in situ. FINDINGS: We present a method for the automated detection of elephant rumbles that is robust to the diverse noise sources present in situ. We evaluated the method on an extensive set of audio data recorded under natural field conditions. Results show that the proposed method outperforms existing approaches and accurately detects elephant rumbles. Our visual detection method shows that tracking elephants in wildlife videos (of different sizes and postures) is feasible and particularly robust at near distances. DISCUSSION: From our project results we draw a number of conclusions that are discussed and summarized. We clearly identified the most critical challenges and necessary improvements of the proposed detection methods and conclude that our findings have the potential to form the basis for a future automated early warning system for elephants. We discuss challenges that need to be solved and summarize open topics in the context of a future early warning and monitoring system. We conclude that a long-term evaluation of the presented methods in situ using real-time prototypes is the most important next step to transfer the developed methods into practical implementation.


Subject(s)
Ecosystem , Elephants/physiology , Locomotion/physiology , Vocalization, Animal/physiology , Africa , Animal Migration/physiology , Animals , Asia , Conservation of Natural Resources/methods , Humans , Image Processing, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Sound Spectrography/methods , Videotape Recording/methods
12.
Bioacoustics ; 24(1): 13-29, 2015.
Article in English | MEDLINE | ID: mdl-25983398

ABSTRACT

The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings.

13.
Bioacoustics ; 23(3): 231-246, 2014 Mar 05.
Article in English | MEDLINE | ID: mdl-25821348

ABSTRACT

Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70 % correct classification to four age groups (infants, calves, juveniles, adults) and 95 % correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable classification results were obtained by fully automated classification of rumbles by high-dimensional features that represent the entire spectral envelope, such as MFCC (75 % correct classification) and GFCC (74 % correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially automatically estimate the demography of an acoustically monitored elephant group or population.

14.
EURASIP J Image Video Process ; 2013: 46, 2013 Aug 01.
Article in English | MEDLINE | ID: mdl-25902006

ABSTRACT

Biologists often have to investigate large amounts of video in behavioral studies of animals. These videos are usually not sufficiently indexed, which makes finding objects of interest a time-consuming task. We propose a fully automated method for the detection and tracking of elephants in wildlife video collected by biologists in the field. The method dynamically learns a color model of elephants from a few training images. Based on the color model, we localize elephants in video sequences with different backgrounds and lighting conditions. We exploit temporal clues from the video to improve the robustness of the approach and to obtain spatially and temporally consistent detections. The proposed method detects elephants (and groups of elephants) of different sizes and poses performing different activities. The method is robust to occlusions (e.g., by vegetation) and correctly handles camera motion and different lighting conditions. Experiments show that both near- and far-distant elephants can be detected and tracked reliably. The proposed method gives biologists efficient and direct access to their video collections, which facilitates further behavioral and ecological studies. The method does not impose hard constraints on the elephant species itself and is thus easily adaptable to other animal species.

15.
PLoS One ; 7(11): e48907, 2012.
Article in English | MEDLINE | ID: mdl-23155427

ABSTRACT

Recent comparative data reveal that formant frequencies are cues to body size in animals, due to a close relationship between formant frequency spacing, vocal tract length and overall body size. Accordingly, intriguing morphological adaptations to elongate the vocal tract in order to lower formants occur in several species, with the size exaggeration hypothesis being proposed to justify most of these observations. While the elephant trunk is strongly implicated to account for the low formants of elephant rumbles, it is unknown whether elephants emit these vocalizations exclusively through the trunk, or whether the mouth is also involved in rumble production. In this study we used a sound visualization method (an acoustic camera) to record rumbles of five captive African elephants during spatial separation and subsequent bonding situations. Our results showed that the female elephants in our analysis produced two distinct types of rumble vocalizations based on vocal path differences: a nasally- and an orally-emitted rumble. Interestingly, nasal rumbles predominated during contact calling, whereas oral rumbles were mainly produced in bonding situations. In addition, nasal and oral rumbles varied considerably in their acoustic structure. In particular, the values of the first two formants reflected the estimated lengths of the vocal paths, corresponding to a vocal tract length of around 2 meters for nasal, and around 0.7 meters for oral rumbles. These results suggest that African elephants may be switching vocal paths to actively vary vocal tract length (with considerable variation in formants) according to context, and call for further research investigating the function of formant modulation in elephant vocalizations. Furthermore, by confirming the use of the elephant trunk in long distance rumble production, our findings provide an explanation for the extremely low formants in these calls, and may also indicate that formant lowering functions to increase call propagation distances in this species.


Subject(s)
Elephants/physiology , Vocalization, Animal/physiology , Acoustics , Animals , Female , Male , Social Behavior , Sound Spectrography
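The link between formant spacing and vocal tract length used in the study above follows from the uniform-tube model: for a tube closed at the glottis and open at the far end, formants are spaced by dF = c / (2L), so L = c / (2 dF). A minimal sketch (the speed-of-sound value is an assumption, not a figure from the paper):

```python
def vocal_tract_length(formant_spacing_hz, c=350.0):
    """Estimate vocal tract length (m) from formant frequency spacing
    (Hz) under the uniform-tube model, L = c / (2 * dF).

    c: assumed speed of sound in warm humid air, about 350 m/s.
    """
    return c / (2.0 * formant_spacing_hz)

# A spacing of 87.5 Hz corresponds to the ~2 m nasal (trunk) path, and
# 250 Hz to the ~0.7 m oral path reported in the abstract.
nasal = vocal_tract_length(87.5)   # -> 2.0 m
oral = vocal_tract_length(250.0)   # -> 0.7 m
```

The longer the tract, the narrower the formant spacing, which is why measuring the first two formants lets the authors distinguish trunk-emitted from mouth-emitted rumbles.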