Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-38083196

ABSTRACT

Wearable sensors have become increasingly popular in recent years, with technological advances leading to cheaper, more widely available, and smaller devices. As a result, there has been a growing interest in applying machine learning techniques for Human Activity Recognition (HAR) in healthcare. These techniques can improve patient care and treatment by accurately detecting and analyzing various activities and behaviors. However, current approaches often require large amounts of labeled data, which can be difficult and time-consuming to obtain. In this study, we propose a new approach that uses synthetic sensor data generated by 3D engines and Generative Adversarial Networks to overcome this obstacle. We evaluate the synthetic data using several methods and compare it to real-world data, including classification results with baseline models. Our results show that synthetic data can improve the performance of deep neural networks, raising the F1-score for less complex activities on a known dataset by 8.4% to 73% over state-of-the-art results. However, as we show on a self-recorded nursing activity dataset of longer duration, this effect diminishes with more complex activities. This research highlights the potential of synthetic sensor data generated from multiple sources to overcome data scarcity in HAR.
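The mixing step described above can be sketched as follows. The jitter-based `synth_window` stand-in and the 1:1 mixing ratio are illustrative assumptions for this sketch, not the paper's GAN or 3D-engine generators:

```python
import random

def synth_window(window, noise=0.05):
    # Stand-in for a learned generator (hypothetical): perturb a real
    # sensor window to mimic one synthetic sample.
    return [v + random.gauss(0.0, noise) for v in window]

def augment(samples, ratio=1.0):
    # samples: list of (sensor_window, activity_label) pairs.
    # Append ratio * len(samples) synthetic windows, keeping each
    # synthetic window paired with the label of its source window.
    extra = []
    for _ in range(int(len(samples) * ratio)):
        window, label = random.choice(samples)
        extra.append((synth_window(window), label))
    return samples + extra
```

A classifier would then be trained on the mixed set; per the abstract, the gain is largest for less complex activities.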


Subjects
Machine Learning; Neural Networks, Computer; Humans; Human Activities; Recognition, Psychology
2.
Sensors (Basel); 23(23), 2023 Dec 02.
Article in English | MEDLINE | ID: mdl-38067946

ABSTRACT

Sensor-based human activity recognition is becoming ever more prevalent. The increasing importance of distinguishing human movements, particularly in healthcare, coincides with the advent of increasingly compact sensors. A complex sequence of individual steps currently characterizes the activity recognition pipeline. It involves separate data collection, preparation, and processing steps, resulting in a heterogeneous and fragmented process. To address these challenges, we present a comprehensive framework, HARE, which seamlessly integrates all necessary steps. HARE offers synchronized data collection and labeling, integrated pose estimation for data anonymization, a multimodal classification approach, and a novel method for determining optimal sensor placement to enhance classification results. Additionally, our framework incorporates real-time activity recognition with on-device model adaptation capabilities. To validate the effectiveness of our framework, we conducted extensive evaluations using diverse datasets, including our own collected dataset focusing on nursing activities. Our results show that HARE's multimodal and on-device trained model outperforms conventional single-modal and offline variants. Furthermore, our vision-based approach for optimal sensor placement yields comparable results to the trained model. Our work advances the field of sensor-based human activity recognition by introducing a comprehensive framework that streamlines data collection and classification while offering a novel method for determining optimal sensor placement.


Subjects
Hares; Humans; Animals; Workflow; Human Activities; Movement
3.
Sci Data; 10(1): 727, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37863902

ABSTRACT

Accurate and comprehensive nursing documentation is essential to ensure quality patient care. To streamline this process, we present SONAR, a publicly available dataset of nursing activities recorded using inertial sensors in a nursing home. The dataset includes 14 sensor streams, such as acceleration and angular velocity, and 23 activities recorded by 14 caregivers using five sensors for 61.7 hours. The caregivers wore the sensors as they performed their daily tasks, allowing for continuous monitoring of their activities. We additionally provide machine learning models that recognize the nursing activities given the sensor data. In particular, we present benchmarks for three deep learning model architectures and evaluate their performance using different metrics and sensor locations. Our dataset, which can be used for research on sensor-based human activity recognition in real-world settings, has the potential to improve nursing care by providing valuable insights that can identify areas for improvement, facilitate accurate documentation, and tailor care to specific patient conditions.
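Continuous sensor streams like SONAR's are typically segmented into fixed-size, overlapping windows before classification; a minimal sketch (the window size and step below are illustrative, not values from the paper):

```python
def sliding_windows(stream, size, step):
    # Split one sensor stream (e.g. a single acceleration axis) into
    # fixed-size windows; windows overlap whenever step < size.
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, step)]
```

At a 50 Hz sampling rate, for example, a 2-second window with 50% overlap would be `sliding_windows(stream, size=100, step=50)`.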


Subjects
Machine Learning; Nursing Care; Humans; Nursing
4.
Sci Rep; 13(1): 16043, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37749176

ABSTRACT

This study aimed to evaluate the use of novel optomyography (OMG) based smart glasses, OCOsense, for the monitoring and recognition of facial expressions. Experiments were conducted on data gathered from 27 young adult participants, who performed facial expressions varying in intensity, duration, and head movement. The facial expressions included smiling, frowning, raising the eyebrows, and squeezing the eyes. The statistical analysis demonstrated that: (i) OCO sensors based on the principles of OMG can capture distinct variations in cheek and brow movements with a high degree of accuracy and specificity; (ii) Head movement does not have a significant impact on how well these facial expressions are detected. The collected data were also used to train a machine learning model to recognise the four facial expressions and when the face enters a neutral state. We evaluated this model in conditions intended to simulate real-world use, including variations in expression intensity, head movement and glasses position relative to the face. The model demonstrated an overall accuracy of 93% (0.90 F1-score), evaluated using a leave-one-subject-out cross-validation technique.
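Leave-one-subject-out cross-validation, as used above, holds out all data from one participant per fold so the model is always tested on an unseen person; a minimal sketch:

```python
def loso_splits(samples):
    # samples: list of (features, label, subject_id) triples.
    # Yields one (held_out_subject, train, test) split per subject,
    # with all of that subject's data in the test partition.
    subjects = sorted({s for _, _, s in samples})
    for held_out in subjects:
        train = [(x, y) for x, y, s in samples if s != held_out]
        test = [(x, y) for x, y, s in samples if s == held_out]
        yield held_out, train, test
```

Per-fold scores are then averaged to obtain the person-independent result.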


Subjects
Facial Recognition; Smart Glasses; Young Adult; Humans; Facial Expression; Smiling; Movement; Emotions
5.
Front Psychiatry; 14: 1232433, 2023.
Article in English | MEDLINE | ID: mdl-37614653

ABSTRACT

Background: Continuous assessment of affective behaviors could improve the diagnosis, assessment and monitoring of chronic mental health and neurological conditions such as depression. However, there are no technologies well suited to this, limiting potential clinical applications. Aim: To test whether we could replicate previous evidence of hypo-reactivity to emotionally salient material using an entirely new sensing technique called optomyography, which is well suited to remote monitoring. Methods: Participants were 38 volunteers who met a research diagnosis of depression and 37 age-matched non-depressed controls (all aged ≥18 and ≤40 years). Changes in facial muscle activity over the brow (corrugator supercilii) and cheek (zygomaticus major) were measured whilst volunteers watched videos varying in emotional salience. Results: Across all participants, videos rated as subjectively positive were associated with activation of muscles in the cheek relative to videos rated as neutral or negative. Videos rated as subjectively negative were associated with brow activation relative to videos judged as neutral or positive. Self-reported arousal was associated with a step increase in facial muscle activation across the brow and cheek. Group differences consisted of significantly reduced activation in facial muscles during videos considered subjectively negative or rated as high arousal in depressed volunteers compared with controls. Conclusion: We demonstrate for the first time that it is possible to detect facial expression hypo-reactivity in adults with depression in response to emotional content using glasses-based optomyography sensing. It is hoped these results may encourage the use of optomyography-based sensing to track facial expressions in the real world, outside of a specialized testing environment.

6.
Sci Rep; 12(1): 16876, 2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36207524

ABSTRACT

Using a novel wearable surface electromyography (sEMG) device, we investigated induced affective states by measuring the activation of facial muscles traditionally associated with positive (left/right orbicularis and left/right zygomaticus) and negative expressions (the corrugator muscle). In a sample of 38 participants who watched 25 affective videos in a virtual reality environment, we found that sEMG amplitude varied significantly with video content across each of the three variables examined: subjective valence, subjective arousal, and objective valence measured via the validated video types (positive, neutral, and negative). sEMG amplitude from "positive muscles" increased when participants were exposed to positively valenced stimuli compared with stimuli that were negatively valenced. In contrast, activation of "negative muscles" was elevated following exposure to negatively valenced stimuli compared with positively valenced stimuli. High arousal videos increased muscle activation compared to low arousal videos in all the measured muscles except the corrugator muscle. In line with previous research, the relationship between sEMG amplitude as a function of subjective valence was V-shaped.


Subjects
Facial Muscles; Wearable Electronic Devices; Affect/physiology; Arousal/physiology; Electromyography; Emotions/physiology; Face/physiology; Facial Expression; Facial Muscles/physiology; Humans
7.
Sensors (Basel); 22(11), 2022 May 24.
Article in English | MEDLINE | ID: mdl-35684600

ABSTRACT

There is growing interest in monitoring gait patterns in people with neurological conditions. The democratisation of wearable inertial sensors has enabled the study of gait in free-living environments. One pivotal aspect of gait assessment in uncontrolled environments is the ability to accurately recognise gait instances. Previous work has focused on wavelet transform methods or general machine learning models to detect gait; the former assume a comparable gait pattern between people and the latter assume training datasets that represent a diverse population. In this paper, we argue that these approaches are unsuitable for people with severe motor impairments and their distinct gait patterns, and make the case for a lightweight personalised alternative. We propose an approach that builds on top of a general model, fine-tuning it with personalised data. A comparative proof-of-concept evaluation with general machine learning (NN and CNN) approaches and personalised counterparts showed that the latter improved the overall accuracy by 3.5% for the NN and 5.3% for the CNN. More importantly, participants who were ill-represented by the general model (the most extreme cases) had the recognition of gait instances improved by up to 16.9% for NN and 20.5% for CNN with the personalised approaches. It is common to say that people with neurological conditions, such as Parkinson's disease, present very individual motor patterns, and that in a sense they are all outliers; we expect that our results will motivate researchers to explore alternative approaches that value personalisation rather than harvesting datasets that may be able to represent these differences.
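The personalisation scheme, reduced to its essentials: fit a general model on pooled data, then continue training from those weights on the target user's own data. The toy one-feature logistic regression below is an illustrative stand-in for the paper's NN/CNN models:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.1, epochs=200):
    # Plain SGD on the logistic loss; data is a list of (x, y), y in {0, 1}.
    # Passing in (w, b) from a previous call continues training from
    # those weights, which is the fine-tuning step.
    for _ in range(epochs):
        for x, y in data:
            g = sigmoid(w * x + b) - y
            w -= lr * g * x
            b -= lr * g
    return w, b

pooled = [(-2, 0), (-1, 0), (1, 1), (2, 1)]  # data pooled across many users
user = [(1, 0), (2, 0), (4, 1), (5, 1)]      # one ill-represented user

general = train(pooled)             # general model
personal = train(user, *general)    # fine-tuned from the general weights
```

On this toy data the general decision boundary sits near zero, misclassifying the atypical user; fine-tuning shifts it toward that user's own pattern.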


Subjects
Gait; Parkinson Disease; Humans; Machine Learning; Parkinson Disease/diagnosis; Proof of Concept Study; Wavelet Analysis
8.
Sensors (Basel); 22(6), 2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35336250

ABSTRACT

Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that the monitoring of heart activity is less complex than the monitoring of breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions is still a challenge. The challenge is even greater when new wearable devices with novel sensor placements are being used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask equipped with a PPG sensor placed on the forehead of a subject. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality assessment and motion artifacts removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets that contain data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible based on PPG data acquired from a head-worn device.
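At the core of any such estimator is locating the dominant respiratory frequency in the signal. A minimal sketch using a direct DFT over an assumed 0.1-0.5 Hz respiratory band; the paper's full pipeline adds quality assessment, motion-artifact removal, and machine learning on top of this idea:

```python
import math

def breathing_rate(ppg, fs, lo=0.1, hi=0.5):
    # Return the dominant DFT frequency in the respiratory band
    # [lo, hi] Hz, converted to breaths per minute.
    n = len(ppg)
    mean = sum(ppg) / n
    x = [v - mean for v in ppg]  # remove the DC component
    k_lo = max(1, int(lo * n / fs))
    k_hi = int(hi * n / fs)
    best_f, best_p = 0.0, -1.0
    for k in range(k_lo, k_hi + 1):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = k * fs / n, power
    return best_f * 60.0
```

In practice the respiratory component is first extracted from the PPG waveform (e.g. via its baseline or amplitude modulation) before this spectral step.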


Subjects
Photoplethysmography; Respiratory Rate; Heart Rate/physiology; Humans; Machine Learning; Photoplethysmography/methods; Signal Processing, Computer-Assisted
9.
Sensors (Basel); 21(5), 2021 Mar 09.
Article in English | MEDLINE | ID: mdl-33803121

ABSTRACT

Understanding people's eating habits plays a crucial role in interventions promoting a healthy lifestyle. This requires objective measurement of the time at which a meal takes place, the duration of the meal, and what the individual eats. Smartwatches and similar wrist-worn devices are an emerging technology that offers the possibility of practical and real-time eating monitoring in an unobtrusive, accessible, and affordable way. To this end, we present a novel approach for the detection of eating segments with a wrist-worn device and fusion of deep and classical machine learning. It integrates a novel data selection method to create the training dataset, and a method that incorporates knowledge from raw and virtual sensor modalities for training with highly imbalanced datasets. The proposed method was evaluated using data from 12 subjects recorded in the wild, without any restriction on the type of meals that could be consumed, the cutlery used for the meal, or the location where the meal took place. The recordings consist of data from accelerometer and gyroscope sensors. The experiments show that our method for detection of eating segments achieves precision of 0.85, recall of 0.81, and F1-score of 0.82 in a person-independent manner. The results obtained in this study indicate that reliable eating detection using data recorded in the wild is possible with the use of wearable sensors on the wrist.
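The reported scores are the standard precision, recall, and F1 over per-segment predictions; for reference, a minimal computation:

```python
def precision_recall_f1(y_true, y_pred, positive="eating"):
    # Count true positives, false positives, and false negatives
    # for the positive class, then derive the three metrics.
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For heavily imbalanced data like all-day recordings with rare meals, these metrics are far more informative than plain accuracy, which the abstract's evaluation reflects.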

10.
Sensors (Basel); 20(18), 2020 Sep 19.
Article in English | MEDLINE | ID: mdl-32961750

ABSTRACT

Falls are a significant threat to the health and independence of elderly people and represent an enormous burden on the healthcare system. Successfully predicting falls could be of great help, yet this requires a timely and accurate fall risk assessment. Gait abnormalities are one of the best predictive signs of underlying locomotion conditions and precursors of falls. The advent of wearable sensors and wrist-worn devices provides new opportunities for continuous and unobtrusive monitoring of gait during daily activities, including the identification of unexpected changes in gait. To this end, we present in this paper a novel method for determining gait abnormalities based on a wrist-worn device and a deep neural network. It integrates convolutional and bidirectional long short-term memory layers for successful learning of spatiotemporal features from multiple sensor signals. The proposed method was evaluated using data from 18 subjects, who recorded their normal gait and simulated abnormal gait while wearing impairment glasses. The data consist of inertial measurement unit (IMU) sensor signals obtained from smartwatches that the subjects wore on both wrists. Numerous experiments showed that the proposed method provides better results than the compared methods, achieving 88.9% accuracy, 90.6% sensitivity, and 86.2% specificity in the detection of abnormal walking patterns using data from an accelerometer, gyroscope, and rotation vector sensor. These results indicate that reliable fall risk assessment is possible based on the detection of walking abnormalities with the use of wearable sensors on a wrist.
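In such a network, the convolutional layers extract local (spatial) features from the raw signals before the bidirectional LSTM layers model their temporal evolution. The spatial part reduces to a 1-D convolution, sketched here in pure Python; the actual model stacks many learned kernels and channels:

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D convolution (strictly, cross-correlation, as in
    # deep-learning libraries): one output per position where the
    # kernel fully overlaps the signal; no padding.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

A kernel like `[1, 0, -1]` acts as a simple difference detector over an acceleration trace; a trained network learns such kernels from data instead of hand-crafting them.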


Subjects
Accidental Falls/prevention & control; Deep Learning; Gait Analysis; Wearable Electronic Devices; Aged; Humans; Risk Assessment; Wrist
11.
J Biomed Inform; 73: 159-170, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28803947

ABSTRACT

Being able to detect stress as it occurs can greatly contribute to dealing with its negative health and economic consequences. However, detecting stress in real life with an unobtrusive wrist device is a challenging task. The objective of this study is to develop a method for stress detection that can accurately, continuously and unobtrusively monitor psychological stress in real life. First, we explore the problem of stress detection using machine learning and signal processing techniques in laboratory conditions, and then we apply the extracted laboratory knowledge to real-life data. We propose a novel context-based stress-detection method. The method consists of three machine-learning components: a laboratory stress detector that is trained on laboratory data and detects short-term stress every 2 min; an activity recognizer that continuously recognizes the user's activity and thus provides context information; and a context-based stress detector that uses the outputs of the laboratory stress detector, activity recognizer and other contexts, in order to provide the final decision on 20-min intervals. Experiments on 55 days of real-life data showed that the method detects (recalls) 70% of the stress events with a precision of 95%.
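The fusion step can be sketched as follows. The fixed voting threshold and the idea of discounting windows whose recognized activity is a known physiological confounder (e.g. exercise, which mimics the arousal signature of stress) are illustrative stand-ins for the paper's learned context-based detector:

```python
def interval_stress(preds, activities, confounders=("exercise",), min_votes=3):
    # preds: ten 2-min binary outputs of the laboratory stress detector
    # covering one 20-min interval.
    # activities: the recognized activity for each 2-min window.
    # Count stress votes only where the activity does not physiologically
    # mimic stress, then decide for the whole interval.
    votes = sum(1 for p, a in zip(preds, activities) if p and a not in confounders)
    return votes >= min_votes
```

The point of the context component is visible even in this sketch: the same detector outputs yield a different interval-level decision depending on what the user was doing.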


Subjects
Machine Learning; Monitoring, Physiologic; Signal Processing, Computer-Assisted; Stress, Psychological; Humans; Life Change Events; Wrist
12.
Sensors (Basel); 16(6), 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27258282

ABSTRACT

Although wearable accelerometers can successfully recognize activities and detect falls, their adoption in real life is low because users do not want to wear additional devices. A possible solution is an accelerometer inside a wrist device/smartwatch. However, wrist placement might perform poorly in terms of accuracy due to frequent random movements of the hand. In this paper we perform a thorough, large-scale evaluation of methods for activity recognition and fall detection on four datasets. On the first two we showed that the left wrist performs better compared to the dominant right one, and also better compared to the elbow and the chest, but worse compared to the ankle, knee and belt. On the third (Opportunity) dataset, our method outperformed the related work, indicating that our feature-preprocessing creates better input data. And finally, on a real-life unlabeled dataset the recognized activities captured the subject's daily rhythm and activities. Our fall-detection method detected all of the fast falls and minimized the false positives, achieving 85% accuracy on the first dataset. Because the other datasets did not contain fall events, only false positives were evaluated, resulting in 9 for the second, 1 for the third and 15 for the real-life dataset (57 days of data).
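A common baseline for accelerometer fall detection, and roughly the shape of the "fast fall" pattern described above, is a near-free-fall dip followed shortly by an impact spike in the acceleration magnitude. The thresholds and look-ahead horizon below are illustrative assumptions, not the paper's values:

```python
import math

def detect_fall(samples, free_fall_g=0.5, impact_g=2.5, horizon=20):
    # samples: (x, y, z) accelerations in g; magnitude is ~1 g at rest.
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        # Near-weightlessness followed, within `horizon` samples,
        # by a hard impact suggests a fall.
        if m < free_fall_g and any(v > impact_g for v in mags[i:i + horizon]):
            return True
    return False
```

Threshold rules like this are what produce the false positives counted in the abstract, since energetic hand movements can reproduce the same magnitude pattern on the wrist.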


Subjects
Accelerometry/instrumentation; Accidental Falls/prevention & control; Monitoring, Physiologic/instrumentation; Activities of Daily Living; Algorithms; Humans; Wearable Electronic Devices; Wrist/physiology