Results 1 - 20 of 371
1.
Sensors (Basel) ; 24(14)2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39065939

ABSTRACT

The characterization of human behavior in real-world contexts is critical for developing a comprehensive model of human health. Recent technological advancements have enabled wearables and sensors to passively and unobtrusively record and, presumably, quantify human behavior. Understanding human activities in unobtrusive and passive ways is indispensable for clarifying the relationship between behavioral determinants of health and disease. Adult individuals (N = 60) emulated the behaviors of smoking, exercising, eating, and medication (pill) taking in a laboratory setting while equipped with smartwatches that captured accelerometer data. The collected data underwent expert annotation and were used to train a deep neural network integrating convolutional and long short-term memory architectures to effectively segment time series into discrete activities. A rigorous leave-one-subject-out cross-validation procedure across participants yielded an average macro-F1 score of at least 85.1, indicating the method's high performance and its potential for real-world applications, such as identifying health behaviors and informing strategies to influence health. Collectively, we demonstrated the potential of AI and its contributing role in healthcare during the early phases of diagnosis, prognosis, and/or intervention. From predictive analytics to personalized treatment plans, AI has the potential to assist healthcare professionals in making informed decisions, leading to more efficient and tailored patient care.
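As an illustration of the evaluation protocol described above, the sketch below implements leave-one-subject-out splitting and the macro-F1 metric in plain Python; the sample tuples and label names are illustrative, not taken from the study.

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def leave_one_subject_out(samples):
    """Yield (train, test) splits, holding out one subject at a time.
    `samples` is a list of (subject_id, window, label) tuples."""
    by_subject = defaultdict(list)
    for s in samples:
        by_subject[s[0]].append(s)
    for held_out in sorted(by_subject):
        train = [s for sid, group in by_subject.items()
                 if sid != held_out for s in group]
        yield train, by_subject[held_out]
```

Each split trains on all subjects but one, so the reported score reflects generalization to unseen people rather than unseen windows.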


Subjects
Human Activities; Neural Networks, Computer; Wearable Electronic Devices; Humans; Adult; Male; Female; Accelerometry/methods; Exercise/physiology
2.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066043

ABSTRACT

Human activity recognition (HAR) is pivotal in advancing applications ranging from healthcare monitoring to interactive gaming. Traditional HAR systems, primarily relying on single data sources, face limitations in capturing the full spectrum of human activities. This study introduces a comprehensive approach to HAR by integrating two critical modalities: RGB imaging and advanced pose estimation features. Our methodology leverages the strengths of each modality to overcome the drawbacks of unimodal systems, providing a richer and more accurate representation of activities. We propose a two-stream network that processes skeletal and RGB data in parallel, enhanced by pose estimation techniques for refined feature extraction. The integration of these modalities is facilitated through advanced fusion algorithms, significantly improving recognition accuracy. Extensive experiments conducted on the UTD multimodal human action dataset (UTD-MHAD) demonstrate that the proposed approach outperforms existing state-of-the-art algorithms. This study not only sets a new benchmark for HAR systems but also highlights the importance of feature engineering, and of integrating optimal features, in capturing the complexity of human movements. Our findings pave the way for more sophisticated, reliable, and applicable HAR systems in real-world scenarios.
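The "advanced fusion algorithms" are not specified in the abstract; a minimal sketch of one common choice, weighted late fusion of per-stream softmax scores, is shown below. The logit values and the default 50/50 weighting are illustrative assumptions, not the paper's actual fusion rule.

```python
import math

def softmax(scores):
    """Convert raw class scores to probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def fuse_streams(rgb_logits, pose_logits, w_rgb=0.5):
    """Weighted late fusion of the two streams' class scores;
    returns the fused class index and the fused distribution."""
    p_rgb = softmax(rgb_logits)
    p_pose = softmax(pose_logits)
    fused = [w_rgb * a + (1 - w_rgb) * b for a, b in zip(p_rgb, p_pose)]
    return fused.index(max(fused)), fused
```

Late fusion keeps the two backbones independent; feature-level (early) fusion, as two-stream networks often use, would instead concatenate intermediate embeddings before classification.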


Subjects
Algorithms; Human Activities; Humans; Image Processing, Computer-Assisted/methods; Movement/physiology; Posture/physiology; Pattern Recognition, Automated/methods
3.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066103

ABSTRACT

As Canada's population of older adults rises, the need for aging-in-place solutions is growing due to the declining quality of long-term-care homes and long wait times. While the current standards include questionnaire-based assessments for monitoring activities of daily living (ADLs), there is an urgent need for advanced indoor localization technologies that ensure privacy. This study explores the use of Ultra-Wideband (UWB) technology for activity recognition in a mock condo in the Glenrose Rehabilitation Hospital. UWB systems with built-in Inertial Measurement Unit (IMU) sensors were tested, using anchors set up across the condo and a tag worn by patients. We tested various UWB setups, changed the number of anchors, and varied the tag placement (on the wrist or chest). Wrist-worn tags consistently outperformed chest-worn tags, and the nine-anchor configuration yielded the highest accuracy. Machine learning models were developed to classify activities based on UWB and IMU data. Models that included positional data significantly outperformed those that did not. The Random Forest model with a 4 s data window achieved an accuracy of 94%, compared to 79.2% when positional data were excluded. These findings demonstrate that incorporating positional data with IMU sensors is a promising method for effective remote patient monitoring.
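UWB localization of this kind typically derives the tag position from ranges to fixed anchors; the sketch below shows a standard linearized least-squares trilateration in 2D. The anchor coordinates are illustrative, and the study's actual positioning pipeline (with nine anchors and the vendor's solver) may differ.

```python
def trilaterate(anchors, ranges):
    """Estimate a 2D tag position from >= 3 anchor positions and measured
    ranges, by linearizing the range equations and solving the 2x2
    normal equations."""
    (x0, y0), d0 = anchors[0], ranges[0]
    # Subtract the first range equation from each remaining one to get
    # linear equations in (x, y).
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    # Normal equations A^T A p = A^T b (handles the overdetermined case).
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With more anchors (as in the nine-anchor configuration), the same normal-equations solve simply averages out individual range errors.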


Subjects
Activities of Daily Living; Machine Learning; Humans; Monitoring, Ambulatory/methods; Monitoring, Ambulatory/instrumentation; Wearable Electronic Devices; Accelerometry/instrumentation; Accelerometry/methods; Monitoring, Physiologic/methods; Monitoring, Physiologic/instrumentation
4.
Sci Rep ; 14(1): 15310, 2024 07 03.
Article in English | MEDLINE | ID: mdl-38961136

ABSTRACT

Human activity recognition has a wide range of applications in various fields, such as video surveillance, virtual reality, and human-computer intelligent interaction, and has emerged as a significant research area in computer vision. Graph convolutional networks (GCNs) have recently been widely used in these fields and have achieved strong performance. However, challenges remain, including the over-smoothing problem caused by stacked graph convolutions and insufficient semantic correlation for capturing large movements across time sequences. The Vision Transformer (ViT) has been applied to many 2D and 3D imaging tasks with impressive results. In our work, we propose a novel human activity recognition method based on ViT (HAR-ViT). We integrate the enhanced AGCL (eAGCL) from 2s-AGCN into ViT so that it can process spatio-temporal data (3D skeletons) and make full use of spatial features. The position encoder module orders the otherwise unordered information, while the Transformer encoder efficiently compresses sequence data features to enhance calculation speed. Human activity recognition is accomplished through a multi-layer perceptron (MLP) classifier. Experimental results demonstrate that the proposed method achieves SOTA performance on three extensively used datasets: NTU RGB+D 60, NTU RGB+D 120, and Kinetics-Skeleton 400.
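The "position encoder module" mentioned above is commonly realized as the sinusoidal positional encoding of the original Transformer; the sketch below shows that standard formulation, not necessarily the exact encoder used in HAR-ViT.

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding:
    PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))"""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

These fixed codes are added to the frame embeddings so the otherwise order-agnostic self-attention layers can distinguish earlier from later skeleton frames.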


Subjects
Human Activities; Humans; Neural Networks, Computer; Algorithms; Pattern Recognition, Automated/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
5.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001122

ABSTRACT

Human Activity Recognition (HAR), alongside Ambient Assisted Living (AAL), is an integral component of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise elderly privacy, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals for different daily activities and provide potential information through time-frequency analysis. This time series information is mapped into spectral images ('scalograms') derived from the continuous wavelet transform. Deep activity features are extracted from the activity image using deep learning models such as CNN, MobileNetV3, ResNet, and GoogLeNet, and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
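A scalogram of the kind described is the matrix of continuous-wavelet-transform magnitudes across scales; the direct (unoptimized) computation with a complex Morlet mother wavelet can be sketched as follows. The centre frequency w0 = 6 and the scale values are assumptions for illustration, not the paper's settings.

```python
import math, cmath

def morlet(t, scale, w0=6.0):
    """Scaled complex Morlet mother wavelet."""
    x = t / scale
    return cmath.exp(1j * w0 * x) * math.exp(-0.5 * x * x) / math.sqrt(scale)

def scalogram(signal, scales, dt=1.0):
    """|CWT| magnitudes: one row per scale. The resulting 2D array is the
    'image' that would be fed to a CNN."""
    n = len(signal)
    rows = []
    for s in scales:
        row = []
        for tau in range(n):             # translation (time shift)
            acc = 0j
            for t in range(n):           # direct-sum convolution, O(n^2)
                acc += signal[t] * morlet((t - tau) * dt, s).conjugate()
            row.append(abs(acc) * dt)
        rows.append(row)
    return rows
```

In practice an FFT-based CWT (e.g. via a signal-processing library) replaces the O(n²) inner loop, but the output is the same scale-by-time magnitude map.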


Subjects
Human Activities; Wavelet Analysis; Humans; Human Activities/classification; Algorithms; Deep Learning; Wearable Electronic Devices; Activities of Daily Living; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
6.
Data Brief ; 55: 110621, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39006348

ABSTRACT

The Timed Up and Go (TUG) test is one of the most popular clinical tools for assessing functional mobility and fall risk in older adults. Automating the analysis of TUG movements is of great medical interest, not only to speed up the test but also to maximize the information inferred about the subjects under study. In this context, this article describes a dataset collected from a cohort of 69 experimental subjects (including 30 adults over 60 years) during the execution of several repetitions of the TUG test. In particular, the dataset includes the measurements gathered with four wearable devices, each embedding four sensors (accelerometer, gyroscope, magnetometer, and barometer), located at four body positions (waist, wrist, ankle, and chest). As a particularity, the dataset also includes the same measurements recorded when the young subjects repeat the test while wearing a commercial geriatric simulator, consisting of a set of weighted vests and other elements intended to replicate the limitations caused by aging. Thus, the generated dataset also enables investigation into the potential of such tools to emulate the actual dynamics of older individuals.

7.
Data Brief ; 55: 110673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049967

ABSTRACT

Human Activity Recognition (HAR) has emerged as a critical research area due to its extensive applications in various real-world domains. Numerous CSI-based datasets have been established to support the development and evaluation of advanced HAR algorithms. However, existing CSI-based HAR datasets are frequently limited by a dearth of complexity and diversity in the activities represented, hindering the design of robust HAR models. These limitations typically manifest as a narrow focus on a limited range of activities or the exclusion of factors influencing real-world CSI measurements. Consequently, the scarcity of diverse training data can impede the development of efficient HAR systems. To address the limitations of existing datasets, this paper introduces a novel dataset that captures spatial diversity through multiple transceiver orientations over a high-dimensional space encompassing a large number of subcarriers. The dataset incorporates a wider range of real-world factors, including an extensive activity range, a spectrum of human movements (encompassing both micro- and macro-movements), variations in body composition, and diverse environmental conditions (noise and interference). The experiment is performed in a controlled laboratory environment with dimensions of 5 m (width) × 8 m (length) × 3 m (height) to capture CSI measurements for various human activities. Four ESP32-S3-DevKitC-1 devices, configured as transceiver pairs with unique Media Access Control (MAC) addresses, collect CSI data according to the Wi-Fi IEEE 802.11n standard. Mounted on tripods at a height of 1.5 m, the transmitter devices (powered by external power banks) positioned at north and east send multiple Wi-Fi beacons to their respective receivers (connected to laptops via USB for data collection) located at south and west.
To capture multi-perspective CSI data, all six participants sequentially performed designated activities while standing in the centre of the tripod arrangement for 5 s per sample. The system collected approximately 300-450 packets per sample for approximately 1200 samples per activity, capturing CSI information across the 166 subcarriers employed in the Wi-Fi IEEE 802.11n standard. By leveraging the richness of this dataset, HAR researchers can develop more robust and generalizable CSI-based HAR models. Compared to traditional HAR approaches, these CSI-based models hold the promise of significantly enhanced accuracy and robustness when deployed in real-world scenarios. This stems from their ability to capture the nuanced dynamics of human movement through the analysis of wireless channel characteristics from different spatial variations (utilizing a two-diagonal ESP32 transceiver configuration) with a higher degree of dimensionality (166 subcarriers).

8.
Int J Behav Nutr Phys Act ; 21(1): 77, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020353

ABSTRACT

BACKGROUND: The more accurately we can assess human physical behaviour in free-living conditions, the better we can understand its relationship with health and wellbeing. Thigh-worn accelerometry can be used to identify basic activity types as well as different postures with high accuracy. User-friendly software without the need for specialized programming may support the adoption of this method. This study aims to evaluate the classification accuracy of two novel no-code classification methods, namely SENS motion and ActiPASS. METHODS: A sample of 38 healthy adults (30.8 ± 9.6 years; 53% female) wore the SENS motion accelerometer (12.5 Hz; ±4 g) on their thigh during various physical activities. Participants completed standardized activities with varying intensities in the laboratory. Activities included walking, running, cycling, sitting, standing, and lying down. Subsequently, participants performed unrestricted free-living activities outside of the laboratory while being video-recorded with a chest-mounted camera. Videos were annotated using a predefined labelling scheme, and the annotations served as the reference for the free-living condition. Classification output from the SENS motion and ActiPASS software was compared to the reference labels. RESULTS: A total of 63.6 h of activity data were analysed. We observed a high level of agreement between the two classification algorithms and their respective references in both conditions. In the free-living condition, Cohen's kappa coefficients were 0.86 for SENS and 0.92 for ActiPASS. The mean balanced accuracy ranged from 0.81 (cycling) to 0.99 (running) for SENS and from 0.92 (walking) to 0.99 (sedentary) for ActiPASS across all activity types. CONCLUSIONS: The study shows that the two available no-code classification methods can be used to accurately identify basic physical activity types and postures. Our results highlight the accuracy of both methods based on relatively low-sampling-frequency data.
The classification methods showed differences in performance, with lower sensitivity observed for free-living cycling (SENS) and slow treadmill walking (ActiPASS). The two methods use different sets of activity classes with varying definitions, which may explain the observed differences. Our results support the use of the SENS motion system and of both no-code classification methods.
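The two agreement metrics reported above, Cohen's kappa and balanced accuracy, can be computed as follows; the activity labels in the example are illustrative.

```python
def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between reference labels and predictions:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(y_true)
    classes = sorted(set(y_true) | set(y_pred))
    po = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in classes)
    return (po - pe) / (1 - pe)

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall, robust to class imbalance (e.g. far more
    sitting than running in free-living data)."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)
```

Balanced accuracy is the natural choice here because free-living recordings are dominated by sedentary time, which plain accuracy would reward.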


Subjects
Accelerometry; Exercise; Thigh; Walking; Humans; Female; Male; Adult; Accelerometry/methods; Exercise/physiology; Walking/physiology; Young Adult; Algorithms; Software; Running/physiology; Bicycling/physiology; Posture
9.
Heliyon ; 10(13): e33295, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39027497

ABSTRACT

Study objectives: To develop a non-invasive and practical wearable method for long-term tracking of infants' sleep. Methods: An infant wearable, NAPping PAnts (NAPPA), was constructed by combining a diaper cover and a movement sensor (triaxial accelerometer and gyroscope), allowing either real-time data streaming to mobile devices or offline feature computation stored in the sensor memory. A sleep state classifier (wake, N1/REM, N2/N3) was trained and tested on NAPPA recordings (N = 16,649 epochs of 30 s), using hypnograms from co-registered polysomnography (PSG) as the training target in 33 infants (age 2 weeks to 18 months; Mean = 4). User experience was assessed in an additional group of 16 parents. Results: Overnight NAPPA recordings were successfully performed in all infants. The sleep state classifier showed good overall accuracy (78%; Range 74-83%) when using a combination of five features related to movement and respiration. Sleep depth trends were generated from the classifier outputs to visualise sleep state fluctuations, and these closely aligned with the PSG-derived hypnograms in all infants. Consistently positive parental feedback affirmed the effectiveness of the NAPPA design. Conclusions: NAPPA offers a practical and feasible method for out-of-hospital assessment of infants' sleep behaviour. It can directly support large-scale quantitative studies and the development of new paradigms in scientific research and infant healthcare. Moreover, NAPPA provides accurate and informative computational measures of body position, respiration rate, and activity level, each with its respective clinical and behavioural value.

10.
Sensors (Basel) ; 24(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38894162

ABSTRACT

Composite indoor human activity recognition is very important in elderly health monitoring and is more difficult than identifying individual human movements. This article proposes a sensor-based indoor human activity recognition method that integrates indoor positioning. Convolutional neural networks are used to extract spatial information contained in geomagnetic and ambient light sensor data, while Transformer encoders are used to extract temporal motion features collected by gyroscopes and accelerometers. We established an indoor activity recognition model with a multimodal feature fusion structure. To explore the possibility of using only smartphones to complete the above tasks, we collected and established a multisensor indoor activity dataset. Extensive experiments verified the effectiveness of the proposed method. Compared with algorithms that do not consider location information, our method achieves a 13.65% improvement in recognition accuracy.


Subjects
Accelerometry; Algorithms; Human Activities; Neural Networks, Computer; Smartphone; Humans; Accelerometry/instrumentation; Accelerometry/methods; Monitoring, Physiologic/instrumentation; Monitoring, Physiologic/methods
11.
Sensors (Basel) ; 24(11)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38894451

ABSTRACT

This study explored an indoor system for tracking multiple humans and detecting falls, employing three millimeter-wave radars from Texas Instruments. Compared to wearable and camera-based methods, millimeter-wave radar is not plagued by mobility inconveniences, lighting conditions, or privacy issues. We conducted an initial evaluation of radar characteristics, covering aspects such as interference between radars and coverage area. We then established a real-time framework to integrate the signals received from these radars, allowing us to track the position and body status of human targets non-intrusively. Additionally, we introduced innovative strategies, including dynamic Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering based on signal SNR levels, a probability matrix for enhanced target tracking, target status prediction for fall detection, and a feedback loop for noise reduction. We conducted an extensive evaluation using over 300 min of data, equating to approximately 360,000 frames. Our prototype system exhibited remarkable performance, achieving a precision of 98.9% for tracking a single target, and 96.5% and 94.0% for tracking two and three targets, respectively. Moreover, for human fall detection, the system demonstrated a high accuracy rate of 96.3%, underscoring its effectiveness in distinguishing falls from other statuses.
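A minimal DBSCAN pass over 2D radar point clouds can be sketched as below; the `adaptive_eps` rule is a hypothetical stand-in for the paper's SNR-based dynamic clustering, not its actual formula.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points; returns one label per point
    (-1 marks noise)."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise absorbed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                queue.extend(j_nbrs)  # expand from core points only
    return labels

def adaptive_eps(base_eps, snr_db, ref_snr_db=20.0):
    """Hypothetical dynamic radius: widen clusters when SNR drops and
    the point cloud becomes sparser."""
    return base_eps * max(1.0, ref_snr_db / max(snr_db, 1.0))
```

Each resulting cluster is then treated as one human target candidate, which the tracker associates across frames.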

12.
PeerJ Comput Sci ; 10: e2100, 2024.
Article in English | MEDLINE | ID: mdl-38855220

ABSTRACT

Portable devices like accelerometers and physiological trackers capture movement and biometric data relevant to sports. This study uses data from wearable sensors to investigate deep learning techniques for recognizing human behaviors associated with sports and fitness. The proposed CNN-BiGRU-CBAM model, a unique hybrid architecture, combines convolutional neural networks (CNNs), bidirectional gated recurrent unit networks (BiGRUs), and convolutional block attention modules (CBAMs) for accurate activity recognition. CNN layers extract spatial patterns, the BiGRU captures temporal context, and the CBAM focuses on informative BiGRU features, enabling precise activity pattern identification. The novelty lies in seamlessly integrating these components to learn spatial and temporal relationships while prioritizing significant features for activity detection. The model and baseline deep learning models were trained on the UCI-DSA dataset and evaluated with 5-fold cross-validation using multi-class classification accuracy, precision, recall, and F1-score. The CNN-BiGRU-CBAM model outperformed baseline models such as CNN, LSTM, BiLSTM, GRU, and BiGRU, achieving state-of-the-art results with 99.10% accuracy and F1-score across all activity classes. This enables accurate identification of sports and everyday activities using simple wearables and advanced deep learning techniques, facilitating athlete monitoring, technique feedback, and injury risk detection. The proposed model's design and thorough evaluation significantly advance human activity recognition for sports and fitness.

13.
Sensors (Basel) ; 24(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931675

ABSTRACT

Human Activity Recognition (HAR) plays an important role in the automation of various tasks related to activity tracking in areas such as healthcare and eldercare (telerehabilitation, telemonitoring), security, ergonomics, entertainment (fitness, sports promotion, human-computer interaction, video games), and intelligent environments. This paper tackles the problem of real-time recognition and repetition counting of 12 types of exercises performed during athletic workouts. Our approach is based on a deep neural network model fed by the signal from a 9-axis motion sensor (IMU) placed on the chest. The model can be run on mobile platforms (iOS, Android). We discuss design requirements for the system and their impact on data collection protocols. We present an architecture based on an encoder pretrained with contrastive learning. Compared to end-to-end training, the presented approach significantly improves the developed model's quality in terms of accuracy (F1 score, MAPE) and robustness (false-positive rate) during background activity. We make the AIDLAB-HAR dataset publicly available to encourage further research.
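Repetition counting on IMU signals is often reduced to thresholded peak detection on a filtered channel; the sketch below is a simple stand-in for that idea, not the paper's neural counter, and the threshold and gap values are illustrative.

```python
def count_reps(signal, threshold, min_gap):
    """Count repetitions as local maxima above `threshold` that are at
    least `min_gap` samples apart (a refractory period that suppresses
    double-counting within one movement cycle)."""
    reps, last = 0, -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            reps += 1
            last = i
    return reps
```

The `min_gap` parameter encodes the minimum plausible duration of one repetition, which is why data collection protocols (exercise pace, sensor placement) matter for such counters.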


Subjects
Human Activities; Neural Networks, Computer; Telemedicine; Humans; Exercise/physiology; Algorithms
14.
Sensors (Basel) ; 24(12)2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38931682

ABSTRACT

Monitoring activities of daily living (ADLs) plays an important role in measuring and responding to a person's ability to manage their basic physical needs. Effective recognition systems for monitoring ADLs must successfully recognize naturalistic activities that also realistically occur at infrequent intervals. However, existing systems primarily focus on either recognizing more separable, controlled activity types or are trained on balanced datasets where activities occur more frequently. In our work, we investigate the challenges associated with applying machine learning to an imbalanced dataset collected from a fully in-the-wild environment. This analysis shows that the combination of preprocessing techniques to increase recall and postprocessing techniques to increase precision can result in more desirable models for tasks such as ADL monitoring. In a user-independent evaluation using in-the-wild data, these techniques resulted in a model that achieved an event-based F1-score of over 0.9 for brushing teeth, combing hair, walking, and washing hands. This work tackles fundamental challenges in machine learning that will need to be addressed in order for these systems to be deployed and reliably work in the real world.
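One postprocessing technique that raises precision on rare, naturalistic events is discarding predicted-positive runs that are too short to be a real activity instance; the sketch below illustrates the idea on frame-level labels. The activity name and run-length threshold are illustrative, not the paper's values.

```python
def smooth_events(frame_preds, min_len=3, positive="brush_teeth"):
    """Postprocessing pass: relabel predicted-positive runs shorter than
    `min_len` frames as 'other', trading a little recall for precision
    on infrequent ADLs."""
    out = list(frame_preds)
    i = 0
    while i < len(out):
        if out[i] == positive:
            j = i
            while j < len(out) and out[j] == positive:
                j += 1               # scan to the end of the positive run
            if j - i < min_len:
                for k in range(i, j):
                    out[k] = "other"  # run too short: treat as spurious
            i = j
        else:
            i += 1
    return out
```

Paired with recall-boosting preprocessing (e.g. oversampling rare classes during training), this kind of filter shapes frame-level output into cleaner event-level predictions.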


Subjects
Activities of Daily Living; Human Activities; Machine Learning; Humans; Algorithms; Walking/physiology; Pattern Recognition, Automated/methods
15.
Sensors (Basel) ; 24(12)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38931728

ABSTRACT

There has been a resurgence of applications focused on human activity recognition (HAR) in smart homes, especially in the field of ambient intelligence and assisted-living technologies. However, such applications present numerous significant challenges to any automated analysis system operating in the real world, such as variability, sparsity, and noise in sensor measurements. Although state-of-the-art HAR systems have made considerable strides in addressing some of these challenges, they suffer from a practical limitation: they require successful pre-segmentation of continuous sensor data streams prior to automated recognition, i.e., they assume that an oracle is present during deployment, and that it is capable of identifying time windows of interest across discrete sensor events. To overcome this limitation, we propose a novel graph-guided neural network approach that performs activity recognition by learning explicit co-firing relationships between sensors. We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home in a data-driven manner. Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms and hierarchical pooling of node embeddings. We demonstrate the effectiveness of our proposed approach by conducting several experiments on CASAS datasets, showing that the resulting graph-guided neural network outperforms the state-of-the-art method for HAR in smart homes across multiple datasets and by large margins. These results are promising because they push HAR for smart homes closer to real-world applications.


Subjects
Human Activities; Neural Networks, Computer; Humans; Algorithms; Pattern Recognition, Automated/methods
16.
Sci Rep ; 14(1): 14006, 2024 06 18.
Article in English | MEDLINE | ID: mdl-38890409

ABSTRACT

Smartphone sensors have gained considerable traction in Human Activity Recognition (HAR), drawing attention for their diverse applications. Accelerometer data monitoring holds promise in understanding students' physical activities, fostering healthier lifestyles. This technology tracks exercise routines, sedentary behavior, and overall fitness levels, potentially encouraging better habits, preempting health issues, and bolstering students' well-being. Traditionally, HAR involved analyzing signals linked to physical activities using handcrafted features. However, recent years have witnessed the integration of deep learning into HAR tasks, leveraging digital physiological signals from smartwatches and learning features automatically from raw sensory data. The Long Short-Term Memory (LSTM) network stands out as a potent algorithm for analyzing physiological signals, promising improved accuracy and scalability in automated signal analysis. In this article, we propose a feature analysis framework for recognizing student activity and monitoring health based on smartphone accelerometer data through an edge computing platform. Our objective is to boost HAR performance by accounting for the dynamic nature of human behavior. Nonetheless, the current LSTM network's presetting of hidden units and initial learning rate relies on prior knowledge, potentially leading to suboptimal states. To counter this, we employ a Bidirectional LSTM (BiLSTM), enhancing sequence processing models. Furthermore, Bayesian optimization aids in fine-tuning the BiLSTM model architecture. Through fivefold cross-validation, our model achieves a classification accuracy of 97.5% on the test dataset. Moreover, edge computing offers real-time processing, reduced latency, enhanced privacy, bandwidth efficiency, offline capabilities, energy efficiency, personalization, and scalability.
Extensive experimental results validate that our proposed approach surpasses state-of-the-art methodologies in recognizing human activities and monitoring health based on smartphone accelerometer data.


Subjects
Accelerometry; Exercise; Smartphone; Students; Humans; Accelerometry/methods; Accelerometry/instrumentation; Exercise/physiology; Deep Learning; Algorithms; Monitoring, Physiologic/methods; Monitoring, Physiologic/instrumentation
17.
Sensors (Basel) ; 24(9)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38732771

ABSTRACT

Human activity recognition (HAR) technology enables continuous behavior monitoring, which is particularly valuable in healthcare. This study investigates the viability of using an ear-worn motion sensor for classifying daily activities, including lying, sitting/standing, walking, ascending stairs, descending stairs, and running. Fifty healthy participants (between 20 and 47 years old) engaged in these activities while under monitoring. Various machine learning algorithms, ranging from interpretable shallow models to state-of-the-art deep learning approaches designed for HAR (i.e., DeepConvLSTM and ConvTransformer), were employed for classification. The results demonstrate the ear sensor's efficacy, with deep learning models achieving a 98% accuracy rate of classification. The obtained classification models are agnostic regarding which ear the sensor is worn and robust against moderate variations in sensor orientation (e.g., due to differences in auricle anatomy), meaning no initial calibration of the sensor orientation is required. The study underscores the ear's efficacy as a suitable site for monitoring human daily activity and suggests its potential for combining HAR with in-ear vital sign monitoring. This approach offers a practical method for comprehensive health monitoring by integrating sensors in a single anatomical location. This integration facilitates individualized health assessments, with potential applications in tele-monitoring, personalized health insights, and optimizing athletic training regimes.
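Robustness to sensor orientation, as reported above, can come from features of the acceleration magnitude, which is invariant under any rotation of the sensor frame; the sketch below shows such features in plain Python. These are illustrative handcrafted features, not the features learned by the study's deep models.

```python
import math

def orientation_invariant_features(window):
    """Summary features of the acceleration magnitude |a| over a window of
    (x, y, z) samples. |a| is unchanged by rotating the sensor, so these
    features are insensitive to how the ear-piece is oriented."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return {"mean": mean, "std": math.sqrt(var), "range": max(mags) - min(mags)}
```

Deep models can learn comparable invariances from rotation-augmented training data, which is one way classifiers become agnostic to which ear carries the sensor.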


Subjects
Wearable Electronic Devices; Humans; Adult; Male; Female; Middle Aged; Young Adult; Human Activities; Ear/physiology; Algorithms; Activities of Daily Living; Machine Learning; Deep Learning; Monitoring, Physiologic/instrumentation; Monitoring, Physiologic/methods; Motion; Walking/physiology
18.
Sensors (Basel) ; 24(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38793858

ABSTRACT

Inertial signals are the most widely used signals in human activity recognition (HAR) applications, and extensive research has been performed on developing HAR classifiers using accelerometer and gyroscope data. This study aimed to investigate the potential enhancement of HAR models through the fusion of biological signals with inertial signals. The classification of eight common low-, medium-, and high-intensity activities was assessed using machine learning (ML) algorithms, trained on accelerometer (ACC), blood volume pulse (BVP), and electrodermal activity (EDA) data obtained from a wrist-worn sensor. Two types of ML algorithms were employed: a random forest (RF) trained on features; and a pre-trained deep learning (DL) network (ResNet-18) trained on spectrogram images. Evaluation was conducted on both individual activities and more generalized activity groups, based on similar intensity. Results indicated that RF classifiers outperformed corresponding DL classifiers at both individual and grouped levels. However, the fusion of EDA and BVP signals with ACC data improved DL classifier performance compared to a baseline DL model with ACC-only data. The best performance was achieved by a classifier trained on a combination of ACC, EDA, and BVP images, yielding F1-scores of 69 and 87 for individual and grouped activity classifications, respectively. For DL models trained with additional biological signals, almost all individual activity classifications showed improvement (p-value < 0.05). In grouped activity classifications, DL model performance was enhanced for low- and medium-intensity activities. Exploring the classification of two specific activities, ascending/descending stairs and cycling, revealed significantly improved results using a DL model trained on combined ACC, BVP, and EDA spectrogram images (p-value < 0.05).


Subjects
Accelerometry , Algorithms , Machine Learning , Photoplethysmography , Humans , Photoplethysmography/methods , Accelerometry/methods , Male , Adult , Signal Processing, Computer-Assisted , Female , Human Activities , Galvanic Skin Response/physiology , Wearable Electronic Devices , Young Adult
19.
Sci Rep ; 14(1): 12411, 2024 05 30.
Article in English | MEDLINE | ID: mdl-38816446

ABSTRACT

Knowledge distillation is an effective approach for training robust multimodal machine learning models when synchronous multimodal data are unavailable. However, traditional knowledge distillation techniques have limitations in comprehensively transferring knowledge across modalities and models. This paper proposes a multiscale knowledge distillation framework to address these limitations. Specifically, we introduce a multiscale semantic graph mapping (SGM) loss function to enable more comprehensive knowledge transfer between teacher and student networks at multiple feature scales. We also design a fusion and tuning (FT) module to fully exploit correlations within and between different data types of the same modality when training teacher networks. Furthermore, we adopt transformer-based backbones to improve feature learning over traditional convolutional neural networks. Applying the proposed techniques to multimodal human activity recognition improves performance over the baseline method by 2.31% on the MMAct dataset and 0.29% on the UTD-MHAD dataset. Ablation studies validate the necessity of each component.
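One plausible reading of a semantic-graph-mapping loss is that, at each feature scale, the teacher's and student's batchwise similarity graphs are compared, so knowledge about inter-sample relations transfers even when the feature dimensions differ. The `similarity_graph` and `sgm_loss` helpers below are an illustrative NumPy assumption, not the paper's exact formulation.

```python
import numpy as np

def similarity_graph(feats):
    """Cosine-similarity graph over a batch of feature vectors (B, D)."""
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return normed @ normed.T  # (B, B) adjacency, dimension-independent

def sgm_loss(teacher_feats, student_feats):
    """Multiscale graph-matching loss (sketch): at every feature scale,
    penalize the squared gap between teacher and student graphs."""
    total = 0.0
    for t, s in zip(teacher_feats, student_feats):
        g_t, g_s = similarity_graph(t), similarity_graph(s)
        total += np.mean((g_t - g_s) ** 2)
    return total / len(teacher_feats)

# Features from three scales; dims differ, but every graph is (B, B).
rng = np.random.default_rng(1)
teacher = [rng.standard_normal((8, d)) for d in (256, 512, 1024)]
student = [rng.standard_normal((8, d)) for d in (256, 512, 1024)]
print(sgm_loss(teacher, teacher))  # 0.0 for identical features
print(sgm_loss(teacher, student) > 0)  # True
```

Because only the (B, B) graphs are matched, no projection layer is needed between teacher and student feature spaces at each scale.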


Subjects
Human Activities , Machine Learning , Neural Networks, Computer , Humans , Algorithms , Attention
20.
Data Brief ; 54: 110364, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38590617

ABSTRACT

Shadow, a natural phenomenon resulting from the absence of direct lighting, finds diverse real-world applications beyond computer vision, such as studying its effect on photosynthesis in plants and on the reduction of solar energy harvesting through photovoltaic panels. This article presents a dataset comprising 50,000 pairs of photorealistic computer-rendered images along with their corresponding physics-based shadow masks, primarily focused on agricultural settings with human activity in the field. The images are generated by simulating a scene in 3D modeling software to produce a pair of top-down images, consisting of a regular image and an overexposed image achieved by adjusting lighting parameters. Specifically, the strength of the light source representing the sun is increased, and all indirect lighting, including global illumination and light bouncing, is disabled. The resulting overexposed image is later converted into a physically accurate shadow mask with minimal annotation errors through post-processing techniques. This dataset holds promise for future research, serving as a basis for transfer learning or as a benchmark for model evaluation in the realm of shadow-related applications such as shadow detection and removal.
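The post-processing step can be sketched as a luminance threshold: with indirect lighting disabled and the sun's strength boosted, directly lit pixels saturate toward white while shadowed pixels stay dark. The `shadow_mask` helper and its 0.9 threshold below are illustrative assumptions, not the dataset's exact procedure.

```python
import numpy as np

def shadow_mask(overexposed, threshold=0.9):
    """Derive a binary shadow mask from an overexposed render.

    overexposed: (H, W, 3) float RGB image in [0, 1]. With indirect
    lighting disabled, any pixel whose luminance stays below the
    threshold never received direct sunlight, i.e. it is in shadow.
    """
    # Rec. 601 luma weights for RGB -> luminance.
    luma = overexposed @ np.array([0.299, 0.587, 0.114])
    return (luma < threshold).astype(np.uint8)  # 1 = shadow

# Toy 2x2 "render": one saturated and one dark pixel per row.
img = np.array([[[1.00, 1.00, 1.00], [0.10, 0.10, 0.10]],
                [[0.95, 0.95, 0.95], [0.20, 0.20, 0.20]]])
mask = shadow_mask(img)
print(mask)  # [[0 1]
             #  [0 1]]
```

A real pipeline would likely add morphological cleanup of the thresholded mask to suppress single-pixel noise before using it as ground truth.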
