Results 1 - 20 of 30
1.
Heliyon ; 10(8): e29398, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38655356

ABSTRACT

The automatic identification of human physical activities, commonly referred to as Human Activity Recognition (HAR), has garnered significant interest and application across various sectors, including entertainment, sports, and notably health. Within the realm of health, a myriad of applications exists, contingent upon the nature of experimentation, the activities under scrutiny, and the methodology employed for data and information acquisition. This diversity opens doors to multifaceted applications, including support for the well-being and safeguarding of elderly individuals afflicted with neurodegenerative diseases, especially in the context of smart homes. Within the existing literature, a multitude of datasets from both indoor and outdoor environments have surfaced, significantly contributing to activity identification. One prominent dataset, from the CASAS project developed by Washington State University (WSU), encompasses experiments conducted in indoor settings. This dataset facilitates the identification of a range of activities, such as cleaning, cooking, eating, washing hands, and even making phone calls. This article introduces a model founded on the principles of semi-supervised ensemble learning, harnessing the potential of distance-based clustering analysis. This technique aids in the identification of distinct clusters, each encapsulating unique activity characteristics. These clusters serve as pivotal inputs for the subsequent classification process, which leverages supervised techniques. The outcomes of this approach are promising, as evidenced by the analysis of quality metrics, which shows favorable results compared to existing state-of-the-art methods. This integrated framework not only contributes to the field of HAR but also holds considerable potential for enhancing the capabilities of smart homes and related applications.
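The cluster-then-classify idea summarized in this abstract can be illustrated with a minimal, hedged sketch (this is not the authors' implementation): distance-based cluster assignments computed on the feature windows are appended as an extra input to a supervised ensemble. The data, feature count, cluster count, and classifier below are placeholders chosen only for illustration.

```python
# Minimal cluster-then-classify sketch (illustrative only, not the paper's model);
# data and hyperparameters are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))       # stand-in for per-window sensor features
y = rng.integers(0, 5, size=600)     # stand-in activity labels

# Unsupervised step: distance-based clustering over all windows.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Supervised step: the cluster assignment becomes an additional feature.
X_aug = np.column_stack([X, clusters])
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```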

2.
Sensors (Basel) ; 23(22)2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38005488

ABSTRACT

By observing the actions taken by operators, it is possible to determine the risk level of a work task. One method for achieving this is the recognition of human activity using biosignals and inertial measurements provided to a machine learning algorithm performing such recognition. The aim of this research is to propose a method to automatically recognize physical exertion and reduce noise as much as possible towards the automation of the Job Strain Index (JSI) assessment by using a motion capture wearable device (MindRove armband) and training a quadratic support vector machine (QSVM) model, which is responsible for predicting the exertion depending on the patterns identified. The highest accuracy of the QSVM model was 95.7%, which was achieved by filtering the data, removing outliers and offsets, and performing zero calibration; in addition, EMG signals were normalized. It was determined that, given the job strain index's purpose, physical exertion detection is crucial to computing its intensity in future work.
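As an illustration of the kind of pipeline described here (standardized, normalized signal features fed to a quadratic SVM), the following hedged sketch uses scikit-learn's polynomial-kernel SVC with degree 2 as a stand-in for a QSVM; the features and labels are synthetic placeholders, not MindRove recordings.

```python
# Hedged sketch: a degree-2 polynomial SVM as a stand-in for the QSVM in the
# abstract; per-window features and exertion labels are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))    # e.g., per-window EMG/IMU statistics
y = rng.integers(0, 2, size=400)  # exertion vs. no exertion (placeholder)

qsvm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
scores = cross_val_score(qsvm, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```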


Subjects
Ergonomics, Physical Exertion, Humans, Electromyography/methods, Ergonomics/methods, Algorithms, Machine Learning
3.
Sensors (Basel) ; 23(17)2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37687949

ABSTRACT

The recognition of human activities (HAR) using wearable device data, such as smartwatches, has gained significant attention in the field of computer science due to its potential to provide insights into individuals' daily activities. This article aims to conduct a comparative study of deep learning techniques for recognizing activities of daily living (ADL). A mapping of HAR techniques was performed, and three techniques were selected for evaluation, along with a dataset. Experiments were conducted using the selected techniques to assess their performance in ADL recognition, employing standardized evaluation metrics, such as accuracy, precision, recall, and F1-score. Among the evaluated techniques, the DeepConvLSTM architecture, consisting of recurrent convolutional layers and a single LSTM layer, achieved the most promising results. These findings suggest that software applications utilizing this architecture can assist smartwatch users in understanding their movement routines more quickly and accurately.
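A minimal Keras sketch of a DeepConvLSTM-style architecture (stacked 1D convolutions followed by a single LSTM layer), assuming windowed tri-axial accelerometer input; TensorFlow/Keras, the window shape, and all layer sizes are assumptions for illustration and are not taken from the paper.

```python
# Illustrative DeepConvLSTM-style model: recurrent convolutional stack with a
# single LSTM layer; shapes and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

n_timesteps, n_channels, n_classes = 128, 3, 6   # placeholder window shape

model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.LSTM(128),                              # single recurrent layer
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```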


Subjects
Activities of Daily Living, Deep Learning, Humans, Recognition (Psychology), Benchmarking, Movement
4.
Sensors (Basel) ; 23(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177616

ABSTRACT

Human Activity Recognition (HAR) is a complex problem in deep learning, and One-Dimensional Convolutional Neural Networks (1D CNNs) have emerged as a popular approach for addressing it. These networks efficiently learn features from data that can be utilized to classify human activities with high performance. However, understanding and explaining the features learned by these networks remains a challenge. This paper presents a novel eXplainable Artificial Intelligence (XAI) method for generating visual explanations of the features learned by one-dimensional CNNs during their training process, utilizing t-Distributed Stochastic Neighbor Embedding (t-SNE). By applying this method, we provide insights into the decision-making process by visualizing the information obtained from the model's deepest layer before classification. Our results demonstrate that the features learned from one dataset can be applied to differentiate human activities in other datasets. Our trained networks achieved high performance on two public databases, with 0.98 accuracy on the SHO dataset and 0.93 accuracy on the HAPT dataset. The visualization method proposed in this work offers a powerful means to detect bias issues or explain incorrect predictions. This work introduces a new type of XAI application, enhancing the reliability and practicality of CNN models in real-world scenarios.
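The visualization idea (projecting the deepest pre-classification layer with t-SNE) can be sketched as below. The model, the layer named "features", and the random input windows are placeholders for illustration; a real use would substitute a trained 1D CNN and genuine sensor windows.

```python
# Hedged sketch: embed with t-SNE the activations of the layer just before the
# classifier; model, layer name, and data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

n_timesteps, n_channels, n_classes = 128, 3, 6
model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalAveragePooling1D(name="features"),  # deepest layer before classification
    layers.Dense(n_classes, activation="softmax"),
])

rng = np.random.default_rng(2)
X = rng.normal(size=(300, n_timesteps, n_channels)).astype("float32")
y = rng.integers(0, n_classes, size=300)

# Extract the learned representation and project it to 2D.
extractor = tf.keras.Model(model.input, model.get_layer("features").output)
feats = extractor.predict(X, verbose=0)
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)

plt.scatter(emb[:, 0], emb[:, 1], c=y, s=8)
plt.title("t-SNE of learned features (illustrative)")
plt.show()
```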


Subjects
Artificial Intelligence, Neural Networks (Computer), Humans, Reproducibility of Results, Human Activities, Databases (Factual)
5.
Sensors (Basel) ; 23(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772438

ABSTRACT

Recently, the scientific community has placed great emphasis on the recognition of human activity, especially in the area of health and care for the elderly. There are already practical applications for recognizing activities and unusual conditions that use body sensors such as wrist-worn devices or neck pendants. These relatively simple devices may be prone to errors, might be uncomfortable to wear, might be forgotten or not worn, and are unable to detect more subtle conditions such as incorrect postures. Therefore, other proposed methods are based on the use of images and videos to carry out human activity recognition, even in open spaces and with multiple people. However, the resulting increase in the size and complexity of image data requires the use of recent, advanced machine learning and deep learning techniques. This paper presents a deep learning approach with attention for recognizing activities from multiple frames. Feature extraction is performed by estimating the pose of the human skeleton, and classification is performed using a neural network based on Bidirectional Encoder Representations from Transformers (BERT). The algorithm was trained with the UP-Fall public dataset, generating more balanced artificial data with a Generative Adversarial Network (GAN), and evaluated with real data, outperforming the results of other activity recognition methods on the same dataset.


Subjects
Algorithms, Neural Networks (Computer), Humans, Aged, Machine Learning, Skeleton, Posture
6.
Sensors (Basel) ; 22(24)2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36560385

ABSTRACT

(1) Background: The research area of video surveillance anomaly detection aims to automatically detect the moment when a video surveillance camera captures something that does not fit the normal pattern. This is a difficult task, but it is important to automate, improve, and lower the cost of the detection of crimes and other accidents. The UCF-Crime dataset is currently the most realistic crime dataset, and it contains hundreds of videos distributed in several categories; it includes a robbery category, which contains videos of people stealing material goods using violence, but this category only includes a few videos. (2) Methods: This work focuses only on the robbery category, presenting a new weakly labelled dataset that contains 486 new real-world robbery surveillance videos acquired from public sources. (3) Results: We have modified and applied three state-of-the-art video surveillance anomaly detection methods to create a benchmark for future studies. We showed that in the best scenario, taking into account only the anomaly videos in our dataset, the best method achieved an AUC of 66.35%. When all anomaly and normal videos were taken into account, the best method achieved an AUC of 88.75%. (4) Conclusion: This result shows that there is a huge research opportunity to create new methods and approaches that can improve robbery detection in video surveillance.


Subjects
Crime, Theft, Humans, Benchmarking, Videotape Recording
7.
Article in English | MEDLINE | ID: mdl-36231583

ABSTRACT

Research into assisted living environments, within the area of Ambient Assisted Living (AAL), focuses on generating innovative technology, products, and services to provide medical treatment and rehabilitation to the elderly, with the purpose of increasing the time during which these people can live independently, whether they suffer from neurodegenerative diseases or disabilities. This key area is responsible for the development of activity recognition systems (ARS), which are a valuable tool to identify the types of activities carried out by the elderly and to provide them with effective care that allows them to carry out daily activities normally. This article reviews the literature to outline the evolution of the different data mining techniques applied to this health area, showing the metrics used by researchers in recent experiments in this area of knowledge.


Subjects
Human Activities, Machine Learning, Aged, Data Mining, Humans, Technology
8.
Sensors (Basel) ; 22(15)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35957201

ABSTRACT

Due to the popularity of wearables, human activity recognition (HAR) plays a significant role in people's routines. Many deep learning (DL) approaches have studied HAR to classify human activities. Previous studies employ two HAR validation approaches: subject-dependent (SD) and subject-independent (SI). Using accelerometer data, this paper shows how to generate visual explanations about the trained models' decision making on both HAR and biometric user identification (BUI) tasks, and the correlation between them. We adapted gradient-weighted class activation mapping (grad-CAM) to one-dimensional convolutional neural network (CNN) architectures to produce visual explanations of HAR and BUI models. Our proposed networks achieved accuracies of 0.978 and 0.755 under the SD and SI protocols, respectively. The proposed BUI network achieved an average accuracy of 0.937. We demonstrate that HAR's high performance with SD comes not only from learning physical activities but also from learning an individual's signature, as in BUI models. Our experiments show that the CNN focuses on larger signal sections in BUI, while HAR focuses on smaller signal segments. We also use the grad-CAM technique to identify database bias problems, such as signal discontinuities. Combining explainable techniques with deep learning can help with model design, avoid overestimated results, find bias problems, and improve generalization capability.
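Grad-CAM adapted to a one-dimensional CNN, the general technique named in this abstract, can be sketched roughly as follows. This is an assumption about the standard grad-CAM recipe rather than the authors' exact code; the model, the layer name "last_conv", and the input window are illustrative placeholders.

```python
# Rough grad-CAM sketch for a 1D CNN: weight the last convolutional feature
# maps by the gradient of the target class score and sum over channels.
# Model, layer name, and input window are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_timesteps, n_channels, n_classes = 128, 3, 6
model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, 5, activation="relu", name="last_conv"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),
])

def grad_cam_1d(model, window, class_index, conv_layer="last_conv"):
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(window[np.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                # (1, steps, channels)
    weights = tf.reduce_mean(grads, axis=1, keepdims=True)  # channel weights
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights, axis=-1))[0].numpy()
    return cam / (cam.max() + 1e-8)                       # per-time-step importance

window = np.random.default_rng(3).normal(size=(n_timesteps, n_channels)).astype("float32")
print(grad_cam_1d(model, window, class_index=0).shape)
```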


Subjects
Biometric Identification, Neural Networks (Computer), Databases (Factual), Human Activities, Humans
9.
Sensors (Basel) ; 22(14)2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35891090

ABSTRACT

The accurate recognition of activities is fundamental for following up on the health progress of people with dementia (PwD), thereby supporting subsequent diagnosis and treatments. When monitoring the activities of daily living (ADLs), it is feasible to detect behaviour patterns, parse out the disease evolution, and consequently provide effective and timely assistance. However, this task is affected by uncertainties derived from the differences in smart home configurations and the way in which each person undertakes the ADLs. One possible pathway is to train a supervised classification algorithm using large datasets; nonetheless, obtaining real-world data is costly and involves a challenging recruitment process. The resulting activity data are then small and may not capture each person's intrinsic properties. Simulation approaches have risen as an efficient alternative, but synthetic data can be significantly dissimilar to real data. Hence, this paper proposes the application of Partial Least Squares Regression (PLSR) to approximate the real activity duration of various ADLs based on synthetic observations. First, the real activity duration of each ADL is contrasted with the one derived from an intelligent environment simulator. Following this, different PLSR models were evaluated for estimating real activity duration from synthetic variables. A case study including eight ADLs was considered to validate the proposed approach. The results revealed that simulated and real observations are significantly different for some ADLs (p-value < 0.05); nevertheless, the synthetic variables can be further modified to predict the real activity duration with high accuracy (predicted R² > 90%).
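A hedged sketch of the regression step described here: Partial Least Squares Regression mapping simulated activity-duration variables to observed real durations. All arrays, the number of components, and the linear relationship below are synthetic placeholders, not the study's measurements.

```python
# Minimal PLSR sketch: predict real activity durations from simulated
# variables; all numbers here are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X_sim = rng.normal(size=(200, 8))                          # simulated ADL descriptors
y_real = X_sim @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)  # stand-in real durations

X_tr, X_te, y_tr, y_te = train_test_split(X_sim, y_real, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
print("held-out R^2:", pls.score(X_te, y_te))              # predictive R² on unseen data
```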


Subjects
Activities of Daily Living, Dementia, Algorithms, Dementia/diagnosis, Humans, Least-Squares Analysis
10.
Sensors (Basel) ; 22(11)2022 May 25.
Article in English | MEDLINE | ID: mdl-35684613

ABSTRACT

In recent years, much effort has been devoted to the development of applications capable of detecting different types of human activity. In this field, fall detection is particularly relevant, especially for the elderly. On the one hand, some applications use wearable sensors integrated into cell phones, necklaces, or smart bracelets to detect sudden movements of the person wearing the device. The main drawback of these systems is that the devices must be placed on the person's body, which can be uncomfortable, and such systems cannot be deployed in open spaces or with unfamiliar people. In contrast, other approaches perform activity recognition from video camera images, which have many advantages over the previous ones since the user is not required to wear any sensors. As a result, these applications can be implemented in open spaces and with unknown people. This paper presents a vision-based algorithm for activity recognition. The main contribution of this work is the use of human skeleton pose estimation as a feature extraction method for activity detection in video camera images. This method allows the detection of multiple people's activities in the same scene. The algorithm is also capable of classifying multi-frame activities, that is, those that need more than one frame to be detected. The method is evaluated on the public UP-FALL dataset and compared to similar algorithms using the same dataset.


Subjects
Algorithms, Human Activities, Aged, Humans, Skeleton
11.
Sensors (Basel) ; 22(9)2022 Apr 28.
Article in English | MEDLINE | ID: mdl-35591054

ABSTRACT

Indoor localization and human activity recognition are two important sources of information to provide context-based assistance. This information is relevant in ambient assisted living (AAL) scenarios, where older adults usually need supervision and assistance in their daily activities. However, indoor localization and human activity recognition have mostly been considered isolated problems. This work presents and evaluates a framework that takes advantage of the relationship between location and activity to simultaneously perform indoor localization, mapping, and human activity recognition. The proposed framework provides a non-intrusive configuration, which fuses data from an inertial measurement unit (IMU) placed in the person's shoe with proximity and human activity-related data from Bluetooth Low Energy (BLE) beacons deployed in the indoor environment. A variant of the simultaneous localization and mapping (SLAM) framework was used to fuse the location and human activity recognition (HAR) data. HAR was performed using data streaming algorithms. The framework was evaluated in a pilot study using data from 22 people: 11 young people and 11 older adults (aged 65 years or older). As a result, seven activities of daily living were recognized with an F1 score of 88%, and the indoor location error was 0.98 ± 0.36 m for the young adults and 1.02 ± 0.24 m for the older adults. Furthermore, there were no significant differences between the groups, indicating that our proposed method works adequately across broad age ranges.


Subjects
Ambient Intelligence, Activities of Daily Living, Adolescent, Aged, Algorithms, Human Activities, Humans, Pilot Projects
12.
Sensors (Basel) ; 22(9)2022 Apr 29.
Article in English | MEDLINE | ID: mdl-35591091

ABSTRACT

Research on assisted living environments, within the area of Ambient Assisted Living (AAL), focuses on generating innovative technology, products, and services that provide assistance, medical care, and rehabilitation to older adults, with the aim of extending the time these people can live independently, whether they suffer from neurodegenerative diseases or a disability. This important area is responsible for the development of Activity Recognition Systems (ARS), a valuable tool for identifying the type of activity carried out by older adults and for providing the assistance that allows them to carry out their daily activities normally. This article reviews the literature on the evolution of the different techniques applied to this health domain for processing such data: supervised, unsupervised, and ensemble learning, deep learning, reinforcement learning, transfer learning, and metaheuristic approaches, reporting the metrics of recent experiments for researchers in this area of knowledge. The review indicates that models based on reinforcement or transfer learning constitute a promising line of work for the processing and analysis of human activity recognition.


Subjects
Ambient Intelligence, Persons with Disabilities, Activities of Daily Living, Aged, Human Activities, Humans, Technology
13.
Sensors (Basel) ; 22(6)2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35336529

ABSTRACT

In this article, we introduce explainable methods to understand how Human Activity Recognition (HAR) mobile systems perform depending on the chosen validation strategy. Our results introduce a new way to discover potential bias problems that overestimate the prediction accuracy of an algorithm because of an inappropriate choice of validation methodology. We show how the SHAP (Shapley additive explanations) framework, used in the literature to explain the predictions of any machine learning model, can provide graphical insights into how human activity recognition models achieve their results. It thus becomes possible to analyze, in a simplified way, which features are important to a HAR system under each validation methodology. We not only demonstrate that k-fold cross-validation (k-CV), the procedure used in most works to evaluate the expected error of a HAR system, can overestimate prediction accuracy by about 13% on three public datasets, but also that it selects a different feature set when compared with the universal model. Combining explainable methods with machine learning algorithms has the potential to help new researchers look inside the decisions of machine learning algorithms, avoiding overestimation of prediction accuracy in most cases, understanding relations between features, and finding bias before deploying the system in real-world scenarios.
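The kind of inspection described here can be sketched, under assumptions, by contrasting k-fold with subject-wise cross-validation and then explaining a fitted model with the SHAP library. The data, the subject grouping, the random forest, and the feature set are placeholders; the handling of the SHAP output shape is defensive because it varies across library versions.

```python
# Hedged sketch: compare k-CV with leave-one-subject-out accuracy and inspect
# feature contributions with SHAP; data and features are placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold, LeaveOneGroupOut

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 10))            # per-window features
y = rng.integers(0, 4, size=500)          # activity labels
subjects = rng.integers(0, 10, size=500)  # subject ID per window

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_kcv = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
acc_loso = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects).mean()
print("k-CV accuracy:", acc_kcv, "| subject-wise accuracy:", acc_loso)

# Explain a model fitted on all data; keep the SHAP values for one class.
sv = shap.TreeExplainer(clf.fit(X, y)).shap_values(X)
sv_class0 = sv[0] if isinstance(sv, list) else sv[..., 0]
shap.summary_plot(sv_class0, X, show=False)   # which features drive predictions
```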


Subjects
Human Activities, Machine Learning, Algorithms, Humans
14.
Sensors (Basel) ; 21(3)2021 Jan 24.
Article in English | MEDLINE | ID: mdl-33498829

ABSTRACT

Worldwide demographic projections point to a progressively older population. This fact has fostered research on Ambient Assisted Living, which includes developments on smart homes and social robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data are available. Human activity recognition is one of the most active fields of research within this context. Proposed approaches vary according to the input modality and the environments considered. Different from others, this paper addresses the problem of recognising heterogeneous activities of daily living centred in home environments, considering simultaneously data from videos, wearable IMUs, and ambient sensors. For this, two contributions are presented. The first is the creation of the Heriot-Watt University/University of Sao Paulo (HWU-USP) activities dataset, which was recorded at the Robotic Assisted Living Testbed at Heriot-Watt University. This dataset differs from other multimodal datasets in that it consists of daily living activities with either periodical patterns or long-term dependencies, captured in a very rich and heterogeneous sensing environment. In particular, it combines data from a humanoid robot's RGBD (RGB + depth) camera with inertial sensors from wearable devices and ambient sensors from a smart home. The second contribution is the proposal of a Deep Learning (DL) framework, which provides multimodal activity recognition based on videos, inertial sensors and ambient sensors from the smart home, on their own or fused with each other. The classification DL framework was also validated on our dataset and on the University of Texas at Dallas Multimodal Human Activities Dataset (UTD-MHAD), a widely used benchmark for activity recognition based on videos and inertial sensors, providing a comparative analysis between the results on the two datasets considered. Results demonstrate that the introduction of data from ambient sensors markedly improved the accuracy results.


Subjects
Activities of Daily Living, Wearable Electronic Devices, Algorithms, Ambient Intelligence, Human Activities, Humans
15.
Sensors (Basel) ; 20(17)2020 Aug 23.
Article in English | MEDLINE | ID: mdl-32842459

ABSTRACT

Activity recognition is one of the most active areas of research in ubiquitous computing. In particular, gait activity recognition is useful to identify various risk factors in people's health that are directly related to their physical activity. One of the issues in activity recognition, and gait in particular, is that often datasets are unbalanced (i.e., the distribution of classes is not uniform), and due to this disparity, the models tend to categorize into the class with more instances. In the present study, two methods for classifying gait activities using accelerometer and gyroscope data from a large-scale public dataset were evaluated and compared. The gait activities in this dataset are: (i) going down an incline, (ii) going up an incline, (iii) walking on level ground, (iv) going down stairs, and (v) going up stairs. The proposed methods are based on conventional (shallow) and deep learning techniques. In addition, data were evaluated from three data treatments: original unbalanced data, sampled data, and augmented data. The latter was based on the generation of synthetic data according to segmented gait data. The best results were obtained with classifiers built with augmented data, with F-measure results of 0.812 (σ = 0.078) for the shallow learning approach, and of 0.927 (σ = 0.033) for the deep learning approach. In addition, the data augmentation strategy proposed to deal with the unbalanced problem resulted in increased classification performance using both techniques.
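One simple way to generate synthetic windows for minority gait classes, in the spirit of the augmentation strategy mentioned here, is jittering and magnitude scaling of segmented windows. The exact procedure used in the paper is not reproduced; the shapes, noise level, and scaling range below are assumptions for illustration only.

```python
# Hedged sketch of time-series augmentation for minority gait classes:
# jittering and magnitude scaling of existing windows. The paper's actual
# augmentation may differ; shapes and factors are assumptions.
import numpy as np

def augment_windows(windows, n_new, noise_std=0.02, scale_range=(0.9, 1.1), seed=0):
    """windows: array of shape (n, timesteps, channels) of segmented gait data."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(windows), size=n_new)   # resample existing windows
    picked = windows[idx]
    scales = rng.uniform(*scale_range, size=(n_new, 1, 1))
    noise = rng.normal(scale=noise_std, size=picked.shape)
    return picked * scales + noise                    # scaled + jittered copies

minority = np.random.default_rng(6).normal(size=(40, 128, 6))  # accel + gyro windows
balanced_extra = augment_windows(minority, n_new=200)
print(balanced_extra.shape)   # (200, 128, 6)
```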


Subjects
Deep Learning, Gait Analysis, Humans, Stair Climbing, Walking
16.
Sensors (Basel) ; 20(15)2020 Jul 29.
Article in English | MEDLINE | ID: mdl-32751345

ABSTRACT

Activity recognition (AR), from an applied perspective of ambient assisted living (AAL) and smart homes (SH), has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was strongly motivated by the fact that the level of development, deployment, and technology of AR solutions transferred to society and industry is based on software development, but also depends on the hardware devices used. The current paper identifies contributions of hardware use to activity recognition through a scientific literature review in the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL (smartphones, wearables, video, and electronic components) and two emerging technologies: Wi-Fi and assistive robots. Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified some gaps and new potential combinations of technologies, and their uses, for advances in this emerging worldwide field. The review also relates the use of these six technologies to health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity recognition applications. The above can serve as a road map that allows readers to execute approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the activity recognition research field accepts that specific goals cannot be achieved with a single hardware technology but can be achieved with joint solutions; this paper shows how such technologies work together in this regard.


Subjects
Ambient Intelligence, Human Activities, Technology, Accidental Falls, Delivery of Health Care, Electronics, Emotions, Housing, Humans, Posture, Quality of Life, Robotics, Smartphone, Wearable Electronic Devices
17.
Sensors (Basel) ; 20(9)2020 May 09.
Article in English | MEDLINE | ID: mdl-32397446

ABSTRACT

Many applications have emerged from the combination of software development and hardware, known as the Internet of Things, and one of its most important application areas is health care. New applications appear daily that aim to improve quality of life and the treatment of patients who suffer from different pathologies at home. This has given rise to a line of work of great interest, focused on the study and analysis of activities of daily living and on the use of different data analysis techniques to identify and help manage this type of patient. This article presents the results of a systematic literature review on the use of clustering, one of the most widely used unsupervised data analysis techniques applied to activities of daily living, together with a description of important variables such as year of publication, type of article, most used algorithms, types of dataset used, and metrics implemented. These data allow the reader to locate recent results of the application of this technique to a particular area of knowledge.


Subjects
Activities of Daily Living, Cluster Analysis, Quality of Life, Algorithms, Humans
18.
Sensors (Basel) ; 20(7)2020 Mar 27.
Article in English | MEDLINE | ID: mdl-32230830

ABSTRACT

Smartphones have emerged as a revolutionary technology for monitoring everyday life, and they have played an important role in Human Activity Recognition (HAR) due to their ubiquity. The sensors embedded in these devices allow human behaviours to be recognized using machine learning techniques. However, not all solutions are feasible for implementation on smartphones, mainly because of their high computational cost. In this context, the proposed method, called HAR-SR, introduces information theory quantifiers as new features extracted from sensor data to create simple activity classification models, increasing efficiency in terms of computational cost. Three public databases (SHOAIB, UCI, WISDM) are used in the evaluation process. The results show that HAR-SR can classify activities with 93% accuracy when using a leave-one-subject-out (LOSO) cross-validation procedure.
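A hedged sketch of the evaluation protocol named in this abstract, leave-one-subject-out cross-validation, combined with one simple information-theoretic feature (Shannon entropy of a binned signal window) to illustrate the kind of quantifier involved. The specific quantifiers used by HAR-SR are not reproduced here; the data, bin count, and classifier are placeholders.

```python
# Hedged sketch: Shannon entropy of binned windows as a simple
# information-theoretic feature, evaluated with leave-one-subject-out
# cross-validation; data, bins, and classifier are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

def shannon_entropy(window, bins=16):
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(7)
raw = rng.normal(size=(600, 128))         # one accelerometer axis per window
subjects = rng.integers(0, 12, size=600)  # subject ID per window
y = rng.integers(0, 5, size=600)          # activity labels

X = np.column_stack([
    [shannon_entropy(w) for w in raw],    # information-theoretic feature
    raw.mean(axis=1), raw.std(axis=1),    # simple statistical features
])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print("LOSO accuracy: %.3f" % scores.mean())
```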


Subjects
Human Activities, Information Theory, Machine Learning, Physiological Monitoring, Accelerometry, Algorithms, Databases (Factual), Humans, Smartphone
19.
Sensors (Basel) ; 20(1)2019 Dec 19.
Article in English | MEDLINE | ID: mdl-31861639

ABSTRACT

In this work, the authors address workload computation by combining human activity recognition and heart rate measurements to establish a scalable framework for health-at-work and fitness-related applications. The proposed architecture consists of two wearable sensors: one for motion and another for heart rate. The system employs machine learning algorithms to determine the activity performed by a user and takes a concept from ergonomics, Frimat's score, to compute the corresponding physical workload from measured heart rate values, additionally providing a qualitative description of the workload. A random forest activity classifier is trained and validated with data from nine subjects, achieving an accuracy of 97.5%. Tests with 20 subjects then show the reliability of the activity classifier, which maintains an accuracy of up to 92% during real-time testing. Additionally, a single-subject, twenty-day physical workload tracking case study demonstrates the system's ability to detect body adaptation to a custom exercise routine. The proposed system enables remote and multi-user workload monitoring, which facilitates the job of experts in ergonomics and workplace health.


Subjects
Accelerometry/methods, Human Activities, Wearable Electronic Devices, Accelerometry/instrumentation, Adult, Exercise, Humans, Machine Learning, Male, Microelectromechanical Systems
20.
Sensors (Basel) ; 19(17)2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31484423

ABSTRACT

In Ambient Intelligence (AmI), the activity a user is engaged in is an essential part of the context, so its recognition is of paramount importance for applications in areas like sports, medicine, personal safety, and so forth. The concurrent use of multiple sensors for the recognition of human activities in AmI is good practice, because the information missed by one sensor can sometimes be provided by the others, and many works have shown an accuracy improvement compared to single sensors. However, there are many different ways of integrating the information from each sensor, and almost every author reporting sensor fusion for activity recognition uses a different variant or combination of fusion methods, so the need for clear guidelines and generalizations in sensor data integration seems evident. In this survey we review, following a classification, the many fusion methods proposed in the literature for activity recognition from sensor data; we examine their relative merits, as reported and in some cases replicated, compare these methods, and assess the trends in the area.


Subjects
Activities of Daily Living, Biosensing Techniques/methods, Human Activities, Humans, Surveys and Questionnaires