1.
Diagnostics (Basel) ; 14(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38472973

ABSTRACT

This study investigates the prediction of mental well-being factors (depression, stress, and anxiety) using the NetHealth dataset collected from college students. The research addresses four key questions: the impact of digital biomarkers on these factors, their alignment with the conventional psychology literature, the time-based performance of the applied methods, and potential enhancements through multitask learning. The findings reveal modality rankings that align with the psychology literature, validated against paper-based studies. Predictions improve when temporal information is considered, and improve further with multitask learning. Baseline and multitask performances are comparable, with notable gains from temporal features, particularly with the random forest (RF) classifier. Multitask learning improves outcomes for depression and stress, but not for anxiety, with both RF and XGBoost.
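As a rough sketch of the single-task versus multitask comparison described above, the following Python snippet trains one random forest per well-being factor and then a single multi-output forest across all three factors jointly. The synthetic features and labels merely stand in for the NetHealth digital biomarkers; shapes, label rates, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                # stand-in for digital-biomarker features
Y = (rng.random((500, 3)) < 0.4).astype(int)  # columns: depression, stress, anxiety

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# Single-task baseline: one forest per well-being factor.
for i, name in enumerate(["depression", "stress", "anxiety"]):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, Y_tr[:, i])
    print(name, f1_score(Y_te[:, i], clf.predict(X_te)))

# Multitask variant: a single forest predicts all three factors at once,
# sharing tree structure across tasks (scikit-learn forests accept 2-D Y).
multi = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, Y_tr)
print("multitask F1 per task:", f1_score(Y_te, multi.predict(X_te), average=None))
```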

3.
Sensors (Basel) ; 23(21)2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37960685

ABSTRACT

Wearable devices have become ubiquitous, collecting rich temporal data that offer valuable insights into human activities, health monitoring, and behavior analysis. Leveraging these data, researchers have developed innovative approaches to classify and predict time-based patterns and events in human life. Time-based techniques can capture the intricate temporal dependencies inherent in data coming from wearable devices. This paper focuses on predicting well-being factors, such as stress, anxiety, and positive and negative affect, on the Tesserae dataset collected from office workers. We examine different methodologies, including deep-learning architectures (LSTM) and ensemble techniques (Random Forest (RF) and XGBoost), and compare their time-based and non-time-based versions. In the time-based versions, we investigate the effect of previous records of well-being factors on upcoming ones. The overall results show that time-based LSTM performs best among conventional (non-time-based) RF, XGBoost, and LSTM. Performance increases further when a longer history is considered, in this case three past days rather than one, to predict the next day. Furthermore, we explore the corresponding biomarkers for each well-being factor using feature ranking. The obtained rankings are compatible with the psychological literature; in this work, we validate them using device measurements rather than subjective survey responses.
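A minimal sketch of the time-based setup described above: the previous k days of records are stacked into a sequence window that an LSTM uses to predict the next day's label. The synthetic data, the window length k = 3, and the layer sizes are assumptions for illustration, not the Tesserae configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
days, n_feat, k = 200, 12, 3             # k past days feed each prediction
daily = rng.normal(size=(days, n_feat))  # one feature row per day (stand-in data)
labels = (rng.random(days) < 0.5).astype(int)

# Build (past-k-days -> next-day label) training pairs.
X = np.stack([daily[i:i + k] for i in range(days - k)])  # (samples, k, n_feat)
y = labels[k:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(k, n_feat)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```

Setting k = 1 in the same sketch yields the one-past-day variant, which is the comparison the abstract reports.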


Subject(s)
Wearable Electronic Devices , Humans , Anxiety , Human Activities , Anxiety Disorders , Affect
4.
Nat Hum Behav ; 7(12): 2099-2110, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37904020

ABSTRACT

The extent to which languages share properties reflecting the non-linguistic constraints of their speakers is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives, the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on the object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.


Subject(s)
Language , Semantics , Humans , Cognition
5.
Sensors (Basel) ; 16(4): 426, 2016 Mar 24.
Article in English | MEDLINE | ID: mdl-27023543

ABSTRACT

The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee, and giving a talk; wrist-worn motion sensors are used to recognize such activities. These two positions, however, are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope, and linear acceleration sensor) at both the wrist and pocket positions. Using three classifiers, we show that the combination of the two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less repetitive activities, such as smoking, eating, giving a talk, and drinking coffee, cannot be recognized as easily at smaller segmentation windows as repetitive activities such as walking, jogging, and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing the window size affects these activities in different ways. We also propose various optimizations to further improve their recognition. For reproducibility, we make our dataset publicly available.
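To make the segmentation-window discussion concrete, here is a minimal sketch of extracting per-window features from a raw accelerometer stream at several window sizes. The 50 Hz sampling rate and the mean/standard-deviation feature set are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np

FS = 50  # assumed sampling rate (Hz)

def window_features(signal, window_s):
    """Split a (samples, 3) accelerometer stream into fixed non-overlapping
    windows and compute simple per-axis statistics for each window."""
    w = int(window_s * FS)
    n = len(signal) // w
    feats = []
    for i in range(n):
        seg = signal[i * w:(i + 1) * w]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.array(feats)

stream = np.random.default_rng(0).normal(size=(FS * 60, 3))  # one minute of data
for window_s in (2, 5, 10, 30):  # sizes spanning the 2-30 s range studied
    print(window_s, window_features(stream, window_s).shape)
```

Larger windows yield fewer but richer segments, which is why less repetitive gestures tend to need longer windows than periodic activities like walking.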


Subject(s)
Activities of Daily Living , Movement/physiology , Pattern Recognition, Automated/methods , Smartphone , Accelerometry/methods , Humans , Smoking/adverse effects , Walking/physiology , Wrist/physiology
6.
Sensors (Basel) ; 15(10): 25474-506, 2015 Oct 05.
Article in English | MEDLINE | ID: mdl-26445046

ABSTRACT

Phone placement, i.e., where the phone is carried or stored, is an important source of information for context-aware applications. Extracting information from the integrated smartphone sensors, such as motion, light, and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing movement, orientation, and rotation changes. The impact of these changes on performance is analyzed both individually and in combination, to explore which features are more efficient, whether they should be fused and, if so, how. Using three different datasets, collected from 35 people at eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information achieves accuracies around 70%, this figure increases to 85% when information from orientation and rotation changes is also utilized. The accelerometer-only solution is compared to solutions that additionally use the linear acceleration, gyroscope, and magnetic field sensors, and it performs as well as those using the extra sensing information; hence, the extra sensors, which may increase battery power consumption, are not necessary. Additionally, the impact of the performed activities on position recognition is explored, and it is shown that the accelerometer-only solution achieves 80% recognition accuracy for stationary activities, where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detection, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported with an accelerometer-only solution.
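A minimal sketch of the three accelerometer-derived cues discussed above (movement, orientation, and rotation change) might look as follows; the formulas are simple illustrative choices, not the paper's exact feature definitions.

```python
import numpy as np

def placement_features(acc):
    """acc: (samples, 3) accelerometer readings at a fixed rate."""
    mag = np.linalg.norm(acc, axis=1)
    movement = mag.std()                 # movement: magnitude variability
    gravity = acc.mean(axis=0)
    orientation = gravity / np.linalg.norm(gravity)  # dominant gravity direction
    # rotation change: mean angle between consecutive acceleration vectors
    unit = acc / np.linalg.norm(acc, axis=1, keepdims=True)
    cosines = np.clip((unit[:-1] * unit[1:]).sum(axis=1), -1.0, 1.0)
    rotation = np.degrees(np.arccos(cosines)).mean()
    return np.concatenate([[movement], orientation, [rotation]])

# Synthetic reading: phone roughly flat, gravity on the z-axis, small jitter.
acc = np.random.default_rng(0).normal([0.0, 0.0, 9.8], 0.3, size=(250, 3))
print(placement_features(acc))
```

Feature vectors like this, computed per segment, can then feed any standard classifier to predict the carrying position.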

7.
Sensors (Basel) ; 15(1): 2059-85, 2015 Jan 19.
Article in English | MEDLINE | ID: mdl-25608213

ABSTRACT

Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. Recently, however, many researchers have started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. There is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory, and battery. Research on offline activity recognition has been reviewed in detail in several earlier studies, but work on online activity recognition is still in its infancy and has yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones using only their on-board sensors. We discuss various aspects of these studies, as well as their limitations, and present recommendations for future research.
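The online pattern this review surveys can be illustrated with a minimal sketch: a lightweight classifier trained offline is applied window-by-window to an incoming sensor stream, as it would be on the phone itself. The buffer size, feature set, and choice of a decision tree are assumptions for illustration, not a prescription from the reviewed studies.

```python
import numpy as np
from collections import deque
from sklearn.tree import DecisionTreeClassifier

WIN = 100  # samples per decision window (assumed)

def features(window):
    w = np.asarray(window)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 6))       # offline training set (stand-in)
y_train = rng.integers(0, 3, size=300)    # e.g. walking / sitting / biking
clf = DecisionTreeClassifier().fit(X_train, y_train)  # light enough for a phone

buf = deque(maxlen=WIN)
for sample in rng.normal(size=(500, 3)):  # stand-in for the live sensor stream
    buf.append(sample)
    if len(buf) == WIN:
        print(clf.predict(features(buf).reshape(1, -1))[0])
        buf.clear()                       # non-overlapping windows
```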


Subject(s)
Cell Phone , Monitoring, Ambulatory/instrumentation , Accelerometry , Humans , Motor Activity , Online Systems , Quality of Health Care
8.
Sensors (Basel) ; 14(6): 9692-719, 2014 May 30.
Article in English | MEDLINE | ID: mdl-24887044

ABSTRACT

Human activity recognition and behavior monitoring in a home setting using wireless sensor networks (WSNs) offer great potential for ambient assisted living (AAL) applications, ranging from health and wellbeing monitoring to resource consumption monitoring. However, due to the limitations of the sensor devices, the challenges of wireless communication, and the challenges of processing large amounts of sensor data to recognize complex human activities, WSN-based AAL systems are not yet effectively integrated into the home environment. Additionally, given the variety of sensor types and activities, selecting the most suitable set of sensors for a deployment is an important task. To investigate and propose solutions to such challenges, we introduce a WSN-based multimodal AAL system suitable for homes with multiple residents. In particular, we focus on the details of the system architecture, including the challenges of sensor selection, deployment, networking, and data collection, and provide guidelines for the design and deployment of an effective AAL system. We also present the details of a field study conducted with the system deployed in two different real home environments with multiple residents. With these deployments, we are able to collect ambient sensor data from multiple homes; these data can be used to assess the wellbeing of the residents and to identify deviations from everyday routines, which may be indicators of health problems. Finally, to elaborate on possible applications of the proposed AAL system and to exemplify directions for processing the collected data, we provide the results of several human activity inference experiments, along with examples of how such results could be interpreted. We believe that the experiences shared in this work will help accelerate the acceptance of WSN-based AAL systems in the home setting.


Subject(s)
Assisted Living Facilities/methods , Human Activities/classification , Monitoring, Ambulatory/methods , Telemetry/methods , Adult , Computer Communication Networks , Female , Humans , Male , Models, Theoretical , Monitoring, Ambulatory/instrumentation , Telemetry/instrumentation
9.
Sensors (Basel) ; 14(6): 10146-76, 2014 Jun 10.
Article in English | MEDLINE | ID: mdl-24919015

ABSTRACT

For physical activity recognition, smartphone sensors, such as the accelerometer and the gyroscope, are utilized in many research studies. So far, the accelerometer in particular has been studied extensively. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role), and an accelerometer (in a lead role) has been used with the aim of improving recognition performance. How and when the various motion sensors available on a smartphone are best used for recognition, either individually or in combination, has yet to be explored. To investigate this question, in this paper we examine how these motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment in which ten participants performed seven different activities carrying smartphones at different positions. Based on the analysis of this dataset, we show that each of these sensors, except the magnetometer, is capable of taking the lead role individually, depending on the type of activity being recognized, the body position, the data features used, and the classification method employed (personalized or generalized). We also show that combining them improves the overall recognition performance only when their individual performances are not already very high, so that there is room for improvement. We have made our dataset and our data collection application publicly available, thereby making our experiments reproducible.
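A minimal sketch of the individual-versus-combined comparison described above: one classifier is trained per sensor, and another on the concatenated (feature-level fused) features. The synthetic arrays stand in for accelerometer, gyroscope, and magnetometer features; the shapes and the use of random forests are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = rng.integers(0, 7, size=400)  # seven activities, as in the study
sensors = {name: rng.normal(size=(400, 10)) for name in ("acc", "gyro", "mag")}

# Each sensor on its own.
for name, X in sensors.items():
    score = cross_val_score(RandomForestClassifier(random_state=0), X, y).mean()
    print(name, round(score, 3))

# Feature-level fusion of all three sensors.
X_all = np.hstack(list(sensors.values()))
fused = cross_val_score(RandomForestClassifier(random_state=0), X_all, y).mean()
print("combined", round(fused, 3))
```

Comparing the per-sensor scores against the fused score is exactly the kind of check the abstract describes: fusion pays off mainly when the best single sensor still leaves accuracy on the table.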


Subject(s)
Activities of Daily Living/classification , Cell Phone , Monitoring, Physiologic/methods , Movement/physiology , Pattern Recognition, Automated/methods , Accelerometry/instrumentation , Accelerometry/methods , Adult , Algorithms , Humans , Male , Models, Statistical , Monitoring, Physiologic/instrumentation , Walking/classification