Results 1 - 3 of 3
1.
Sci Rep ; 14(1): 15797, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982105

ABSTRACT

This work presents a novel and versatile approach that employs textile capacitive sensing to capture human body movement through fashionable, everyday garments. Conductive textile patches sense movement without requiring strain or direct body contact; the patches respond purely to their deformation within the garment. This principle decouples the sensing area from the wearer's body, improving wearing comfort and easing integration. We demonstrate the technology with multiple prototypes developed over several iterations by an interdisciplinary team of electrical engineers, computer scientists, digital artists, and smart-fashion designers, who incorporated capacitive sensing and the corresponding design considerations into textile materials. The resulting collection of textile capacitive-sensing wearables showcases the versatility of our technology, from single-joint angle measurement to multi-joint body-part tracking.
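
A minimal sketch of the sensing principle described above: a patch's capacitance changes as the garment deforms, and that change can be mapped to a joint angle. The readout function and both calibration constants below are hypothetical, not taken from the paper; a real system would calibrate per garment and per joint rather than assume a linear map.

    # Minimal sketch, assuming a capacitance-to-digital readout is available.
    # read_capacitance_pf() and both calibration constants are hypothetical.
    C_STRAIGHT_PF = 12.0  # assumed patch capacitance with the joint extended
    C_BENT_PF = 18.0      # assumed patch capacitance with the joint flexed 90 deg

    def read_capacitance_pf():
        """Placeholder for a real capacitance-to-digital converter driver call."""
        raise NotImplementedError

    def joint_angle_deg(c_pf):
        """Linearly interpolate between the two calibration points, clamped to [0, 90]."""
        ratio = (c_pf - C_STRAIGHT_PF) / (C_BENT_PF - C_STRAIGHT_PF)
        return max(0.0, min(90.0, ratio * 90.0))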


Subject(s)
Movement , Textiles , Wearable Electronic Devices , Humans , Electric Capacitance , Equipment Design
2.
Sensors (Basel) ; 21(16)2021 Aug 20.
Article in English | MEDLINE | ID: mdl-34451046

ABSTRACT

We propose to use ambient sound as a privacy-aware source of information for COVID-19-related social-distance monitoring and contact tracing. The aim is to complement the currently dominant Bluetooth Low Energy Received Signal Strength Indicator (BLE RSSI) approaches, which often struggle with the complexity of Radio Frequency (RF) signal attenuation: attenuation is strongly influenced by the specific surroundings, which renders the relationship between signal strength and transmitter-receiver distance highly non-deterministic. We analyze spatio-temporal variations in what we call "ambient sound fingerprints", leveraging the fact that the ambient sound received by a mobile device is a superposition of sounds from sources at many different locations in the environment, and that this superposition is determined by the positions of those sources relative to the receiver. We present a method that uses this idea to classify proximity between pairs of users based on the Kullback-Leibler distance between sound-intensity histograms. The method relies on intensity analysis only and does not require collecting any privacy-sensitive signals. Further, we show how this information can be fused with BLE RSSI features using adaptive weighted voting, taking into account that sound is not available in all time windows. Our approach is evaluated in elaborate real-world experiments. The results show that Bluetooth and sound can each differentiate users within and beyond the critical distance (1.5 m) with accuracies of 77% and 80%, respectively. Their fusion improves this to 86%, demonstrating the merit of augmenting BLE RSSI with sound. We conclude by discussing the strengths and limitations of our approach and highlighting directions for future work.
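
As a rough illustration of the core idea (not the authors' implementation), the sketch below builds a sound-intensity histogram per time window, compares two devices' histograms with a symmetrized Kullback-Leibler distance, and fuses the resulting sound vote with a BLE-based vote by weighted voting. The bin count, frame length, threshold, and weights are all assumptions.

    import numpy as np

    def intensity_histogram(samples, bins=32, frame=256):
        """Histogram of short-time sound intensity (dB) over one time window."""
        samples = samples[: len(samples) // frame * frame].reshape(-1, frame)
        intensity_db = 10 * np.log10(np.mean(samples ** 2, axis=1) + 1e-12)
        hist, _ = np.histogram(intensity_db, bins=bins, range=(-120.0, 0.0))
        p = hist.astype(float) + 1e-9  # smoothing to avoid log(0)
        return p / p.sum()

    def symmetric_kl(p, q):
        """Symmetrized Kullback-Leibler distance between two histograms."""
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def fused_proximity(sound_a, sound_b, ble_vote, kl_threshold=0.5,
                        w_sound=0.6, w_ble=0.4):
        """Weighted vote over sound- and BLE-based decisions.
        ble_vote is 1 if BLE RSSI suggests < 1.5 m, else 0."""
        d = symmetric_kl(intensity_histogram(sound_a),
                         intensity_histogram(sound_b))
        sound_vote = int(d < kl_threshold)
        return (w_sound * sound_vote + w_ble * ble_vote) >= 0.5

In the paper the weights are adapted (for instance, when no sound window is available, the BLE vote would carry the full weight); the fixed weights here are only a placeholder for that mechanism.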


Subject(s)
COVID-19 , Privacy , Contact Tracing , Humans , Physical Distancing , SARS-CoV-2
3.
Sensors (Basel) ; 20(17)2020 Aug 30.
Article in English | MEDLINE | ID: mdl-32872633

ABSTRACT

Many human activities and states are related to the actions of the facial muscles: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to eating and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial-muscle activity monitoring based on a re-configurable differential array of stethoscope microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal-processing and machine-learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures like kissing, winking, sticking the tongue out, and taking a pill. We evaluated a complete data set of 2640 events with a 66%/33% train/test split. Although we encountered high variability in the volunteers' expressions, our approach achieves a recall of 55%, precision of 56%, and F1-score of 54% in the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an F1-score of 60% and our best an F1-score of 89%, with a recall ≥60% for expressions such as happiness, anger, kissing, sticking the tongue out, and the neutral (null) class.
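
A hypothetical sketch of such a classification pipeline, assuming simple per-channel energy and spectral-centroid features feeding a standard classifier. The feature set, sample rate, and SVM below are stand-ins for illustration, not the paper's exact method.

    import numpy as np
    from sklearn.svm import SVC

    def event_features(event, sr=16000):
        """event: (6, n_samples) array, one row per stethoscope channel.
        Returns per-channel RMS and spectral centroid as one feature vector."""
        feats = []
        for ch in event:
            rms = np.sqrt(np.mean(ch ** 2))
            spectrum = np.abs(np.fft.rfft(ch))
            freqs = np.fft.rfftfreq(len(ch), d=1.0 / sr)
            centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
            feats.extend([rms, centroid])
        return np.array(feats)

    # X_train: (n_events, 12) feature matrix; y_train: labels such as "happiness".
    # clf = SVC(kernel="rbf").fit(X_train, y_train)
    # predicted = clf.predict(event_features(new_event).reshape(1, -1))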


Subject(s)
Facial Recognition , Stethoscopes , Emotions , Facial Expression , Facial Muscles , Humans