Results 1 - 4 of 4
1.
Sensors (Basel) ; 22(3)2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35161756

ABSTRACT

Studies have shown that ordinary color cameras can detect the subtle skin color changes caused by the cardiac cycle, so cameras can be used to monitor the pulse remotely in a non-contact manner. This technology for non-contact physiological measurement is called remote photoplethysmography (rPPG). Heart rate variability (HRV) analysis, a very important physiological feature, requires accurately recovering the peak time locations of the rPPG signal. This paper proposes an efficient spatiotemporal attention network (ESA-rPPGNet) to recover high-quality rPPG signals for heart rate variability analysis. First, 3D depth-wise separable convolutions and a structure based on MobileNetV3 are used to greatly reduce the time complexity of the network. Next, a lightweight attention block called 3D shuffle attention (3D-SA), which integrates spatial attention and channel attention, is designed to enable the network to effectively capture inter-channel and pixel-level dependencies. Moreover, a ConvGRU is introduced to further improve the network's ability to learn long-term spatiotemporal features. Experimental results show that, compared with existing methods, the method proposed in this paper achieves better performance and robustness in remote HRV analysis.
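
As an illustration of the building block mentioned in this abstract, the following is a minimal sketch of a 3D depth-wise separable convolution, assuming a PyTorch implementation; the class name, channel counts and clip size are illustrative and not taken from the paper.

    # Minimal sketch of a 3D depth-wise separable convolution block (assumed PyTorch).
    # Hyperparameters are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv3d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
            super().__init__()
            # Depth-wise: one 3D filter per input channel (groups == in_ch).
            self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                       stride=stride, padding=padding, groups=in_ch)
            # Point-wise: 1x1x1 convolution mixes channels.
            self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)
            self.bn = nn.BatchNorm3d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):  # x: (batch, channels, frames, height, width)
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    # Example: a clip of 8 RGB frames at 64x64 resolution.
    clip = torch.randn(1, 3, 8, 64, 64)
    block = DepthwiseSeparableConv3d(3, 16)
    print(block(clip).shape)  # torch.Size([1, 16, 8, 64, 64])

Splitting the 3D convolution into a per-channel filter followed by a 1x1x1 mixing step is what reduces the parameter and FLOP count relative to a full 3D convolution.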


Subject(s)
Algorithms , Signal Processing, Computer-Assisted , Heart Rate , Photoplethysmography , Skin
2.
Sensors (Basel) ; 22(2)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35062649

ABSTRACT

Remote photoplethysmography (rPPG) is a video-based, non-contact heart rate measurement technology. Most existing rPPG methods fail to exploit the spatiotemporal features of the video, which are essential for extracting the rPPG signal. In this paper, we propose a 3D central difference convolutional network (CDCA-rPPGNet) to measure heart rate, with an attention mechanism to combine spatial and temporal features. First, we crop regions of interest using facial landmarks and stitch them together. Next, the high-quality regions of interest are fed to CDCA-rPPGNet, which is based on central difference convolution and can enhance the spatiotemporal representation and capture rich temporal context by collecting time-difference information. In addition, we integrate an attention module into the network to strengthen its ability to extract channel and spatial features from the video and thus obtain more accurate rPPG signals. In summary, the three main contributions of this paper are as follows: (1) the proposed network, based on central difference convolution, better captures the subtle color changes needed to recover rPPG signals; (2) the proposed ROI extraction method provides high-quality input to the network; (3) an attention module strengthens the network's feature extraction. Extensive experiments are conducted on two public datasets, PURE and UBFC-rPPG. Our method achieves an MAE of 0.46 bpm, an RMSE of 0.90 bpm and a Pearson correlation coefficient (R) of 0.99 on the PURE dataset, and an MAE of 0.60 bpm, an RMSE of 1.38 bpm and an R of 0.99 on the UBFC-rPPG dataset, which demonstrates the effectiveness of the proposed approach.
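
For readers unfamiliar with central difference convolution, below is a minimal 3D sketch of the commonly used formulation (a vanilla convolution minus a theta-weighted convolution of the kernel's summed weights), assuming PyTorch; the layer name, theta value and tensor sizes are assumptions, not the paper's code.

    # Minimal sketch of a 3D central difference convolution (assumed PyTorch).
    # Follows the common CDC formulation; parameters are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CentralDifferenceConv3d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
            super().__init__()
            self.conv = nn.Conv3d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
            self.theta = theta

        def forward(self, x):
            out_normal = self.conv(x)
            # Summing the kernel over its temporal/spatial extent yields the
            # difference term, applied as a 1x1x1 convolution.
            kernel_diff = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
            out_diff = F.conv3d(x, kernel_diff, stride=self.conv.stride, padding=0)
            return out_normal - self.theta * out_diff

    clip = torch.randn(1, 3, 8, 64, 64)   # (batch, channels, frames, H, W)
    layer = CentralDifferenceConv3d(3, 16)
    print(layer(clip).shape)              # torch.Size([1, 16, 8, 64, 64])

With theta = 0 the layer reduces to an ordinary 3D convolution; larger theta values put more weight on the difference (gradient-like) term.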


Subject(s)
Algorithms , Signal Processing, Computer-Assisted , Face , Heart Rate , Photoplethysmography
3.
Sensors (Basel) ; 19(11)2019 Jun 07.
Article in English | MEDLINE | ID: mdl-31181668

ABSTRACT

Human motion classification based on the micro-Doppler effect has been widely used in various fields. However, classification performance degrades greatly when the wireless environment contains non-target micro-motion interference. In this case, the interference signal aliases with the signal of the target human motions and generates cross-terms, making the signals difficult to use for identifying target motions. Existing methods do not consider such non-target micro-motion interference and therefore resist it poorly. In this paper, we propose a target human motion classification system that works in scenarios with non-target micro-motion interference. Specifically, we build a continuous-wave radar transceiver operating in a low-frequency radar band using the software-defined radio equipment Universal Software Radio Peripheral (USRP) N210 to collect signals. We then apply Empirical Mode Decomposition and the S-transform successively to remove non-target micro-motion interference and improve the time-frequency resolution of the raw signal. Next, an energy aggregation method based on the S-method is proposed to suppress cross-terms and background noise. Finally, we extract a set of features and classify four human motions using Bagged Trees. Extensive experiments on the test-bed show that 97.3% classification accuracy can be achieved in scenarios with non-target micro-motion interference.
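
As a rough illustration of the final classification stage only, the following sketch trains "Bagged Trees" (an ensemble of decision trees fitted on bootstrap samples) with scikit-learn; the features are random placeholders, since the paper's EMD/S-transform pre-processing and energy-aggregation features are not reproduced here.

    # Minimal sketch of a Bagged Trees classifier (assumed scikit-learn).
    # Feature values are random placeholders standing in for the paper's
    # time-frequency features.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 12))    # 12 hypothetical features per radar sample
    y = rng.integers(0, 4, size=400)  # four motion classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # An ensemble of decision trees trained on bootstrap samples ("bagged trees").
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

On random placeholder features the accuracy will hover around chance; the point is only the structure of the bagging step, not the reported 97.3% result.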

4.
Sensors (Basel) ; 17(3)2017 Mar 20.
Article in English | MEDLINE | ID: mdl-28335540

ABSTRACT

Recognizing how a vehicle is steered and alerting drivers in real time is of utmost importance to vehicle and driver safety, since fatal accidents are often caused by dangerous maneuvers such as rapid turns and fast lane changes. Existing solutions based on video or in-vehicle sensors can identify dangerous maneuvers, but they are either affected by environmental conditions or require costly hardware. In the mobile computing era, smartphones have become key tools for developing innovative context-aware mobile systems. In this paper, we present a recognition system for dangerous vehicle steering based on the low-cost sensors found in a smartphone, namely the gyroscope and the accelerometer. To identify steering maneuvers, we focus on the vehicle's angular velocity, which is characterized by gyroscope data from a smartphone mounted in the vehicle. Three steering maneuvers are defined (turns, lane changes and U-turns), and a vehicle angular-velocity matching algorithm based on Fast Dynamic Time Warping (FastDTW) is adopted to recognize the steering. Extensive experiments show that the average accuracy of the presented recognition method reaches 95%, which indicates that the proposed smartphone-based approach is suitable for recognizing dangerous vehicle steering maneuvers.
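
To illustrate the template-matching idea, below is a minimal sketch that compares a gyroscope yaw-rate trace against maneuver templates using the fastdtw package; the templates, noise level and labels are hypothetical and not taken from the paper.

    # Minimal sketch of FastDTW-based maneuver matching (assumes `pip install fastdtw`).
    # Templates and thresholds are hypothetical placeholders.
    import numpy as np
    from fastdtw import fastdtw

    # Hypothetical yaw-rate templates (rad/s) for three maneuver classes.
    templates = {
        "turn":        np.concatenate([np.linspace(0, 0.5, 25), np.linspace(0.5, 0, 25)]),
        "lane_change": np.concatenate([np.linspace(0, 0.3, 15), np.linspace(0.3, -0.3, 20),
                                       np.linspace(-0.3, 0, 15)]),
        "u_turn":      np.concatenate([np.linspace(0, 0.7, 40), np.linspace(0.7, 0, 40)]),
    }

    def classify(trace):
        """Return the template label with the smallest FastDTW distance to the trace."""
        # For 1-D signals, fastdtw's default point-wise cost is the absolute difference.
        distances = {label: fastdtw(trace, tmpl)[0] for label, tmpl in templates.items()}
        return min(distances, key=distances.get)

    # Example: a noisy turn-like trace as might be recorded by the smartphone gyroscope.
    rng = np.random.default_rng(0)
    trace = templates["turn"] + rng.normal(0, 0.02, size=templates["turn"].shape)
    print(classify(trace))  # expected: "turn"

Dynamic time warping tolerates maneuvers performed at different speeds, which is why a warping-based distance rather than a plain Euclidean comparison is used for matching angular-velocity traces.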
