1.
IEEE J Biomed Health Inform ; 28(5): 3015-3028, 2024 May.
Article in English | MEDLINE | ID: mdl-38446652

ABSTRACT

Infant sleep-wake behavior is an essential indicator of physiological and neurological maturity, and its circadian transition is important for evaluating the recovery of preterm infants from inadequate physiological function and cognitive disorders. Camera-based infant sleep-wake monitoring has recently been investigated, but the generalization challenges caused by variance across infants and clinical environments have not been addressed for this application. In this paper, we conducted a multi-center clinical trial at four hospitals to improve the generalization of camera-based infant sleep-wake monitoring. Using face videos of 64 term and 39 preterm infants recorded in NICUs, we propose a novel sleep-wake classification strategy, called the consistent deep representation constraint (CDRC), that forces the convolutional neural network (CNN) to make consistent predictions for samples recorded under different conditions but sharing the same label, addressing the variances caused by infants and environments. Clinical validation shows that with CDRC, all CNN backbones obtain over 85% accuracy, sensitivity, and specificity in both the cross-age and cross-environment experiments, improving on the same backbones without CDRC by almost 15% in all metrics. This demonstrates that by improving the consistency of the deep representations of samples with the same state, we can significantly improve the generalization of infant sleep-wake classification.
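The abstract does not give the exact CDRC formulation, but the idea of forcing consistent representations for same-label samples from different conditions can be sketched as a simple penalty that pulls each sample's embedding toward its class mean (an illustrative reconstruction, not the paper's loss):

```python
import numpy as np

def consistency_loss(features, labels):
    """Illustrative consistency penalty in the spirit of CDRC.

    For each label (e.g. sleep=0, wake=1), penalize the spread of the
    CNN embeddings around their class mean, so samples from different
    infants/environments with the same state map to similar representations.
    features: (N, D) array of embeddings; labels: (N,) array of class ids.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    total, count = 0.0, 0
    for c in np.unique(labels):
        group = features[labels == c]      # all samples sharing label c
        if len(group) < 2:
            continue                       # nothing to constrain
        center = group.mean(axis=0)
        total += np.mean((group - center) ** 2)
        count += 1
    return total / max(count, 1)
```

In training, such a term would be added to the usual classification loss with a weighting coefficient.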


Subject(s)
Intensive Care Units, Neonatal , Sleep , Video Recording , Humans , Infant, Newborn , Video Recording/methods , Sleep/physiology , Monitoring, Physiologic/methods , Male , Female , Infant, Premature/physiology , Neural Networks, Computer , Wakefulness/physiology , Infant , Image Processing, Computer-Assisted/methods
2.
Article in English | MEDLINE | ID: mdl-38082835

ABSTRACT

Newborn face recognition is a meaningful application for hospital obstetrics, as it strengthens security against infant swapping and abduction through authentication protocols. Due to the scarcity of newborn face datasets, this topic has not been thoroughly studied. We conducted a clinical trial to build a dataset, named NEWBORN200, that collects face images from 200 newborns within an hour after birth. To the best of our knowledge, this is the largest newborn face dataset collected in a hospital for this application. The dataset was used to evaluate four recent ResNet-based deep models for newborn face recognition: ArcFace, CurricularFace, MagFace, and AdaFace. The experimental results show that AdaFace performs best, obtaining 55.24% verification accuracy at a 0.1% false accept rate in the open set and 78.76% rank-1 identification accuracy in the closed set. This demonstrates the feasibility of deep learning for newborn face recognition and suggests that a key direction for improvement is robustness to varying postures.
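The headline metric here, verification accuracy at a fixed false accept rate, can be computed by thresholding impostor-pair similarity scores at the desired FAR and measuring how many genuine pairs clear that threshold (a standard evaluation sketch; the variable names are illustrative, not from the paper):

```python
import numpy as np

def verification_rate_at_far(genuine, impostor, far=0.001):
    """True accept rate at a fixed false accept rate.

    genuine/impostor: similarity scores for matching and non-matching
    face pairs. The threshold is set to the (1 - far) quantile of the
    impostor scores, so at most `far` of impostor pairs are accepted;
    the returned value is the fraction of genuine pairs accepted.
    """
    threshold = np.quantile(np.asarray(impostor, dtype=float), 1.0 - far)
    return float(np.mean(np.asarray(genuine, dtype=float) >= threshold))
```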


Subject(s)
Biometric Identification , Facial Recognition , Humans , Infant , Infant, Newborn , Benchmarking , Biometric Identification/methods , Databases, Factual , Face
3.
Article in English | MEDLINE | ID: mdl-38082939

ABSTRACT

It has been reported that monitoring sleep postures is useful for the treatment and prevention of diseases such as obstructive sleep apnea and heart failure. Camera-based sleep posture detection is attractive for its comfort and convenience of use. However, the main challenge is detecting postures from images in which the body is occluded by bed sheets or covers. To address this issue, we propose a novel occlusion-robust sleep posture detection method that exploits the body rolling motion in a video. It uses head orientation to indicate the posture direction (supine, left lateral, or right lateral), triggered by a full-body rolling motion (a sign of posture change). The experimental results show that, compared with state-of-the-art approaches such as skeleton-based (MediaPipe) and full-image ResNet-based methods, our method obtains clear improvements in sleep posture detection under heavy body occlusion, with an average precision, recall, and F1-score of 0.974, 0.993, and 0.983, respectively. The next step is to integrate the sleep posture detection algorithm into a camera-based sleep monitoring system for clinical validation.
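The rolling-triggered logic described above can be sketched as a small state machine: the posture estimate only updates when a full-body rolling motion is detected, and head yaw then indicates the direction (a minimal sketch; the yaw threshold and the rolling detector itself are assumptions, not taken from the paper):

```python
def update_posture(posture, rolling_detected, head_yaw_deg):
    """One step of a rolling-triggered posture tracker.

    posture:          last known posture string
    rolling_detected: whether a full-body rolling motion was seen
    head_yaw_deg:     head orientation, positive = turned left (assumed)
    While the body is occluded and no rolling occurs, the last known
    posture is kept; a roll triggers re-estimation from head orientation.
    """
    if not rolling_detected:
        return posture                    # occluded body: keep last estimate
    if head_yaw_deg > 30:
        return "left lateral"
    if head_yaw_deg < -30:
        return "right lateral"
    return "supine"
```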


Subject(s)
Sleep Apnea, Obstructive , Sleep , Humans , Posture , Algorithms , Motion
4.
Article in English | MEDLINE | ID: mdl-38083039

ABSTRACT

Multi-wavelength pulse transit time (MV-PTT) is a potential tool for remote blood pressure (BP) monitoring. It uses two wavelengths, typically green (G) and near-infrared (NIR), with different skin penetration depths to measure the PTT between the artery and the arterioles at a single skin site for BP estimation. However, the impact of wavelength selection on MV-PTT-based BP calibration is unknown. In this paper, we explore combinations of different camera photoplethysmography wavelengths for BP measurement using a modified narrow-band camera centered at G-550/R-660/NIR-850 nm, focusing on the comparison between G-R (fully visible) and G-NIR (hybrid). The experiment was conducted on 17 adult participants in a dark chamber, with their BP significantly altered by an ice-water stimulation protocol. The experimental results show that the MV-PTT obtained by G-NIR correlates more strongly with BP, and the fitted model has a lower mean absolute error for both systolic pressure (5.78 mmHg) and diastolic pressure (6.67 mmHg) than the alternatives. This confirms that a hybrid combination of visible (G) and NIR wavelengths remains essential for accurate BP calibration, since their different skin penetration depths allow proper sensing of different skin layers.
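A common way to obtain the transit-time delay between two PPG waveforms is cross-correlation; the following sketch estimates the lag between the green and NIR camera-PPG signals (illustrative only; the paper's actual delay estimator is not specified in the abstract):

```python
import numpy as np

def multi_wavelength_ptt(ppg_green, ppg_nir, fs):
    """Estimate the delay (in seconds) of the NIR waveform relative to
    the green waveform via normalized cross-correlation.

    ppg_green, ppg_nir: 1-D arrays of equal length sampled at fs Hz.
    A positive return value means the NIR signal lags the green one.
    """
    g = (ppg_green - np.mean(ppg_green)) / (np.std(ppg_green) + 1e-12)
    n = (ppg_nir - np.mean(ppg_nir)) / (np.std(ppg_nir) + 1e-12)
    xcorr = np.correlate(n, g, mode="full")      # lags -(N-1)..(N-1)
    lag = int(np.argmax(xcorr)) - (len(g) - 1)   # samples of delay
    return lag / fs
```

The resulting PTT values would then be fed into a fitted calibration model (e.g., a regression against cuff BP readings) as the paper describes.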


Subject(s)
Blood Pressure Determination , Pulse Wave Analysis , Adult , Humans , Blood Pressure/physiology , Blood Pressure Determination/methods , Heart Rate/physiology , Monitoring, Physiologic
5.
Article in English | MEDLINE | ID: mdl-38083776

ABSTRACT

Infant crying provides useful clinical insights that help caregivers make appropriate medical decisions, such as in obstetrics. However, robust infant cry detection in real clinical settings (e.g., obstetrics) remains challenging due to the limited training data for such scenarios. In this paper, we propose a scene adaptation framework (SAF) with two learning stages that can quickly adapt a cry detection model to a new environment. The first stage uses the acoustic principle that mixed sources in an audio signal are approximately additive to imitate the sounds of clinical settings using public datasets. The second stage uses mutual learning to mine the characteristics of infant cries shared between the clinical setting and the public dataset, adapting to the scene in an unsupervised manner. A clinical trial was conducted in obstetrics, where crying audio from 200 infants was collected. The four classifiers evaluated for infant cry detection improved their F1-scores by nearly 30% when using SAF, achieving performance similar to supervised learning on the target setting. SAF is thus demonstrated to be an effective plug-and-play tool for improving infant cry detection in new clinical settings. Our code is available at https://github.com/contactless-healthcare/Scene-Adaption-for-Infant-Cry-Detection.
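The additive-mixture principle behind the first stage can be sketched as mixing a cry recording with clinical background noise at a target signal-to-noise ratio (a minimal sketch of the imitation step; the paper's exact mixing protocol is an assumption here):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Additively mix a cry signal with background noise at a target SNR.

    The noise is scaled so that 10*log10(P_signal / P_noise_scaled)
    equals snr_db, then simply added, following the principle that
    mixed audio sources are approximately additive.
    """
    signal = np.asarray(signal, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(signal)]
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise
```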


Subject(s)
Crying , Obstetrics , Humans , Infant , Acoustics , Sound , Sound Spectrography
6.
Mil Med Res ; 10(1): 44, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37749643

ABSTRACT

Auscultation is crucial for diagnosing respiratory system diseases. However, traditional stethoscopes have inherent limitations, such as inter-listener variability and subjectivity, and they cannot record respiratory sounds for offline/retrospective diagnosis or remote prescriptions in telemedicine. Digital stethoscopes have overcome these limitations by allowing physicians to store and share respiratory sounds for consultation and education. On this basis, machine learning, particularly deep learning, enables fully automatic analysis of lung sounds that may pave the way for intelligent stethoscopes. This review therefore provides a comprehensive overview of deep learning algorithms for lung sound analysis, emphasizing the significance of artificial intelligence (AI) in this field. We cover each component of deep learning-based lung sound analysis systems, including the task categories, public datasets, denoising methods, and, most importantly, existing deep learning methods, i.e., the state-of-the-art approaches that convert lung sounds into two-dimensional (2D) spectrograms and use convolutional neural networks for end-to-end recognition of respiratory diseases or abnormal lung sounds. Additionally, this review highlights current challenges in the field, including device variety, noise sensitivity, and the poor interpretability of deep models. To address the poor reproducibility and fragmentation of deep learning work in this field, this review also provides a scalable and flexible open-source framework that aims to standardize the algorithmic workflow and provide a solid basis for replication and future extension: https://github.com/contactless-healthcare/Deep-Learning-for-Lung-Sound-Analysis.
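The state-of-the-art pipeline the review describes starts by converting a 1-D lung sound recording into a 2-D spectrogram that a CNN can consume. A minimal log-magnitude STFT sketch follows (published systems typically use mel filterbanks and dataset-specific normalization on top of this; window and hop sizes here are illustrative defaults):

```python
import numpy as np

def log_spectrogram(audio, win=256, hop=128):
    """Convert a 1-D audio signal into a 2-D log-magnitude spectrogram.

    Slides a Hann-windowed frame over the signal, takes the real FFT of
    each frame, and stacks log(1 + magnitude) columns into an image of
    shape (freq_bins, time_frames) suitable as CNN input.
    """
    window = np.hanning(win)
    frames = []
    for start in range(0, len(audio) - win + 1, hop):
        frame = audio[start:start + win] * window
        mag = np.abs(np.fft.rfft(frame))        # win//2 + 1 frequency bins
        frames.append(np.log1p(mag))
    return np.stack(frames, axis=1)
```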


Subject(s)
Deep Learning , Stethoscopes , Humans , Artificial Intelligence , Respiratory Sounds/diagnosis , Reproducibility of Results , Retrospective Studies
7.
IEEE Trans Biomed Eng ; 70(7): 2215-2226, 2023 07.
Article in English | MEDLINE | ID: mdl-37021995

ABSTRACT

Community-acquired pneumonia (CAP) is a significant cause of child mortality globally, partly due to the lack of ubiquitous monitoring methods. Clinically, the wireless stethoscope is a promising solution, since lung sounds with crackles and tachypnea are considered typical symptoms of CAP. In this paper, we carried out a multi-center clinical trial in four hospitals to investigate the feasibility of using a wireless stethoscope for childhood CAP diagnosis and prognosis. The trial collected both left and right lung sounds from children with CAP at the times of diagnosis, improvement, and recovery. A bilateral pulmonary audio-auxiliary model (BPAM) is proposed for lung sound analysis. It learns the underlying pathological paradigm for CAP classification by mining the contextual information of the audio while preserving the structured information of the breathing cycle. Clinical validation shows that the specificity and sensitivity of BPAM exceed 92% for both CAP diagnosis and prognosis in the subject-dependent experiment, and exceed 50% for CAP diagnosis and 39% for CAP prognosis in the subject-independent experiment. Almost all benchmarked methods improve when the left and right lung sounds are fused, pointing to directions for both hardware design and algorithmic improvement.


Subject(s)
Pneumonia , Stethoscopes , Humans , Child , Respiratory Sounds/diagnosis , Pneumonia/diagnosis , Pneumonia/pathology , Lung
8.
Chem Biol Interact ; 349: 109682, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34610338

ABSTRACT

Although the toxicity of acrylamide (ACR) has been extensively investigated in different experimental models, its perturbations of multiple nodes of the cellular signaling network have not been systematically connected. In this study, changes at different omics layers in ACR-exposed Saccharomyces cerevisiae cells were monitored using a multi-omics strategy. The analysis highlighted impairment of the oxidative-reductive balance, energy metabolism, lipid metabolism, nucleotide metabolism, and ribosome function in yeast cells. In response to acute ACR damage, glutathione synthesis was upregulated, protein degradation was accelerated, and autophagy flux was initiated. Meanwhile, yeast upregulated the gene expression of enzymes in carbohydrate metabolism and sped up fatty acid oxidation to compensate for energy depletion. Importantly, the multi-omics strategy captured features rarely addressed in previous studies of ACR toxicology, including blocked de novo nucleotide synthesis, decreased levels of the metabolic enzyme cofactors thiamine and D-biotin, increased intracellular concentrations of the neurotoxic N-methyl-D-aspartic acid and L-glutamic acid, and release of the death mediator ceramide. The ACR perturbation network constructed in this work and the newly discovered damage features provide a theoretical basis for subsequent point-to-point toxicological studies.


Subject(s)
Acrylamide/toxicity , Saccharomyces cerevisiae/drug effects , Carbohydrate Metabolism/drug effects , Metabolome/drug effects , Oxidation-Reduction
9.
R Soc Open Sci ; 8(8): 201976, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34457321

ABSTRACT

In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies consider only the spatio-temporal characteristics of EEG and model on that basis, without considering personality factors, let alone the potential correlations between different subjects. Given the particular nature of emotion, different individuals may respond subjectively differently to the same physical stimulus, so EEG-based emotion recognition methods should tend toward personalization. This paper models personalized EEG emotion recognition at the macro and micro levels. At the macro level, we use personality characteristics to group individuals, following the principle that 'birds of a feather flock together'. At the micro level, we employ deep learning models to extract the spatio-temporal features of EEG. To evaluate the effectiveness of our method, we conduct an EEG emotion recognition experiment on the ASCERTAIN dataset. Our experimental results show that the recognition accuracy of the proposed method is 72.4% on valence and 75.9% on arousal, which is 10.2% and 9.1% higher, respectively, than without personalization.
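The macro-level 'birds of a feather' step amounts to routing each subject to a group of personality-similar subjects before applying that group's EEG model. A nearest-centroid sketch of the routing follows (illustrative only; the abstract does not specify the clustering method, and the trait dimensions are assumptions):

```python
import numpy as np

def assign_personality_cluster(traits, centroids):
    """Route a subject to the closest personality cluster.

    traits:    (D,) personality scores for one subject
               (e.g., Big Five dimensions, assumed here)
    centroids: (K, D) mean trait vectors of K precomputed clusters
    Returns the index of the nearest cluster by Euclidean distance;
    the micro-level EEG model trained on that cluster would then be used.
    """
    d = np.linalg.norm(np.asarray(centroids, dtype=float)
                       - np.asarray(traits, dtype=float), axis=1)
    return int(np.argmin(d))
```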
