Results 1 - 7 of 7
1.
Front Neurol ; 15: 1386728, 2024.
Article in English | MEDLINE | ID: mdl-38784909

ABSTRACT

Acuity assessments are vital for timely interventions and fair resource allocation in critical care settings. Conventional acuity scoring systems depend heavily on subjective patient assessments, leaving room for implicit bias and error. These assessments are often manual, time-consuming, and intermittent, and they are challenging for healthcare providers to interpret accurately. The risk of bias and error is likely most pronounced in time-constrained, high-stakes environments such as critical care settings. Furthermore, such scores do not incorporate other information, such as patients' mobility level, which can indicate recovery or deterioration in the intensive care unit (ICU), especially at a granular level. We hypothesized that wearable sensor data could assist in assessing patient acuity at a granular level, especially in conjunction with clinical data from electronic health records (EHR). In this prospective study, we evaluated the impact of integrating mobility data collected from wrist-worn accelerometers with clinical data obtained from the EHR for estimating acuity. Accelerometry data were collected from 87 patients wearing accelerometers on their wrists in an academic hospital setting. The data were evaluated using five deep neural network models: VGG, ResNet, MobileNet, SqueezeNet, and a custom Transformer network. These models outperformed a rule-based clinical score (Sequential Organ Failure Assessment, SOFA) used as a baseline when predicting acuity state (for ground truth, patients were labeled as unstable if they needed life-supporting therapies and as stable otherwise), particularly in terms of precision, sensitivity, and F1 score. The results demonstrate that integrating accelerometer data with demographic and clinical variables improves predictive performance compared with traditional scoring systems in healthcare. Deep learning models consistently outperformed the SOFA baseline across various scenarios, showing notable improvements in metrics such as the area under the receiver operating characteristic (ROC) curve (AUC), precision, sensitivity, specificity, and F1 score. The most comprehensive scenario, leveraging accelerometer, demographic, and clinical data, achieved the highest AUC of 0.73, compared to 0.53 when using the SOFA score as the baseline, with significant improvements in precision (0.80 vs. 0.23), specificity (0.79 vs. 0.73), and F1 score (0.77 vs. 0.66). This study demonstrates a novel approach that goes beyond the simplistic differentiation between stable and unstable conditions. By incorporating mobility and comprehensive patient information, we distinguish between these states in critically ill patients and capture essential nuances in physiology and functional status. Unlike rudimentary definitions, such as equating low blood pressure with instability, our methodology delves deeper, offering a more holistic understanding and potentially valuable insights for acuity assessment.
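The fusion idea described in this abstract can be illustrated with a minimal sketch: summary features are extracted from a wrist-accelerometer window, concatenated with clinical and demographic variables, and fed to a binary acuity classifier scored with AUC, precision, sensitivity, and F1. The synthetic data, feature choices, and gradient-boosting classifier below are illustrative assumptions, not the study's pipeline, which applies deep networks (VGG, ResNet, MobileNet, SqueezeNet, and a Transformer) to the accelerometry.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, window_len = 200, 3600        # assumed: 1 h of 1 Hz wrist accelerometry
accel = rng.normal(size=(n_patients, window_len))   # synthetic acceleration magnitude
clinical = rng.normal(size=(n_patients, 6))         # synthetic demographics/vitals/labs
y = rng.integers(0, 2, size=n_patients)             # 1 = unstable (needed life support)

# Hand-crafted summaries stand in for the learned representations of the paper.
accel_feats = np.column_stack([
    accel.mean(axis=1),
    accel.std(axis=1),
    np.abs(np.diff(accel, axis=1)).mean(axis=1),    # crude activity/jerk proxy
])
X = np.hstack([accel_feats, clinical])              # early fusion with clinical data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
print("AUC:", roc_auc_score(y_te, prob))
print("precision:", precision_score(y_te, pred, zero_division=0))
print("sensitivity:", recall_score(y_te, pred, zero_division=0))
print("F1:", f1_score(y_te, pred, zero_division=0))
```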

2.
IEEE Int Conf Bioinform Biomed Workshops ; 2023: 2207-2212, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38463539

ABSTRACT

Quantifying pain in patients admitted to intensive care units (ICUs) is challenging because of the increased prevalence of communication barriers in this patient population. Previous research has posited a positive correlation between pain and physical activity in critically ill patients. In this study, we advance this hypothesis by building machine learning classifiers to examine the ability of accelerometer data collected from daily wearables to predict self-reported pain levels experienced by patients in the ICU. We trained multiple machine learning (ML) models, including logistic regression, CatBoost, and XGBoost, on statistical features extracted from the accelerometer data combined with previous pain measurements and patient demographics. Following previous studies that showed a change in pain sensitivity in ICU patients at night, we performed the pain classification task separately for daytime and nighttime pain reports. In the pain versus no-pain classification setting, logistic regression gave the best classifier during the daytime (AUC: 0.72, F1-score: 0.72), and CatBoost gave the best classifier at nighttime (AUC: 0.82, F1-score: 0.82). When distinguishing between pain severity levels, the performance of logistic regression dropped to an AUC of 0.61 and F1-score of 0.62 (mild vs. moderate pain, nighttime), and CatBoost was similarly affected, with an AUC of 0.61 and F1-score of 0.60 (moderate vs. severe pain, daytime). The inclusion of analgesic information benefited the classification between moderate and severe pain. SHAP analysis was conducted to identify the most significant features in each setting. It assigned the highest importance to accelerometer-related features in all evaluated settings but also showed the contribution of other features, such as age and medications, in specific contexts. In conclusion, accelerometer data combined with patient demographics and previous pain measurements can be used to distinguish painful from pain-free episodes in the ICU and can be combined with analgesic information to provide moderately accurate classification between painful episodes of different severities.
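As a rough illustration of the classification setup, the sketch below trains a logistic regression model, separately for daytime and nighttime reports, on accelerometer summary statistics combined with age and the previous pain score. All feature names and the synthetic data are assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def make_synthetic_reports(n=300):
    feats = np.column_stack([
        rng.normal(size=n),          # mean activity in the hour before the report
        rng.normal(size=n),          # activity standard deviation
        rng.integers(20, 90, n),     # age
        rng.integers(0, 11, n),      # previous self-reported pain (0-10 scale)
    ])
    labels = rng.integers(0, 2, n)   # 1 = pain reported
    return feats, labels

for period in ("daytime", "nighttime"):
    X, y = make_synthetic_reports()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    print(period,
          "AUC:", round(roc_auc_score(y_te, prob), 2),
          "F1:", round(f1_score(y_te, (prob >= 0.5).astype(int)), 2))
```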

3.
Comput Methods Programs Biomed ; 127: 144-64, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26775139

ABSTRACT

An electrocardiogram (ECG) measures the electrical activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. Over the last decades, many works have been developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey current state-of-the-art methods for automated ECG-based classification of heartbeat abnormalities, presenting the ECG signal preprocessing, heartbeat segmentation techniques, feature description methods, and learning algorithms used. In addition, we describe some of the databases used for method evaluation, as indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss the limitations and drawbacks of the methods in the literature, present concluding remarks and future challenges, and propose an evaluation workflow to guide authors in future work.


Subject(s)
Arrhythmias, Cardiac/diagnosis , Heart Rate , Algorithms , Arrhythmias, Cardiac/physiopathology , Automation , Electrocardiography , Humans , Surveys and Questionnaires
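The heartbeat segmentation step common to the surveyed pipelines can be sketched as follows: detect R peaks and cut a fixed-length window around each one to form per-beat samples for a downstream classifier. The sampling rate, window lengths, and synthetic signal below are assumptions rather than values from any particular surveyed method.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 360                                     # Hz, typical of MIT-BIH recordings
t = np.arange(0, 10, 1 / fs)
# Crude synthetic ECG: a train of sharp "R" spikes plus noise.
ecg = np.zeros_like(t)
ecg[::fs] = 1.0                              # one beat per second
ecg += 0.05 * np.random.default_rng(0).normal(size=ecg.size)

# R-peak detection; the minimum distance (~0.4 s) rules out double detections.
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

# Segmentation: 90 samples before and 110 after each R peak (~0.55 s window).
pre, post = 90, 110
beats = [ecg[p - pre:p + post] for p in peaks
         if p - pre >= 0 and p + post <= ecg.size]
X = np.stack(beats)                          # (n_beats, 200) matrix for a classifier
print(X.shape)
```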
4.
J Opt Soc Am A Opt Image Sci Vis ; 32(3): 431-42, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-26366654

ABSTRACT

Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
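The one-vs-all PLS model-building stage can be sketched as follows, assuming pre-extracted feature vectors: one PLS regression model is fitted per gallery subject (+1 for that subject's samples, -1 for everyone else), and a probe is assigned to the subject whose model yields the highest response. The preprocessing and feature-extraction stages that reduce the modality gap are omitted here, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 5, 8, 64
# Gallery: visible-domain feature vectors, grouped by subject identity.
gallery = rng.normal(size=(n_subjects * per_subject, dim))
labels = np.repeat(np.arange(n_subjects), per_subject)

# One PLS regression model per subject: +1 for that subject, -1 for all others.
models = []
for s in range(n_subjects):
    y = np.where(labels == s, 1.0, -1.0)
    models.append(PLSRegression(n_components=5).fit(gallery, y))

# Probe (e.g., preprocessed thermal-domain features): pick the model with the
# largest regression response.
probe = rng.normal(size=(1, dim))
scores = np.array([m.predict(probe).ravel()[0] for m in models])
print("predicted identity:", int(scores.argmax()))
```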

5.
IEEE Trans Vis Comput Graph ; 21(1): 4-17, 2015 Jan.
Article in English | MEDLINE | ID: mdl-26357017

ABSTRACT

Automatic data classification is a computationally intensive task that yields variable precision and is highly sensitive to the classifier configuration and to the data representation, particularly for evolving data sets. Some of these issues are best handled by methods that give users control over the classification steps. In this paper, we propose a visual data classification methodology that supports users in tasks related to categorization, such as training set selection; model creation, application, and verification; and classifier tuning. The approach is therefore well suited for incremental classification, which is present in many applications with evolving data sets. Data set visualization is accomplished by means of point placement strategies, and we exemplify the method through multidimensional projections and Neighbor Joining trees. The same methodology can be employed by a user who wishes to create his or her own ground truth (or perspective) from a previously unlabeled data set. We validate the methodology through its application to categorization scenarios involving image and text data sets, covering the creation, application, verification, and adjustment of classification models.
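A simplified, non-interactive sketch of the loop this methodology supports is shown below: project the data so a user could inspect it, take a small user-selected training set (simulated here by random per-class sampling), build a model, and verify it on the remainder. The projection (PCA) and classifier (k-NN) are stand-ins for whatever a user would actually choose; the paper demonstrates multidimensional projections and Neighbor Joining trees with user-driven selection.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
layout = PCA(n_components=2).fit_transform(X)   # 2-D layout a user could brush over
print("layout for display:", layout.shape)

# Stand-in for interactive selection: a few points per class as the
# user-chosen training set.
rng = np.random.default_rng(0)
train_idx = np.concatenate([rng.choice(np.where(y == c)[0], 5, replace=False)
                            for c in np.unique(y)])
test_mask = np.ones(len(y), dtype=bool)
test_mask[train_idx] = False

clf = KNeighborsClassifier(n_neighbors=3).fit(X[train_idx], y[train_idx])
acc = accuracy_score(y[test_mask], clf.predict(X[test_mask]))
print("verification accuracy:", round(acc, 3))
# In the interactive setting, misclassified points would guide the next round
# of training-set refinement (incremental classification).
```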

6.
IEEE Trans Image Process ; 24(12): 4726-40, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26276988

ABSTRACT

Despite important recent advances, the vulnerability of biometric systems to spoofing attacks remains an open problem. Spoofing attacks occur when impostor users present synthetic biometric samples of a valid user to the biometric system in an attempt to deceive it. In the case of face biometrics, a spoofing attack consists of presenting a fake sample (e.g., a photograph, digital video, or even a 3D mask) containing the facial information of a valid user to the acquisition sensor. In this paper, we introduce a low-cost, software-based method for detecting spoofing attempts in face recognition systems. Our hypothesis is that, during acquisition, inevitable artifacts are left behind in the recaptured biometric samples, allowing us to create a discriminative signature of the video generated by the biometric sensor. To characterize these artifacts, we extract time-spectral feature descriptors from the video, which can be understood as low-level descriptors that gather temporal and spectral information across the biometric sample, and we use the visual codebook concept to compute mid-level feature descriptors from the low-level ones. Such descriptors are more robust than the low-level ones for detecting several kinds of attacks. The experimental results show the effectiveness of the proposed method for detecting different types of attacks in a variety of scenarios and data sets, including photos, videos, and 3D masks.


Subject(s)
Biometric Identification/methods , Computer Security , Face/anatomy & histology , Image Processing, Computer-Assisted/methods , Video Recording/classification , Databases, Factual , Humans
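The low-level-to-mid-level pipeline can be sketched as follows: toy time-spectral descriptors are computed per spatial block of a video, quantized against a k-means visual codebook, and pooled into a per-video histogram that feeds a linear classifier. The descriptor design, codebook size, and synthetic videos are assumptions; the paper's actual time-spectral descriptors are more elaborate.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def low_level_descriptors(video, patch=8):
    """Toy time-spectral descriptors: per spatial block, the magnitude spectrum
    of its mean intensity over time (temporal + spectral information)."""
    t, h, w = video.shape
    descs = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            signal = video[:, i:i + patch, j:j + patch].mean(axis=(1, 2))
            descs.append(np.abs(np.fft.rfft(signal)))
    return np.array(descs)

# Synthetic "live" and "attack" videos (frames x height x width).
videos = [rng.normal(size=(32, 64, 64)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)          # 0 = live, 1 = spoof

all_descs = [low_level_descriptors(v) for v in videos]
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(all_descs))

def mid_level(descs):
    """Mid-level descriptor: normalized histogram of codeword assignments."""
    words = codebook.predict(descs)
    return np.bincount(words, minlength=codebook.n_clusters) / len(words)

X = np.array([mid_level(d) for d in all_descs])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```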
7.
IEEE Trans Image Process ; 21(4): 2245-55, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22128005

ABSTRACT

The face identification task, which matches unknown faces against a gallery of known people, has been studied for several decades. There are very accurate techniques for performing face identification in controlled environments, particularly when large numbers of samples are available for each face. However, face identification in uncontrolled environments or with a lack of training data is still an unsolved problem. We employ a large and rich set of feature descriptors (more than 70,000 descriptors) for face identification, using partial least squares to perform multichannel feature weighting. We then extend the method to a tree-based discriminative structure to reduce the time required to evaluate probe samples. The method is evaluated on the Facial Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC) data sets. Experiments show that our identification method outperforms current state-of-the-art results, particularly for identifying faces acquired across varying conditions.


Subject(s)
Algorithms , Artificial Intelligence , Biometry/methods , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
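The tree-based search that reduces probe evaluation cost can be sketched as follows: gallery subjects are split recursively, a PLS model at each internal node routes the probe to one branch, and only about O(log N) models are evaluated instead of one per subject. The feature dimensionality, split rule, and synthetic data below are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 16, 6, 128
subject_means = rng.normal(scale=3.0, size=(n_subjects, dim))
X = np.repeat(subject_means, per_subject, axis=0) + rng.normal(size=(n_subjects * per_subject, dim))
labels = np.repeat(np.arange(n_subjects), per_subject)

def build_tree(subjects):
    """Recursively split the subject set; each internal node holds a PLS model
    trained to send samples of the left half to +1 and the right half to -1."""
    if len(subjects) == 1:
        return {"leaf": subjects[0]}
    left, right = subjects[:len(subjects) // 2], subjects[len(subjects) // 2:]
    mask = np.isin(labels, subjects)
    y = np.where(np.isin(labels[mask], left), 1.0, -1.0)
    model = PLSRegression(n_components=4).fit(X[mask], y)
    return {"model": model, "left": build_tree(left), "right": build_tree(right)}

def identify(tree, probe):
    """Route the probe down the tree, evaluating one model per level."""
    while "leaf" not in tree:
        score = tree["model"].predict(probe.reshape(1, -1)).ravel()[0]
        tree = tree["left"] if score > 0 else tree["right"]
    return tree["leaf"]

tree = build_tree(list(range(n_subjects)))
probe = X[labels == 7].mean(axis=0)           # a probe resembling subject 7
print("identified as subject", identify(tree, probe))
```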