1.
Neural Netw ; 176: 106338, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38692190

ABSTRACT

Electroencephalography (EEG)-based Brain-Computer Interface (BCI) systems play a significant role in facilitating how individuals with neurological impairments effectively interact with their environment. In real-world applications of BCI systems for clinical assistance and rehabilitation training, the EEG classifier often needs to learn from sequentially arriving subjects in an online manner. As EEG signal patterns can differ significantly across subjects, a classifier decoding in an online streaming scenario can easily erase knowledge of previously learnt subjects after learning on later ones, a phenomenon known as catastrophic forgetting. In this work, we tackle this problem with a memory-based approach, which considers the following conditions: (1) subjects arrive sequentially in an online manner, with no large-scale dataset available for joint training beforehand; (2) data volume from the different subjects could be imbalanced; (3) decoding difficulty of the sequential streaming signal varies; (4) continual classification over a long time is required. This online sequential EEG decoding problem is more challenging than classic cross-subject EEG decoding, as there is no large-scale training data from the different subjects available beforehand. The proposed model keeps a small balanced memory buffer during sequential learning, with memory data dynamically selected based on joint consideration of data volume and informativeness. Furthermore, for the more general scenario where subject identity is unknown to the EEG decoder, i.e., the subject-agnostic scenario, we propose a kernel-based subject shift detection method that identifies underlying subject changes on the fly in a computationally efficient manner. We develop challenging benchmarks of streaming EEG data from sequentially arriving subjects with both balanced and imbalanced data volumes, and perform extensive experiments with a detailed ablation study on the proposed model. The results show the effectiveness of the proposed approach, enabling the decoder to maintain performance on all previously seen subjects over a long period of sequential decoding and demonstrating its potential for real-world applications.
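The balanced-memory idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the class name, the equal-share quota per subject, and the per-subject reservoir-sampling rule are all assumptions standing in for the paper's informativeness-aware selection.

```python
import random

class BalancedMemoryBuffer:
    """Sketch of a subject-balanced replay buffer: each seen subject keeps
    an equal share of a fixed-size memory, so high-volume subjects cannot
    crowd out low-volume ones during sequential learning."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_subject = {}   # subject id -> list of (sample, label)
        self.seen = {}          # subject id -> number of samples observed

    def add(self, subject, sample, label):
        slots = self.per_subject.setdefault(subject, [])
        self.seen[subject] = self.seen.get(subject, 0) + 1
        quota = self.capacity // max(1, len(self.per_subject))
        # A new subject shrinks everyone's quota; trim stored excess.
        for stored in self.per_subject.values():
            del stored[quota:]
        if len(slots) < quota:
            slots.append((sample, label))
        else:
            # Reservoir sampling keeps a uniform subsample per subject.
            j = random.randrange(self.seen[subject])
            if j < quota:
                slots[j] = (sample, label)

    def replay_batch(self):
        return [item for stored in self.per_subject.values() for item in stored]
```

In a replay-based learner, each incoming batch would be trained jointly with `replay_batch()` to mitigate forgetting of earlier subjects.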


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Memory , Electroencephalography/methods , Humans , Memory/physiology , Signal Processing, Computer-Assisted , Brain/physiology , Algorithms
2.
IEEE Winter Conf Appl Comput Vis ; 2023: 455-466, 2023 Jan.
Article in English | MEDLINE | ID: mdl-38170053

ABSTRACT

Automated cellular instance segmentation has been used to accelerate biological research for the past two decades, and recent advancements have produced higher-quality results with less effort from the biologist. Most current endeavors focus on cutting the researcher out of the picture entirely by building highly generalized models. However, these models invariably fail when faced with novel data distributed differently from the data used for training. Rather than approaching the problem with methods that presume the availability of large amounts of target data and computing power for retraining, in this work we address the even greater challenge of designing an approach that requires minimal amounts of new annotated data as well as training time. We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently. A large set of results shows that 3 to 5 annotations lead to models that: 1) significantly mitigate covariate shift effects; 2) match or surpass other adaptation methods; 3) even approach methods that have been fully retrained on the target distribution. Adaptation training takes only a few minutes, paving a path towards a balance between model performance, computing requirements, and expert-level annotation needs.
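For readers unfamiliar with contrastive losses, a generic supervised contrastive loss can be sketched as below. This is a minimal, dependency-free illustration of the general family of losses the abstract refers to; the paper's specialized losses differ, and the function name and temperature value are assumptions.

```python
import math

def sup_contrastive_loss(embeddings, labels, temperature=0.1):
    """Minimal supervised contrastive loss: for each anchor embedding,
    pull same-label embeddings together and push different-label ones
    apart in cosine-similarity space."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u)) or 1.0
    z = [[a / norm(e) for a in e] for e in embeddings]  # L2-normalize
    n, total, anchors = len(z), 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Softmax denominator over all other samples.
        denom = sum(math.exp(dot(z[i], z[j]) / temperature)
                    for j in range(n) if j != i)
        for j in positives:
            total += -math.log(math.exp(dot(z[i], z[j]) / temperature) / denom)
        anchors += len(positives)
    return total / max(anchors, 1)
```

The loss is low when same-label embeddings cluster tightly, which is why a handful of annotated target samples can steer an encoder efficiently.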

3.
J Trauma Acute Care Surg ; 92(3): 499-503, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35196303

ABSTRACT

INTRODUCTION: Shock index (SI) and delta shock index (∆SI) predict mortality and blood transfusion in trauma patients. This study aimed to evaluate the predictive ability of SI and ∆SI in a rural environment with prolonged transport times and transfers from critical access hospitals or level IV trauma centers. METHODS: We completed a retrospective database review at an American College of Surgeons verified level 1 trauma center covering a 2-year period. Adult subjects included in the analysis had sustained torso trauma; subjects with missing data or severe head trauma were excluded. For analysis, Poisson regression and binomial logistic regression were used to study the effect of transport time and SI/∆SI on resource utilization and outcomes. p < 0.05 was considered significant. RESULTS: Complete data were available on 549 scene patients and 127 transfers. Mean Injury Severity Score was 11 (interquartile range, 9.0) for scene patients and 13 (interquartile range, 6.5) for transfers. Initial emergency medical services SI was the most significant predictor of blood transfusion and intensive care unit care in both scene and transferred patients (p < 0.0001), compared with trauma center arrival SI or transferring center SI. A negative ∆SI was significantly associated with the need for transfusion and the number of units transfused. Longer transport time also had a significant relationship with increasing intensive care unit length of stay. Cohorts were analyzed separately. CONCLUSION: Providers must maintain a high level of clinical suspicion for patients with an initially elevated SI. Emergency medical services SI was the greatest predictor of injury and need for resources. En route SI and ∆SI were less predictive as time from injury increased. This highlights the improvements in en route care but does not eliminate the need for high-level trauma intervention. LEVEL OF EVIDENCE: Therapeutic/care management, level IV.
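For context, shock index is conventionally defined as heart rate divided by systolic blood pressure, and ∆SI as the change between two measurements (e.g., field and trauma-center arrival). The sketch below uses these standard definitions; the abstract does not state the study's exact measurement points, and the threshold mentioned in the comment is a commonly cited rule of thumb, not a value from this paper.

```python
def shock_index(heart_rate, systolic_bp):
    """Shock index: heart rate (beats/min) divided by systolic blood
    pressure (mmHg). Values above roughly 0.9 are commonly treated as
    concerning in adult trauma patients."""
    if systolic_bp <= 0:
        raise ValueError("systolic blood pressure must be positive")
    return heart_rate / systolic_bp

def delta_shock_index(si_initial, si_later):
    """Delta SI: change between two measurements, e.g. initial EMS
    assessment and trauma-center arrival. A negative value means the
    index fell between the two measurements."""
    return si_later - si_initial
```

For example, a patient with heart rate 110 and systolic pressure 100 mmHg has SI = 1.1; if SI at arrival is 0.9, the ∆SI is negative.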


Subject(s)
Blood Component Transfusion/statistics & numerical data , Emergency Medical Services , Shock/classification , Shock/mortality , Thoracic Injuries/therapy , Wounds, Nonpenetrating/therapy , Critical Care/statistics & numerical data , Female , Humans , Injury Severity Score , Male , Predictive Value of Tests , Retrospective Studies , Time-to-Treatment , Trauma Centers , United States
4.
Appl Clin Inform ; 12(1): 10-16, 2021 01.
Article in English | MEDLINE | ID: mdl-33406541

ABSTRACT

BACKGROUND: The United States, and especially West Virginia, carries a tremendous burden of coronary artery disease (CAD). Undiagnosed familial hypercholesterolemia (FH) is an important factor for CAD in the U.S. Identification of a CAD phenotype is an initial step to find families with FH. OBJECTIVE: We hypothesized that a CAD phenotype detection algorithm that uses discrete data elements from electronic health records (EHRs) can be validated from EHR information housed in a data repository. METHODS: We developed an algorithm to detect a CAD phenotype by searching through discrete data elements, such as diagnosis, problem lists, medical history, billing, and procedure (International Classification of Diseases [ICD]-9/10 and Current Procedural Terminology [CPT]) codes. The algorithm was applied to two cohorts of 500 patients each, with varying characteristics; the second (younger) cohort consisted of parents from a school child screening program. We then determined which patients had CAD by systematic, blinded review of EHRs. Following this, we revised the algorithm by refining the acceptable diagnoses and procedures, ran the revised algorithm on the same cohorts, and determined the accuracy of the modification. RESULTS: CAD phenotype Algorithm I was 89.6% accurate, 94.6% sensitive, and 85.6% specific for group 1. The revised algorithm (CAD Algorithm II), applied to the same groups, achieved 98.2% sensitivity, 87.8% specificity, and 92.4% accuracy for group 1, and 93% accuracy for group 2; the group 1 F1 score was 92.4%. Specific ICD-10 and CPT codes such as "coronary angiography through a vein graft" were more useful than generic terms. CONCLUSION: We have created an algorithm, CAD Algorithm II, that detects CAD on a large scale with high accuracy and sensitivity (recall), and it has proven useful among varied patient populations. Use of this algorithm can extend to monitoring a registry of patients in an EHR and/or identifying groups such as those with likely FH.
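The core mechanism of a code-based phenotype algorithm is a match of a patient's recorded codes against curated code sets. The sketch below illustrates the pattern only: the code lists are small illustrative examples (I25.x and Z95.x are real ICD-10 CAD-related families, 92920/93454 real CPT codes), not the study's validated sets, and the record layout is hypothetical.

```python
# Illustrative code sets; a real phenotype definition uses curated,
# validated lists refined against chart review (as in the study).
CAD_ICD10_PREFIXES = ("I25.1", "I25.2", "Z95.1", "Z95.5")
CAD_CPT_CODES = {"92920", "92928", "93454"}

def has_cad_phenotype(record):
    """Flag a patient record when any diagnosis code starts with a CAD
    ICD-10 prefix, or any procedure code is in the CAD CPT set."""
    dx_hit = any(code.startswith(CAD_ICD10_PREFIXES)
                 for code in record.get("diagnoses", []))
    px_hit = any(code in CAD_CPT_CODES
                 for code in record.get("procedures", []))
    return dx_hit or px_hit
```

Refining the algorithm, as the authors did between Algorithm I and II, amounts to tightening or expanding these code sets and re-measuring sensitivity and specificity against the chart-review gold standard.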


Subject(s)
Coronary Artery Disease , Coronary Artery Disease/diagnostic imaging , Electronic Health Records , Hospitals , Humans , International Classification of Diseases
5.
Brief Bioinform ; 22(2): 1767-1781, 2021 03 22.
Article in English | MEDLINE | ID: mdl-32363395

ABSTRACT

Modern machine learning techniques (such as deep learning) offer immense opportunities in the field of human biological aging research. Aging is a complex process, experienced by all living organisms. While traditional machine learning and data mining approaches remain popular in aging research, they typically need feature engineering or feature extraction for robust performance, and explicit feature engineering represents a major challenge because it requires significant domain knowledge. The latest advances in deep learning provide a paradigm shift in eliciting meaningful knowledge from complex data without explicit feature engineering. In this article, we review the recent literature on applying deep learning to biological age estimation. We consider the data modalities that have been used to study aging and the deep learning architectures that have been applied. We identify four broad classes of measures to quantify the performance of biological age estimation algorithms and, based on these, evaluate the current approaches. The paper concludes with a brief discussion of possible future directions in biological aging research using deep learning. This line of work has significant potential to improve our understanding of the health status of individuals, for instance based on their physical activities, blood samples, and body shapes. Thus, the results could have implications in different health care settings, from palliative care to public health.


Subject(s)
Aging/physiology , Deep Learning , Anthropometry , Biomarkers/metabolism , Computational Biology/methods , Electronic Health Records , Epigenesis, Genetic , Exercise , Humans , Neural Networks, Computer
6.
PLoS One ; 12(1): e0166749, 2017.
Article in English | MEDLINE | ID: mdl-28045895

ABSTRACT

We present a virtual reality (VR) framework for the analysis of whole body surface area. The usual methods for determining the whole body surface area (WBSA) are based on well-known formulae and are characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, varied age and gender). However, synthetic data might differ from a real scenario, typical of the physician's clinic. For this reason we developed a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject stands in front of a camera or assumes a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area. In particular, we show that we can obtain accurate WBSA estimations with just one view, virtually enabling the use of inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single-view 3D map.
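The "well known formulae" the abstract refers to are height-and-weight regressions; the classic Du Bois & Du Bois (1916) formula is the most widely used and is shown below as a concrete example of the baseline such vision-based approaches aim to improve on. (The abstract does not name which formulae the authors compared against.)

```python
def du_bois_bsa(weight_kg, height_cm):
    """Du Bois & Du Bois (1916) body surface area estimate:
    BSA (m^2) = 0.007184 * weight(kg)^0.425 * height(cm)^0.725.
    Such regressions fit average builds but drift for obese,
    underweight, or pediatric subjects, motivating 3D-scan methods."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)
```

For a 70 kg, 180 cm adult this yields roughly 1.9 m², a typical adult value.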


Subject(s)
Body Surface Area , Imaging, Three-Dimensional/methods , User-Computer Interface , Adolescent , Adult , Algorithms , Computer Simulation , Female , Health Status , Humans , Male , Middle Aged , Models, Statistical , Posture , Regression Analysis , Young Adult
7.
Audiol Res ; 6(1): 137, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27588160

ABSTRACT

The high-frequency region of vowel signals (above the third formant or F3) has received little research attention. Recent evidence, however, has documented the perceptual utility of high-frequency information in the speech signal above the traditional frequency bandwidth known to contain important cues for speech and speaker recognition. The purpose of this study was to determine if high-pass filtered vowels could be separated by vowel category and speaker type in a supervised learning framework. Mel frequency cepstral coefficients (MFCCs) were extracted from productions of six vowel categories produced by two male, two female, and two child speakers. Results revealed that the filtered vowels were well separated by vowel category and speaker type using MFCCs from the high-frequency spectrum. This demonstrates the presence of useful information for automated classification from the high-frequency region and is the first study to report findings of this nature in a supervised learning framework.
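As a toy illustration of attenuating low-frequency energy before feature extraction, the snippet below applies a first-order high-pass (pre-emphasis) filter. This is a crude stand-in for the sharp high-pass filtering above F3 used in the study, not the authors' filter, and MFCC extraction and the supervised classifier are omitted.

```python
def high_pass(signal, alpha=0.97):
    """First-order high-pass (pre-emphasis) filter:
    y[n] = x[n] - alpha * x[n-1].
    Suppresses low-frequency energy so that spectral features computed
    afterwards (e.g. MFCCs) emphasize the high-frequency region."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]
```

With alpha = 1.0 a constant (DC) signal is zeroed out entirely after the first sample, which is the defining property of a high-pass response.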

8.
Methods Mol Biol ; 1427: 277-90, 2016.
Article in English | MEDLINE | ID: mdl-27259933

ABSTRACT

Connectomics-the study of how neurons wire together in the brain-is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes.


Subject(s)
Image Processing, Computer-Assisted/methods , Neurons/ultrastructure , Pattern Recognition, Automated/methods , Automation, Laboratory , Cell Body/ultrastructure , Connectome , Humans , Imaging, Three-Dimensional/methods , Microscopy, Electron , Models, Neurological , Software
9.
Biomed Eng Online ; 14: 112, 2015 Dec 02.
Article in English | MEDLINE | ID: mdl-26626555

ABSTRACT

BACKGROUND: Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis based on the Microsoft Kinect. METHODS: A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach that couples a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study is conducted on healthy children between 2 and 4 years of age to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. RESULTS: The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. CONCLUSION: This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.
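The template-matching step at the heart of the segmentation can be illustrated with fixed-scale normalized cross-correlation. This is a simplified stand-in for the paper's probabilistic matching with global and local temporal scaling: the function name and the single-template, single-scale setup are assumptions for demonstration.

```python
import math

def best_template_match(signal, template):
    """Slide a stride template over a 1-D joint-trajectory signal and
    return the offset with the highest zero-mean normalized correlation,
    i.e. the most likely start of a stride cycle."""
    m = len(template)
    t_mean = sum(template) / m
    t0 = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(v * v for v in t0)) or 1.0
    best_off, best_score = 0, float("-inf")
    for off in range(len(signal) - m + 1):
        window = signal[off:off + m]
        w_mean = sum(window) / m
        w0 = [v - w_mean for v in window]
        w_norm = math.sqrt(sum(v * v for v in w0)) or 1.0
        score = sum(a * b for a, b in zip(w0, t0)) / (w_norm * t_norm)
        if score > best_score:
            best_off, best_score = off, score
    return best_off, best_score
```

A full pipeline would repeat this over the sequence, warping the template in time to handle stride-to-stride speed variation, which is what the paper's temporal-scaling estimation addresses.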


Subject(s)
Computers , Gait , Automation , Child, Preschool , Female , Foot/physiology , Humans , Joints/physiology , Male , Signal Processing, Computer-Assisted , Video Games
10.
IEEE Trans Pattern Anal Mach Intell ; 28(12): 2006-19, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17108373

ABSTRACT

We propose a model of the joint variation of shape and appearance of portions of an image sequence. The model is conditionally linear and can be thought of as an extension of active appearance models that exploits the temporal correlation of adjacent image frames. Inference of the model parameters can be performed efficiently using established numerical optimization methods borrowed from finite-element analysis and system identification.


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Pattern Recognition, Automated/methods , Computer Simulation , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity