1.
Article in English | MEDLINE | ID: mdl-38052485

ABSTRACT

OBJECTIVE: To predict ALS progression with varying observation and prediction window lengths, using machine learning (ML). METHODS: We used demographic, clinical, and laboratory parameters from 5030 patients in the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database to model ALS disease progression as fast (at least 1.5 points of decline on the ALS Functional Rating Scale-Revised (ALSFRS-R) per month) or non-fast, using Extreme Gradient Boosting (XGBoost) and Bayesian Long Short-Term Memory (BLSTM). XGBoost identified predictors of progression, while BLSTM provided a confidence level for each prediction. RESULTS: ML models achieved an area under the receiver operating characteristic curve (AUROC) of 0.570-0.748 and were non-inferior to clinician assessments. Performance was similar across observation lengths of a single visit, 3, 6, or 12 months and on a holdout validation dataset, but improved with longer prediction windows. Twenty-one important predictors were identified, the top three being days since disease onset, past ALSFRS-R, and forced vital capacity. Nonstandard predictors included phosphorus, chloride, and albumin. BLSTM demonstrated higher performance on the samples about which it was most confident. Patient screening by the models could reduce hypothetical Phase II/III clinical trial sizes by 18.3%. CONCLUSION: Similar accuracies across ML models using different observation lengths suggest that a clinical trial observation period could be shortened to a single visit and clinical trial sizes reduced. The confidence levels provided by BLSTM gave additional information on the trustworthiness of predictions, which could aid decision-making. The identified predictors of ALS progression are potential biomarkers and therapeutic targets for further research.


Subject(s)
Amyotrophic Lateral Sclerosis , Humans , Bayes Theorem , Disease Progression , Machine Learning , Databases, Factual
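The fast/non-fast labelling rule described in the first abstract (at least 1.5 ALSFRS-R points of decline per month) can be sketched as follows. This is a hypothetical helper for illustration only, not the study's code; the function name and the two-visit slope estimate are assumptions.

```python
def label_progression(scores, months):
    """Label a patient 'fast' if ALSFRS-R declines by at least
    1.5 points per month over the observation window.
    `scores` and `months` are parallel per-visit lists."""
    if len(scores) < 2 or months[-1] == months[0]:
        raise ValueError("need at least two visits at distinct times")
    # crude slope from first to last visit (points per month)
    slope = (scores[-1] - scores[0]) / (months[-1] - months[0])
    return "fast" if slope <= -1.5 else "non-fast"

print(label_progression([48, 42], [0, 3]))  # 2 points/month decline
print(label_progression([48, 46], [0, 3]))  # well under the threshold
```

In the study itself this binary label is the target that XGBoost and BLSTM are trained to predict from demographic, clinical, and laboratory inputs.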
2.
Article in English | MEDLINE | ID: mdl-37794802

ABSTRACT

BACKGROUND AND OBJECTIVES: Decrease in the revised ALS Functional Rating Scale (ALSFRS-R) score is currently the most widely used measure of disease progression. However, it does not sufficiently encompass the heterogeneity of ALS. We describe a measure of variability in ALSFRS-R scores and demonstrate its utility in disease characterization. METHODS: We used 5030 ALS clinical trial patients from the Pooled Resource Open-Access ALS Clinical Trials database to calculate variability in disease progression using a novel measure, and correlated variability with disease span. We characterized the more and less variable populations and designed a machine learning model that used clinical, laboratory, and demographic data to predict class of variability. The model was validated with a holdout clinical trial dataset of 84 ALS patients (NCT00818389). RESULTS: Greater variability in disease progression was indicative of longer disease span at the patient level. The machine learning model predicted class of variability with an accuracy of 60.1-72.7% across different time periods and yielded a set of predictors based on clinical, laboratory, and demographic data. A reduced set of 16 predictors and the holdout dataset yielded similar accuracy. DISCUSSION: This measure of variability is a significant determinant of disease span for fast-progressing patients. The predictors identified may shed light on the pathophysiology of variability, with greater variability in fast-progressing patients possibly indicative of greater compensatory reinnervation and longer disease span. Increasing variability alongside decreasing rate of disease progression could be a future aim of trials for faster-progressing patients.


Subject(s)
Amyotrophic Lateral Sclerosis , Humans , Amyotrophic Lateral Sclerosis/diagnosis , Disease Progression
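The second abstract's "novel measure" of variability is not specified in the citation, but one natural stand-in is the deviation of a patient's observed ALSFRS-R scores from their own linear trend. The sketch below is an assumption for illustration, not the paper's definition.

```python
def progression_variability(times, scores):
    """Hypothetical variability measure: root-mean-square deviation
    of observed ALSFRS-R scores from the patient's own least-squares
    linear trend (all pure Python, no dependencies)."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    # ordinary least-squares slope and intercept
    slope = sum((t - mt) * (s - ms) for t, s in zip(times, scores)) \
            / sum((t - mt) ** 2 for t in times)
    intercept = ms - slope * mt
    resid = [s - (intercept + slope * t) for t, s in zip(times, scores)]
    return (sum(r * r for r in resid) / n) ** 0.5

smooth = progression_variability([0, 1, 2, 3], [48, 46, 44, 42])  # linear decline
noisy = progression_variability([0, 1, 2, 3], [48, 43, 46, 40])   # erratic decline
print(smooth, noisy)
```

Under this definition the two patients above have the same overall rate of decline, but the second scores much higher on variability, which is the kind of distinction the paper's measure is meant to capture.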
3.
Neural Netw ; 161: 449-465, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36805261

ABSTRACT

This paper takes a parallel learning approach to continual learning scenarios. We define parallel continual learning as learning a sequence of tasks where the data for previous tasks, whose distribution may have shifted over time, remain available while learning new tasks. We propose a parallel continual learning method that assigns a subnetwork to each task and simultaneously trains each subnetwork only on its corresponding task; in doing so, some parts of the network are shared across multiple tasks. This is unlike the existing continual learning literature, which aims to learn incoming tasks sequentially under the assumption that the data for previous tasks have a fixed distribution. Our proposed method offers promise in: (1) transparency in the network and in the relationships across tasks, by enabling examination of the representations learned by independent and shared subnetworks; (2) representation generalizability, through sharing and training subnetworks on multiple tasks simultaneously. Our analysis shows that, compared to many competing approaches such as continual learning, neural architecture search, and multi-task learning, parallel continual learning learns more generalizable representations. Also, (3) parallel continual learning overcomes catastrophic forgetting, a common issue in continual learning algorithms. This is the first effort to train a neural network on multiple tasks and input domains simultaneously in a continual learning scenario. Our code is available at https://github.com/yours-anonym/PaRT.


Subject(s)
Algorithms , Neural Networks, Computer
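The core idea of the third abstract, assigning each task a subnetwork so that overlapping parameters are shared, can be illustrated with a toy mask-assignment sketch. This is not the PaRT implementation; the function name, random-subset assignment, and the 50% fraction are assumptions for illustration.

```python
import random

def assign_subnetworks(n_params, n_tasks, frac=0.5, seed=0):
    """Toy sketch of parallel continual learning's partitioning:
    each task gets a random subset (mask) of parameter indices.
    Parameters selected by several tasks are shared and would be
    trained on all of those tasks simultaneously."""
    rng = random.Random(seed)
    return [set(rng.sample(range(n_params), int(frac * n_params)))
            for _ in range(n_tasks)]

masks = assign_subnetworks(n_params=10, n_tasks=2)
shared = masks[0] & masks[1]           # parameters trained on both tasks
private = [m - shared for m in masks]  # task-specific parameters
print(len(shared), [len(p) for p in private])
```

Examining what the shared indices learn versus the private ones is what the paper frames as transparency into the relationships across tasks.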
4.
IEEE Trans Cybern ; PP, 2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35749333

ABSTRACT

This article presents a new approach for providing an interpretation of a spiking neural network classifier by transforming it into a multiclass additive model. The spiking classifier is a multiclass synaptic efficacy function-based leaky-integrate-and-fire neuron (Mc-SEFRON) classifier. As a first step, the SEFRON classifier for binary classification is extended to handle multiclass classification problems. Next, a new method is presented to transform the temporally distributed weights in a fully trained Mc-SEFRON classifier into shape functions in the feature space. A composite of these shape functions results in an interpretable classifier, namely, a directly interpretable multiclass additive model (DIMA). The interpretations of DIMA are demonstrated using the multiclass Iris dataset. Further, the performances of both the Mc-SEFRON and DIMA classifiers are evaluated on ten benchmark datasets from the UCI machine learning repository and compared with other state-of-the-art spiking neural classifiers. The performance study shows that Mc-SEFRON performs similarly to or better than other spiking neural classifiers, with the added benefit of interpretability through DIMA. Furthermore, the minor differences in accuracy between Mc-SEFRON and DIMA indicate the reliability of the DIMA classifier. Finally, Mc-SEFRON and DIMA are tested on three real-world credit scoring problems, and their performances are compared with state-of-the-art results from machine learning methods. The results clearly indicate that DIMA improves classification accuracy by up to 12% over other interpretable classifiers, indicating better-quality interpretations on the highly imbalanced credit scoring datasets.
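The defining property of an additive model like DIMA is that a class score is a sum of per-feature shape-function contributions, so each feature's effect can be read off directly. The sketch below illustrates that structure; the function name, the linear shapes, and their coefficients are invented for illustration, not learned from the Iris data.

```python
def additive_score(x, shape_functions, bias=0.0):
    """Score one class of an additive model: sum each feature's
    shape-function contribution. Returning the per-feature parts
    is what makes the prediction directly interpretable."""
    contributions = {name: f(x[name]) for name, f in shape_functions.items()}
    return bias + sum(contributions.values()), contributions

shapes = {
    "petal_length": lambda v: 0.8 * v - 2.0,  # illustrative shapes,
    "petal_width":  lambda v: 1.5 * v - 0.5,  # not trained weights
}
score, parts = additive_score({"petal_length": 4.0, "petal_width": 1.3}, shapes)
print(round(score, 2), parts)
```

In DIMA the shape functions are derived from the temporally distributed weights of the trained Mc-SEFRON classifier rather than specified by hand, and one such score is computed per class, with the largest score winning.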

5.
AMIA Annu Symp Proc ; 2020: 432-441, 2020.
Article in English | MEDLINE | ID: mdl-33936416

ABSTRACT

Heart failure (HF) is a leading cause of hospital readmissions. There is great interest in approaches to efficiently predict emerging HF readmissions in the community setting. We investigate the possibility of leveraging streaming telemonitored vital-signs data alongside readily accessible patient profile information to predict evolving 30-day HF-related readmission risk. We acquired data within a non-randomized controlled study that enrolled 150 HF patients in a 1-year post-discharge telemonitoring and telesupport programme. Using the sequential data and associated ground-truth readmission outcomes, we developed a recurrent neural network model for dynamic risk prediction. The model detects emerging readmissions with sensitivity > 71%, specificity > 75%, and AUROC ~80%. We characterize model performance in relation to telesupport-based nurse assessments and demonstrate strong sensitivity improvements. Our approach enables early stratification of high-risk patients and could enable adaptive targeting of care resources toward the patients with the most urgent needs at any given time.


Subject(s)
Heart Failure/diagnosis , Patient Readmission , Telemedicine , Vital Signs , Aftercare , Aged , Humans , Male , Middle Aged , Patient Discharge , Predictive Value of Tests , Research Design
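The dynamic-risk idea in the last abstract, where each new telemonitored reading updates a hidden state from which a running risk is read out, can be sketched with a minimal single-unit recurrent cell. The weights and the input signal here are invented for illustration; the study's trained network is not reproduced.

```python
import math

def rnn_risk(sequence, w_in=0.9, w_rec=0.5, w_out=1.2, bias=-0.4):
    """Minimal single-unit recurrent sketch: each reading updates
    a tanh hidden state, and the running 30-day readmission risk
    is a sigmoid of that state (illustrative parameters)."""
    h = 0.0
    risks = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h + bias)   # state update
        risks.append(1 / (1 + math.exp(-w_out * h)))  # risk readout
    return risks

# e.g. a normalized daily vital-sign signal drifting upward
print([round(r, 2) for r in rnn_risk([0.0, 0.1, 0.4, 0.8])])
```

Because the state carries history forward, the risk estimate at each day reflects the whole sequence so far rather than a single reading, which is what allows the model to flag emerging readmissions early.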