Results 1 - 20 of 744
1.
Heliyon ; 10(12): e32709, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975148

ABSTRACT

Background: Machine learning has been shown to be an effective method for early prediction of and intervention in gestational diabetes mellitus (GDM), which greatly decreases GDM incidence, reduces maternal and infant complications, and improves the prognosis. However, there is still much room for improvement in data quality, feature dimension, and accuracy. Explanations of how, and through what mechanisms, clinical data from different pregnancy stages contribute to prediction accuracy are still lacking. More importantly, current models still face notable obstacles in practical applications due to complex and diverse input features and difficulties in redeployment. As a result, a simple and practical yet sufficiently accurate model is urgently needed. Design and methods: In this study, 2309 samples from two public hospitals in Shenzhen, China were collected for analysis. Different algorithms were systematically compared to build a robust, stepwise prediction system (levels A to C) based on advanced machine learning, and the models at each level were interpreted. Results: XGBoost reported the best performance, with ACC of 0.922, 0.859, and 0.850 and AUC of 0.974, 0.924, and 0.913 for the selected level A to C models in the test set, respectively. Tree-based feature importance and the SHAP method successfully identified the commonly recognized risk factors while indicating new, inconsistent impact trends for GDM in different stages of pregnancy. Conclusion: A stepwise prediction system was successfully established. A practical tool that enables quick prediction of GDM was released at https://github.com/ifyoungnet/MedGDM. This study is expected to provide a more detailed profiling of GDM risk and lay the foundation for the application of the model in practice.
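The entry's train/evaluate/rank-features workflow can be sketched as follows. This is an illustrative stand-in, not the authors' code: sklearn's GradientBoostingClassifier replaces XGBoost, the data are synthetic rather than the Shenzhen cohort, and impurity-based importances replace SHAP.

```python
# Hedged sketch of a gradient-boosting risk model with test-set ACC/AUC
# and a feature-importance ranking (all names and settings illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
ranking = np.argsort(clf.feature_importances_)[::-1]  # most important first
print(f"ACC={acc:.3f}  AUC={auc:.3f}  top features: {ranking[:3]}")
```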

2.
Epilepsy Res ; 205: 107397, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38976953

ABSTRACT

BACKGROUND: Epilepsy is a serious complication of ischemic stroke. Although two studies have developed prediction models for post-stroke epilepsy (PSE), their accuracy remains insufficient, and their applicability to different populations is uncertain. With the rapid advancement of computer technology, machine learning (ML) offers new opportunities for creating more accurate prediction models. However, the potential of ML in predicting PSE is still not well understood. The purpose of this study was to develop prediction models for PSE among ischemic stroke patients. METHODS: Patients with ischemic stroke from two stroke centers were included in this retrospective cohort study. At baseline, 33 input variables were considered candidate features. The 2-year PSE prediction models in the derivation cohort were built using six ML algorithms. The predictive performance of these models was appraised and compared with a reference model that used conventional triage classification information. The Shapley additive explanation (SHAP) method, based on fair profit allocation among many stakeholders according to their contributions, was used to interpret the predicted outcomes of the naive Bayes (NB) model. RESULTS: A total of 1977 patients were included to build the predictive model for PSE. The Boruta method identified NIHSS score, hospital length of stay, D-dimer level, and cortical involvement as the optimal features, with areas under the receiver operating characteristic curve ranging from 0.709 to 0.849. An additional 870 patients were used to validate the ML and reference models. The NB model achieved the best performance among the PSE prediction models, with an area under the receiver operating characteristic curve of 0.757. At the 20 % absolute risk threshold, the NB model also provided a sensitivity of 0.739 and a specificity of 0.720. The reference model had a poor sensitivity of only 0.15 despite achieving a helpful AUC of 0.732.
Furthermore, the SHAP analysis demonstrated that a higher NIHSS score, longer hospital length of stay, higher D-dimer level, and cortical involvement were positive predictors of epilepsy after ischemic stroke. CONCLUSIONS: Our study confirmed the feasibility of applying ML methods to easy-to-obtain variables for accurate prediction of PSE and provided improved strategies and effective resource allocation for high-risk patients. In addition, the SHAP method could improve model transparency and make it easier for clinicians to grasp the prediction models' reliability.
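The paper's evaluation of a naive Bayes model at a fixed 20 % absolute-risk cutoff can be sketched like this. The data here are synthetic, not the stroke cohorts, so the numbers are purely illustrative.

```python
# Sketch: threshold a naive Bayes risk estimate at a 20 % absolute-risk
# cutoff and compute sensitivity/specificity (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1500, weights=[0.8], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1, stratify=y)
risk = GaussianNB().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

pred = (risk >= 0.20).astype(int)            # 20 % absolute risk threshold
tp = np.sum((pred == 1) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0))
sensitivity = tp / np.sum(y_te == 1)
specificity = tn / np.sum(y_te == 0)
print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```

Lowering the cutoff trades specificity for sensitivity, which is why a fixed absolute-risk threshold must be reported alongside the AUC.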

3.
Molecules ; 29(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38930822

ABSTRACT

The investigation of cycloaddition reactions involving acridine-based dipolarophiles revealed distinct regioselectivity patterns influenced mainly by the electronic factor. Specifically, the reactions of methyl-(2E)-3-(acridin-4-yl)-prop-2-enoate and 4-[(1E)-2-phenylethenyl]acridine with unstable benzonitrile N-oxides were studied. For methyl-(2E)-3-(acridin-4-yl)-prop-2-enoate, the formation of two regioisomers favoured the 5-(acridin-4-yl)-4,5-dihydro-1,2-oxazole-4-carboxylates, with remarkable exclusivity in the case of 4-methoxybenzonitrile oxide. Conversely, 4-[(1E)-2-phenylethenyl]acridine displayed reversed regioselectivity, favouring products 4-[3-(substituted phenyl)-5-phenyl-4,5-dihydro-1,2-oxazol-4-yl]acridine. Subsequent hydrolysis of isolated methyl 5-(acridin-4-yl)-3-phenyl-4,5-dihydro-1,2-oxazole-4-carboxylates resulted in the production of carboxylic acids, with nearly complete conversion. During NMR measurements of carboxylic acids in CDCl3, decarboxylation was observed, indicating the formation of a new prochiral carbon centre C-4, further confirmed by a noticeable colour change. Overall, this investigation provides valuable insights into regioselectivity in cycloaddition reactions and subsequent transformations, suggesting potential applications across diverse scientific domains.

4.
Br J Psychol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858823

ABSTRACT

Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we used eye-tracking to examine human participants' attention strategies when classifying images and when explaining how they classified them, and compared these strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Clustering identified two representative explanation strategies: one involved focused visual scanning of foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated as more effective for early category learning. Interestingly, XAI saliency-map explanations were most similar to the explorative attention strategy in humans, and explanations highlighting discriminative features identified through perturbation-based observable causality were more similar to human strategies than those highlighting internal features associated with higher class scores. Thus, humans use both visual and conceptual information during explanation, and these serve different purposes; XAI methods that highlight features informing observable causality match human explanations better and are potentially more accessible to users.
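The perturbation-based explanations the study compares with human attention can be illustrated with a minimal occlusion sketch: mask each image region and record the drop in the model's class score. The "model" below is a toy weight-map sum standing in for a real classifier, so the whole example is an assumption-laden illustration, not the study's method.

```python
# Minimal occlusion-saliency sketch: saliency(i, j) = score drop when
# pixel (i, j) is masked. A weight map stands in for a real classifier.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((8, 8))
weights[2:5, 2:5] = 1.0            # the "discriminative" region
image = rng.random((8, 8))

def class_score(img):
    # toy stand-in for a classifier's class score
    return float((img * weights).sum())

base = class_score(image)
saliency = np.zeros_like(image)
for i in range(8):
    for j in range(8):
        occluded = image.copy()
        occluded[i, j] = 0.0                           # perturb one "pixel"
        saliency[i, j] = base - class_score(occluded)  # drop in score

peak = np.unravel_index(saliency.argmax(), saliency.shape)
print(f"most salient pixel: {peak}")
```

The peak lands inside the discriminative region, which is the sense in which occlusion maps highlight features with observable causal influence on the score.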

5.
Evol Anthropol ; : e22037, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38859704

ABSTRACT

Smith and Smith and Wood proposed that the human fossil record offers special challenges for causal hypotheses because "unique" adaptations resist the comparative method. We challenge their notions of "uniqueness" and offer a refutation of the idea that there is something epistemologically special about human prehistoric data. Although paleontological data may be sparse, there is nothing inherent about this information that prevents its use in the inductive or deductive process, nor in the generation and testing of scientific hypotheses. The imprecision of the fossil record is well-understood, and such imprecision is often factored into hypotheses and methods. While we acknowledge some oversteps within the discipline, we also note that the history of paleoanthropology is clearly one of progress, with ideas tested and resolution added as data (fossils) are uncovered and new technologies applied, much like in sciences as diverse as astronomy, molecular genetics, and geology.

6.
Biomolecules ; 14(6)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38927111

ABSTRACT

At the end of 2023, the Whole Mouse Brain Atlas was announced, revealing that there are about 5300 molecularly defined neuronal types in the mouse brain. We ask whether brain models exist that contemplate how this is possible. The conventional columnar model, implicitly used by the authors of the Atlas, is incapable of doing so with only 20 brain columns (5 brain vesicles with 4 columns each). We argue that the definition of some 1250 distinct progenitor microzones, each producing at least 4-5 neuronal types over time, may be sufficient. Presently, this is nearly achieved by the prosomeric model amplified by the secondary dorsoventral and anteroposterior microzonation of progenitor areas, plus the clonal variation in cell types produced, on average, by each of them.
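The abstract's counting argument is simple arithmetic and can be checked directly: 5 vesicles with 4 columns each give only 20 units, whereas roughly 1250 microzones each producing 4-5 neuronal types suffice for the ~5300 reported types.

```python
# Back-of-envelope check of the abstract's counting argument.
columnar_units = 5 * 4                 # 5 brain vesicles x 4 columns
types_per_microzone = 5300 / 1250      # reported types / proposed microzones
print(columnar_units, round(types_per_microzone, 2))
```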


Subject(s)
Brain , Neurons , Animals , Mice , Neurons/metabolism , Brain/metabolism
7.
Evol Anthropol ; : e22041, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38944755

ABSTRACT

Smith and Wood reply to Villmoare and Kimbel regarding the scientific credibility of problems in paleoanthropology that require causal explanations for unique historical events.

8.
Cognition ; 250: 105860, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38941763

ABSTRACT

Why were women given the right to vote? "Because it is morally wrong to deny women the right to vote." This explanation does not seem to fit the typical pattern for explaining an event: rather than citing a cause, it appeals to an ethical claim. Do people judge ethical claims to be genuinely explanatory? And if so, why? In Studies 1 (N = 220) and 2 (N = 293), we find that many participants accept ethical explanations for social change and that this is predicted by their meta-ethical beliefs in moral progress and moral principles, suggesting that these participants treat morality as a directional feature of the world, somewhat akin to a causal force. In Studies 3 (N = 513) and 4 (N = 328), we find that participants recognize this relationship between ethical explanations and meta-ethical commitments, using the former to make inferences about individuals' beliefs in moral progress and moral principles. Together these studies demonstrate that our beliefs about the nature of morality shape our judgments of explanations and that explanations shape our inferences about others' moral commitments.


Subject(s)
Judgment , Morals , Social Change , Social Perception , Humans , Female , Adult , Male , Young Adult , Middle Aged , Adolescent
9.
Cognition ; 250: 105854, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38941764

ABSTRACT

People relish thinking about coincidences-we puzzle over their meanings and delight in conveying our experiences of them to others. But whereas some research has begun to explore how coincidences are represented by adults, little is known about the early development of these representations. Here we explored factors influencing coincidence representations in both adults and children. Across two experiments, participants read stories describing co-occurring events and then judged whether these constituted coincidences. In Experiment 1 we found that adults' coincidence judgments were highly sensitive to the presence or absence of plausible explanations: as expected, adults were more likely to judge co-occurrences as a coincidence when no explanation was available. Importantly, their coincidence judgments were also modulated by the number of events that co-occurred. Adults tended to reject scenarios involving too many co-occurring events as coincidences regardless of whether an explanation was present, suggesting that observing suspiciously many co-occurrences triggered them to infer their own underlying explanation (and thus blocking the events' interpretation as a coincidence). In Experiment 2 we found that 4- to 10-year-old children also represent coincidences, and identify them via the absence of plausible explanations. Older children, like adults, rejected suspiciously large numbers of co-occurring events as coincidental, whereas younger children did not exhibit this sensitivity. Overall, these results suggest that representation of coincidence is available from early in life, but undergoes developmental change during the early school-age years.


Subject(s)
Child Development , Humans , Child , Child, Preschool , Adult , Female , Male , Child Development/physiology , Young Adult , Judgment/physiology , Adolescent , Age Factors , Concept Formation/physiology
10.
Article in English | MEDLINE | ID: mdl-38787456

ABSTRACT

INTRODUCTION: Knee osteotomies are effective procedures for treating different deformities and redistributing load at the joint level, reducing the risk of wear and, consequently, the need for invasive procedures. In particular, knee osteotomies are effective in treating early arthritis related to knee deformities in young and active patients with high functional demands, with excellent long-term results. Precise mathematical calculations are imperative during the preoperative phase to achieve tailored and accurate corrections for each patient and avoid complications, but those formulas are sometimes challenging to comprehend and apply. METHODS: Four specific questions regarding controversial topics (planning methods, patellar height, tibial slope, and limb length variation) were formulated. An electronic search was performed on PubMed and the Cochrane Library to find articles containing detailed mathematical or trigonometric explanations. A team of orthopedic surgeons and an engineer summarized the available literature and mathematical rules, with a final clear mathematical explanation given by the engineer. Wherever an explanation was not available in the literature, it was postulated by the same engineer. RESULTS: After the exclusion process, five studies were analyzed. For three questions, no studies were found that provided mathematical analyses or explanations. Through independent calculations, it was demonstrated why Dugdale's method underestimates the correction angle compared to Miniaci's method, and it was shown that the variation in patellar height after osteotomy can be predicted using simple formulas. The five included studies examine postoperative variations in limb length and tibial slope, providing formulas applicable in preoperative planning. New formulas were independently computed, using the planned correction angle and preoperatively obtained measurements to predict the studied variations.
CONCLUSIONS: There is a strict connection among surgery, planning, and mathematical formulas in knee osteotomies. The aim of this study was to analyze the current literature and provide mathematical and trigonometric explanations for important controversial topics in knee osteotomies. Simple and easily applicable formulas are provided to enhance the accuracy and outcomes of this surgical procedure.
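The kind of trigonometry involved can be sketched with a generic hinge-angle calculation: the correction angle is the angle at the osteotomy hinge between the current and the desired limb axes (the Miniaci-style construction). The coordinates below are made up for illustration and carry no clinical meaning.

```python
# Illustrative hinge-angle computation: the correction angle is the angle
# at the hinge between rays to the current and the target distal points.
import math

def angle_at_hinge(hinge, p_from, p_to):
    """Angle in degrees at `hinge` between the rays to p_from and p_to."""
    v1 = (p_from[0] - hinge[0], p_from[1] - hinge[1])
    v2 = (p_to[0] - hinge[0], p_to[1] - hinge[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_a))

hinge = (0.0, 0.0)          # osteotomy hinge (illustrative coordinates, cm)
ankle_now = (3.0, -40.0)    # current position of the distal axis point
ankle_goal = (-2.0, -40.0)  # desired position after correction
alpha = angle_at_hinge(hinge, ankle_now, ankle_goal)
print(f"correction angle = {alpha:.2f} deg")
```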

11.
Acta Biotheor ; 72(2): 5, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753122

ABSTRACT

The etiological account of teleological function is beset by several difficulties, which I propose to solve by grafting onto the etiological theory a subordinated goal-contribution clause. This approach enables us to ascribe neither too many teleofunctions nor too few; to give a unitary, one-clause analysis that works just as well for teleological functions derived from Darwinian evolution, as for those derived from human intention; and finally, to save the etiological theory from falsification, by explaining how, in spite of appearances, the theory can allow for evolutionary function loss.


Subject(s)
Biological Evolution , Humans
12.
J Am Med Inform Assoc ; 31(7): 1540-1550, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38804963

ABSTRACT

OBJECTIVE: Predicting mortality after acute myocardial infarction (AMI) is crucial for timely prescription and treatment of AMI patients, but there are no appropriate AI systems for clinicians. Our primary goal is to develop a reliable and interpretable AI system and provide valuable insights regarding short- and long-term mortality. MATERIALS AND METHODS: We propose RIAS, an end-to-end framework designed with reliability and interpretability at its core that automatically optimizes the given model. Using RIAS, clinicians get accurate and reliable predictions that can be used as likelihoods, together with global and local explanations and "what if" scenarios for achieving desired outcomes. RESULTS: We apply RIAS to AMI prognosis prediction data from the Korean Acute Myocardial Infarction Registry. We compared FT-Transformer with XGBoost and MLP and found that FT-Transformer is superior in sensitivity, with performance comparable to XGBoost in AUROC and F1 score. Furthermore, RIAS reveals the significance of statin-based medications, beta-blockers, and age for mortality regardless of time period. Lastly, we showcase reliable and interpretable results of RIAS with local explanations and counterfactual examples for several realistic scenarios. DISCUSSION: RIAS addresses the "black-box" issue in AI by providing both global and local explanations based on SHAP values, along with reliable predictions interpretable as actual likelihoods. The system's "what if" counterfactual explanations enable clinicians to simulate patient-specific scenarios under various conditions, enhancing its practical utility. CONCLUSION: The proposed framework provides reliable and interpretable predictions along with counterfactual examples.
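A "what if" counterfactual of the kind RIAS reports can be sketched in a few lines: change one input of a fitted risk model and see how the predicted likelihood moves. The model and features below are synthetic stand-ins, not the registry variables or the RIAS framework itself.

```python
# Counterfactual sketch: perturb one feature of a fitted risk model and
# compare predicted probabilities (synthetic data, illustrative feature).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=2)
model = LogisticRegression().fit(X, y)

patient = X[0].copy()
baseline = model.predict_proba(patient.reshape(1, -1))[0, 1]

what_if = patient.copy()
what_if[3] += 1.0            # hypothetical change to one input feature
counterfactual = model.predict_proba(what_if.reshape(1, -1))[0, 1]
print(f"predicted risk: {baseline:.3f} -> {counterfactual:.3f}")
```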


Subject(s)
Artificial Intelligence , Myocardial Infarction , Humans , Myocardial Infarction/mortality , Myocardial Infarction/diagnosis , Prognosis , Male , Registries , Female , Republic of Korea , Reproducibility of Results , Aged , Middle Aged
13.
Sci Total Environ ; 935: 173382, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38777050

ABSTRACT

With the development of monitoring technology, the variety of ozone precursors that can be detected by monitoring stations has increased dramatically, greatly enriching the information available to ozone prediction and explanation studies. This study completed feature mining and reconstruction of multi-source data (meteorological data, conventional pollutant data, and precursor data) using a machine learning approach and built a cross-stacked ensemble learning model (CSEM). In the feature engineering process, this study reconstructed the two VOC variables most associated with ozone and found that using the top seven variables with the highest contributions worked best. The CSEM includes three base models, random forest, extreme gradient boosting trees, and LSTM, whose parameters are learned under the integrated training of cross-stacking. The cross-stacked integrated training method enables the second-layer learner of the ensemble model to make full use of the base models' learning results as training data, thereby improving the prediction performance of the model. The model predicted hourly ozone concentrations with R2 of 0.94, 0.97, and 0.96 for mild, moderate, and severe pollution cases, respectively, and mean absolute error (MAE) of 4.48 µg/m3, 5.01 µg/m3, and 8.71 µg/m3, respectively. The model also predicted ozone concentrations under different NOx and VOC reduction scenarios; the results show that with a 20 % reduction in VOCs and no change in NOx in the study area, 75.28 % of cases achieved a reduction and 15.73 % of cases fell below 200 µg/m3. In addition, a comprehensive evaluation index for prediction models is proposed in this paper, which can be extended to performance comparison and analysis of any prediction model. For practical application, machine learning feature selection and cross-stacked ensemble models can be jointly applied to real-time ozone prediction and emission-reduction strategy analysis.
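The stacking idea described, where a second-layer learner is trained on the base models' out-of-fold predictions, can be sketched with sklearn's StackingRegressor. This is a generic stand-in under stated assumptions: RandomForest and GradientBoosting replace the paper's RF/XGBoost/LSTM trio (the LSTM is omitted), and the data are synthetic, not air-quality measurements.

```python
# Stacking sketch: base learners' cross-validated predictions feed a
# Ridge second-layer learner (synthetic regression data).
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=7, noise=10, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=3)),
                ("gb", GradientBoostingRegressor(random_state=3))],
    final_estimator=Ridge(),  # second layer fed by out-of-fold predictions
    cv=5,
)
stack.fit(X_tr, y_tr)
r2 = r2_score(y_te, stack.predict(X_te))
print(f"test R2 = {r2:.3f}")
```

Training the meta-learner on out-of-fold rather than in-sample base predictions is what keeps the second layer from simply memorizing the base models' overfit outputs.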

14.
Sci Total Environ ; 937: 173426, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-38796015

ABSTRACT

Artificial structures can influence wetland topology and sediment properties, thereby shaping plant distribution and composition, and macrobenthos composition is correlated with plant cover. However, few previous studies on the impact of artificial structures on plant distribution have incorporated time-series data or extended field surveys. In this study, a machine-learning-based species distribution model built on decade-long observations was used to investigate the correlation between the shift in the distribution of B. planiculmis, artificial-structure-induced elevation changes, and the expansion of other plants, as well as their connection to soil properties and the dynamics of crab composition under plants in the Gaomei Wetland. A long short-term memory model (LSTM) with Shapley additive explanations (SHAP) was employed to predict the distribution of B. planiculmis and explain feature importance. The results indicated that wetland topology was influenced by both artificial structures and plants. Areas initially colonized by B. planiculmis were replaced by other species. Soil properties showed significant differences among plant patches; however, principal component analysis (PCA) of sediment properties and niche similarity analysis showed that the plants' niches overlapped. Crab composition differed under different plants. The presence probability of B. planiculmis near woody paths decreased according to the LSTM and field survey data. SHAP analysis suggested that the distribution of other plants, the historical distribution of B. planiculmis, and sediment properties contributed significantly to the presence probability of B. planiculmis. A sharp decrease in SHAP values with increasing NDVI at suitable elevations, together with the overlap in the PCA of sediment properties and niche similarity, indicated potential competition among plants. This decade-long time-series field survey revealed the joint effects of artificial structures and vegetation on the dynamics of topology and soil properties.
These changes influenced plant distribution through potential plant competition. LSTM with SHAP provided valuable insights into the mechanisms underlying the effects of artificial structures on the plant zonation process.


Subject(s)
Machine Learning , Wetlands , Brachyura , Environmental Monitoring/methods , Soil/chemistry , China , Plants , Animals
15.
Stud Hist Philos Sci ; 105: 50-58, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754358

ABSTRACT

In this essay I suggest that we view design principles in systems biology as minimal models, for a design principle usually exhibits universal behaviors that are common to a whole range of heterogeneous (living and nonliving) systems with different underlying mechanisms. A well-known design principle in systems biology, integral feedback control, is discussed, showing that it satisfies all the conditions for a model to be a minimal model. This approach has significant philosophical implications: it not only accounts for how design principles explain, but also helps clarify one dispute over design principles, e.g., whether design principles provide mechanistic explanations or a distinct kind of explanations called design explanations.
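Integral feedback control, the design principle this essay treats as a minimal model, is easy to exhibit concretely: a controller that integrates the error between output and setpoint forces the output back to the setpoint exactly after a step disturbance, whatever the disturbance's size (robust perfect adaptation). The simulation below is a generic textbook sketch, not drawn from the essay; all parameter values are illustrative.

```python
# Integral feedback sketch: plant dy/dt = z + d - y, controller
# dz/dt = k_i * (setpoint - y). Steady state forces y = setpoint for any
# constant disturbance d (Euler integration, illustrative parameters).
setpoint, k_i, dt = 1.0, 0.5, 0.01
y, z = setpoint, setpoint          # start at the pre-disturbance equilibrium
for step in range(4000):           # 40 time units
    d = 0.8 if step >= 1000 else 0.0   # step disturbance arrives at t = 10
    dy = z + d - y                 # plant: tracks control input + disturbance
    dz = k_i * (setpoint - y)      # controller: integrates the tracking error
    y, z = y + dy * dt, z + dz * dt
print(f"output after disturbance: y = {y:.4f} (setpoint = {setpoint})")
```

The universality the essay points to is visible here: nothing in the argument depends on what the plant physically is, only on the presence of the integrator.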


Subject(s)
Systems Biology , Systems Biology/methods , Models, Biological
16.
Stud Hist Philos Sci ; 105: 109-119, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38761539

ABSTRACT

This paper investigates conceptions of explanation, teleology, and analogy in the works of Immanuel Kant (1724-1804) and Georges Cuvier (1769-1832). Richards (2000, 2002) and Zammito (2006, 2012, 2018) have argued that Kant's philosophy provided an obstacle for the project of establishing biology as a proper science around 1800. By contrast, Russell (1916), Outram (1986), and Huneman (2006, 2008) have argued, similar to suggestions from Lenoir (1989), that Kant's philosophy influenced the influential naturalist Georges Cuvier. In this article, I wish to expand on and further the work of Russell, Outram, and Huneman by adopting a novel perspective on Cuvier and considering (a) the similar conceptions of proper science and explanation of Kant and Cuvier, and (b) the similar conceptions of the role of teleology and analogy in the works of Kant and Cuvier. The similarities between Kant and Cuvier show, contrary to the interpretation of Richards and Zammito, that some of Kant's philosophical ideas, whether they derived from him or not, were fruitfully applied by some life scientists who wished to transform life sciences into proper sciences around 1800. However, I also show that Cuvier, in contrast to Kant, had a workable strategy for transforming the life sciences into proper sciences, and that he departed from Kant's philosophy of science in crucial respects.


Subject(s)
Anatomy, Comparative , Natural History , Philosophy , History, 19th Century , Philosophy/history , Natural History/history , History, 18th Century , Anatomy, Comparative/history
17.
Front Big Data ; 7: 1392662, 2024.
Article in English | MEDLINE | ID: mdl-38784676

ABSTRACT

In recent years, analyzing explanations for the predictions of Graph Neural Networks (GNNs) has attracted increasing attention. Despite this progress, most existing methods do not adequately consider the inherent uncertainties stemming from the randomness of model parameters and graph data, which may lead to overconfident and misleading explanations. Moreover, quantifying these uncertainties is challenging for most GNN explanation methods, since they obtain prediction explanations in a post-hoc, model-agnostic manner without considering the randomness of graph data and model parameters. To address these problems, this paper proposes a novel uncertainty quantification framework for GNN explanations. To mitigate the randomness of graph data in the explanation, our framework accounts for two distinct data uncertainties, allowing a direct assessment of the uncertainty in GNN explanations. To mitigate the randomness of learned model parameters, our method learns the parameter distribution directly from the data, obviating the need for assumptions about specific distributions. The explanation uncertainty within model parameters is also quantified based on the learned parameter distributions. This holistic approach can be integrated with any post-hoc GNN explanation method. Empirical results from our study show that the proposed method sets a new standard for GNN explanation performance across diverse real-world graph benchmarks.

18.
Front Neurol ; 15: 1305543, 2024.
Article in English | MEDLINE | ID: mdl-38711558

ABSTRACT

Objective: Chronic subdural hematoma (CSDH) is a neurological condition with high recurrence rates, primarily observed in the elderly population. Although several risk factors have been identified, predicting CSDH recurrence remains a challenge. Given the potential of machine learning (ML) to extract meaningful insights from complex data sets, our study aims to develop and validate ML models capable of accurately predicting postoperative CSDH recurrence. Methods: Data from 447 consecutive CSDH patients treated with burr-hole irrigation at Wenzhou Medical University's First Affiliated Hospital (December 2014-April 2019) were studied; 312 patients formed the development cohort, while 135 comprised the test cohort. The Least Absolute Shrinkage and Selection Operator (LASSO) method was employed to select crucial features associated with recurrence. Eight machine learning algorithms were used to construct prediction models for hematoma recurrence from demographic, laboratory, and radiological features. The Borderline Synthetic Minority Over-sampling Technique (SMOTE) was applied to address data imbalance, and Shapley Additive Explanation (SHAP) analysis was utilized to improve model visualization and interpretability. Model performance was assessed using metrics such as AUROC, sensitivity, specificity, F1 score, calibration plots, and decision curve analysis (DCA). Results: Our optimized ML models exhibited prediction accuracies ranging from 61.0% to 86.2% for hematoma recurrence in the validation set. Notably, the Random Forest (RF) model surpassed other algorithms, achieving an accuracy of 86.2%. SHAP analysis confirmed these results, highlighting key clinical predictors of CSDH recurrence risk, including age, alanine aminotransferase level, fibrinogen level, thrombin time, and maximum hematoma diameter. The RF model yielded an accuracy of 92.6% with an AUC value of 0.834 in the test dataset.
Conclusion: Our findings underscore the efficacy of machine learning algorithms, notably the integration of the RF model with SMOTE, in forecasting the recurrence of postoperative chronic subdural hematoma. Leveraging the RF model, we devised an online calculator that may serve as a pivotal instrument in tailoring therapeutic strategies and implementing timely preventive interventions for high-risk patients.
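The oversample-then-fit pipeline described here can be sketched as follows. The authors used imbalanced-learn's Borderline-SMOTE; to keep this example dependent only on numpy/sklearn, a minimal SMOTE-style interpolation between random minority pairs stands in (true SMOTE interpolates toward k-nearest neighbours), and the data are synthetic, not the CSDH cohort.

```python
# SMOTE-style oversampling (hand-rolled, simplified) followed by a random
# forest, on synthetic imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def smote_like(X_min, n_new, rng):
    """New minority samples by interpolating random pairs (simplified SMOTE)."""
    i = rng.integers(0, len(X_min), n_new)
    j = rng.integers(0, len(X_min), n_new)
    lam = rng.random((n_new, 1))
    return X_min[i] + lam * (X_min[j] - X_min[i])

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=1200, weights=[0.9], random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4, stratify=y)

X_min = X_tr[y_tr == 1]
n_new = np.sum(y_tr == 0) - np.sum(y_tr == 1)      # balance the classes
X_bal = np.vstack([X_tr, smote_like(X_min, n_new, rng)])
y_bal = np.concatenate([y_tr, np.ones(n_new, dtype=int)])

rf = RandomForestClassifier(n_estimators=100, random_state=4).fit(X_bal, y_bal)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"AUROC = {auc:.3f}")
```

Note that oversampling is applied only to the training split; synthesizing samples before the split would leak information into the test set.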

19.
J Med Internet Res ; 26: e51354, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691403

ABSTRACT

BACKGROUND: Acute kidney disease (AKD) affects more than half of critically ill elderly patients with acute kidney injury (AKI), which leads to worse short-term outcomes. OBJECTIVE: We aimed to establish 2 machine learning models to predict the risk and prognosis of AKD in the elderly and to deploy the models as online apps. METHODS: Data on elderly patients with AKI (n=3542) and AKD (n=2661) from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database were used to develop 2 models for predicting AKD risk and in-hospital mortality, respectively. Data collected from Xiangya Hospital of Central South University were used for external validation. A bootstrap method was used for internal validation to obtain relatively stable results. We extracted the indicators within 24 hours of the first diagnosis of AKI and the fluctuation range of some indicators, namely delta (day 3 after AKI minus day 1), as features. Six machine learning algorithms were used for modeling; the area under the receiver operating characteristic curve (AUROC), decision curve analysis, and calibration curves were used for evaluation; Shapley additive explanation (SHAP) analysis was used for visual interpretation; and the Heroku platform was used to deploy the best-performing models as web-based apps. RESULTS: For predicting the risk of AKD in elderly patients with AKI during hospitalization, the Light Gradient Boosting Machine (LightGBM) showed the best overall performance in the training (AUROC=0.844, 95% CI 0.831-0.857), internal validation (AUROC=0.853, 95% CI 0.841-0.865), and external (AUROC=0.755, 95% CI 0.699-0.811) cohorts. In addition, LightGBM performed well for AKD prognostic prediction in the training (AUROC=0.861, 95% CI 0.843-0.878), internal validation (AUROC=0.868, 95% CI 0.851-0.885), and external (AUROC=0.746, 95% CI 0.673-0.820) cohorts. The models, deployed as online prediction apps, allowed users to obtain predictions and to submit new data as feedback for model iteration.
In the SHAP-based importance ranking and correlation visualization of each model's top 10 influencing factors, partial dependence plots revealed optimal cutoffs for some modifiable indicators. The top 5 factors predicting the risk of AKD were creatinine on day 3, sepsis, delta blood urea nitrogen (BUN), diastolic blood pressure (DBP), and heart rate, while the top 5 factors determining in-hospital mortality were age, BUN on day 1, vasopressor use, BUN on day 3, and partial pressure of carbon dioxide (PaCO2). CONCLUSIONS: We developed and validated 2 online apps for predicting the risk of AKD and its prognostic mortality in elderly patients, respectively. The top 10 factors that influenced AKD risk and mortality during hospitalization were identified and explained visually, which might provide useful applications for intelligent management and suggestions for future prospective research.
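The bootstrap internal validation mentioned in the methods can be sketched by resampling the evaluation set with replacement and recomputing the AUROC to obtain a point estimate with a percentile interval. GradientBoosting stands in for LightGBM, and the data are synthetic, so nothing here reproduces the reported CIs.

```python
# Bootstrap sketch: resample the held-out set and recompute AUROC to get
# a mean and 95 % percentile interval (synthetic data, stand-in model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5, stratify=y)
clf = GradientBoostingClassifier(random_state=5).fit(X_tr, y_tr)
score = clf.predict_proba(X_te)[:, 1]

rng = np.random.default_rng(5)
aucs = []
for _ in range(200):                       # bootstrap resamples
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:      # AUROC needs both classes
        continue
    aucs.append(roc_auc_score(y_te[idx], score[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
mean_auc = float(np.mean(aucs))
print(f"AUROC = {mean_auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```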


Subject(s)
Acute Kidney Injury , Critical Illness , Hospitalization , Internet , Machine Learning , Humans , Aged , Critical Illness/mortality , Prognosis , Acute Kidney Injury/mortality , Acute Kidney Injury/diagnosis , Female , Male , Hospitalization/statistics & numerical data , Aged, 80 and over , Hospital Mortality , Risk Assessment/methods
20.
Sci Rep ; 14(1): 12541, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38821997

ABSTRACT

Accurate prediction of the remaining useful life (RUL) of lithium-ion batteries is advantageous for maintaining the stability of electrical systems. In this paper, an interpretable online method that can reflect capacity regeneration is proposed to accurately estimate the RUL. First, four health indicators (HIs) are extracted from the charging and discharging process for online prediction. The HIs model is then trained using support vector regression to obtain future features, and a capacity model based on Gaussian process regression (GPR) is trained and analyzed with Shapley additive explanations (SHAP). Meanwhile, the state space for capacity prediction is constructed with the addition of Gaussian non-white noise to simulate capacity regeneration, and the modified predicted HIs and noise are obtained with an unscented Kalman filter. Finally, according to the SHAP explainer, the predicted HIs acting as the baseline and the modified HIs containing information on capacity regeneration are chosen to predict the RUL. In addition, the bounds of the confidence intervals (CIs) are calculated separately to reflect the regenerated capacity. The experimental results demonstrate that the proposed online method achieves high accuracy and effectively captures capacity regeneration: the absolute error of the predicted failure RUL is below 5 cycles, and the narrowest confidence interval spans only 2 cycles.
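The GPR piece of such a pipeline can be sketched on a synthetic fading-capacity curve: fit a Gaussian process and read off a predictive confidence band, as the paper does for its RUL bounds. The kernel, fade rate, and noise level below are assumptions for illustration only; the SVR, UKF, and SHAP stages are omitted.

```python
# GPR sketch: fit a synthetic capacity-fade curve and extrapolate with a
# 95 % confidence band (illustrative kernel and data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
cycles = np.arange(0, 100, 2.0).reshape(-1, 1)
capacity = 1.0 - 0.004 * cycles.ravel() + rng.normal(0, 0.005, len(cycles))

gpr = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-4),
                               normalize_y=True, random_state=6)
gpr.fit(cycles, capacity)

future = np.arange(100, 120, 2.0).reshape(-1, 1)      # beyond the data
mean, std = gpr.predict(future, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95 % confidence band
print(f"predicted capacity at cycle 118: {mean[-1]:.3f} "
      f"[{lower[-1]:.3f}, {upper[-1]:.3f}]")
```

The band widening with extrapolation distance is the behaviour the paper exploits: the CI bounds carry the uncertainty injected by capacity regeneration.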
