Results 1 - 2 of 2
1.
Int J Surg ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833337

ABSTRACT

BACKGROUND: Warfarin is a common oral anticoagulant whose effects vary widely among individuals. Numerous dose-prediction algorithms based on cross-sectional data have been reported, built with multiple linear regression or machine learning. This study aimed to construct an information fusion perturbation theory and machine learning model of warfarin blood levels based on clinical longitudinal data from cardiac surgery patients.

METHODS AND MATERIALS: Data from 246 patients were obtained from electronic medical records. Continuous variables were processed by calculating the distance of each raw value from its moving average (MA ∆vki(sj)), and categorical variables in different attribute groups were processed using the Euclidean distance (ED ǁ∆vk(sj)ǁ). Regression and classification analyses were performed on the raw data, the MA ∆vki(sj) values, and the ED ǁ∆vk(sj)ǁ values, using different machine-learning algorithms in the STATISTICA and WEKA software packages.

RESULTS: The random forest (RF) algorithm performed best at predicting continuous outputs from the raw data, with correlation coefficients of 0.978 (training) and 0.595 (validation) and mean absolute errors of 0.135 (training) and 0.362 (validation); 59.0% of its predictions were ideal. General discriminant analysis (GDA) performed best at predicting categorical outputs from the MA ∆vki(sj) data, with total true positive rates (TPR) of 95.4% (training) and 95.6% (validation).

CONCLUSIONS: An information fusion perturbation theory and machine learning model for predicting warfarin blood levels was established. A model based on the RF algorithm could be used to predict the target international normalized ratio (INR), and a model based on the GDA algorithm could be used to predict the probability of being within the target INR range under different clinical scenarios.
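The two descriptor types described in the methods (moving-average distance for continuous variables, Euclidean distance for encoded categorical groups) can be sketched as below. This is a minimal illustration only: the window size, the zero-padded moving average, and the function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

def ma_distance(values, window=3):
    """Distance of each raw value from its moving average (MA delta),
    analogous to the MA ∆vki(sj) descriptor. Window size is an assumption."""
    values = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    ma = np.convolve(values, kernel, mode="same")  # zero-padded at the edges
    return values - ma

def euclidean_group_distance(vector, group_reference):
    """Euclidean distance ǁ∆vk(sj)ǁ of an encoded categorical vector
    from a reference (e.g. attribute-group mean)."""
    v = np.asarray(vector, dtype=float)
    r = np.asarray(group_reference, dtype=float)
    return float(np.linalg.norm(v - r))

# Toy longitudinal INR-like series for one patient
inr = [2.1, 2.4, 2.2, 3.0, 2.8]
deltas = ma_distance(inr)

# Toy one-hot-encoded categorical attribute vs. its group mean
dist = euclidean_group_distance([1, 0, 0], [0.5, 0.25, 0.25])
```

These per-visit deltas, rather than the raw values, would then be fed to the regression or classification learners.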

2.
Adv Sci (Weinh) ; 11(13): e2305177, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38258479

ABSTRACT

Familial hypercholesterolemia (FH) is an inherited disorder of cholesterol metabolism; 90% of cases are caused by mutations in the LDL receptor gene (LDLR), primarily missense mutations. This study aims to integrate six commonly used predictive tools into a new model for predicting LDLR mutation pathogenicity and mapping hot-spot residues. Six predictors are selected: PolyPhen-2, SIFT, MutationTaster, REVEL, VARITY, and MLb-LDLr. The accuracy of each tool is tested against characterized variants annotated in ClinVar, and the individual models are then integrated into a more accurate one by bioinformatic and machine-learning techniques. The resulting optimized model shows a specificity of 96.71% and a sensitivity of 98.36%. Hot-spot residues with high pathogenic potential appear across all domains except the signal peptide and the O-linked domain. In addition, mapping this information onto the 3D structure of the LDLr highlights potentially pathogenic clusters within the different domains, which may be related to specific biological functions. The results of this work provide a powerful tool for classifying LDLR pathogenic variants. Moreover, an open-access user interface (OptiMo-LDLr) is provided to the scientific community. This study shows that combining several predictive tools yields a more accurate prediction, helping clinicians in FH diagnosis.
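The core evaluation logic of such an integration, combining per-tool pathogenicity calls and scoring the consensus against ClinVar-style labels with sensitivity and specificity, can be sketched as follows. The majority vote is a simple stand-in for the paper's machine-learning integration; all names and the toy data are illustrative assumptions.

```python
def majority_vote(per_tool_calls):
    """Combine per-tool pathogenicity calls (True = pathogenic) by simple
    majority across tools. A stand-in for an ML-based integration."""
    return [sum(calls) > len(calls) / 2 for calls in zip(*per_tool_calls)]

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Toy calls from three hypothetical tools over four variants
calls = [
    [True, True, False, False],   # tool A
    [True, False, False, True],   # tool B
    [True, True, False, False],   # tool C
]
consensus = majority_vote(calls)

# Toy ClinVar-style ground truth (True = pathogenic)
truth = [True, False, False, False]
sens, spec = sensitivity_specificity(truth, consensus)
```

An actual integration would instead train a classifier on the six tools' scores, but the sensitivity/specificity evaluation against ClinVar annotations would look the same.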


Subjects
Hyperlipoproteinemia Type II , Humans , Phenotype , Mutation , Hyperlipoproteinemia Type II/diagnosis , Hyperlipoproteinemia Type II/genetics , Receptors, LDL/genetics , Receptors, LDL/metabolism , Computer Simulation