Results 1 - 15 of 15
1.
BMJ Open; 13(4): e066249, 2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37116996

ABSTRACT

INTRODUCTION: Meta-analytical evidence confirms a range of interventions, including mindfulness, physical activity and sleep hygiene, can reduce psychological distress in university students. However, it is unclear which intervention is most effective. Artificial intelligence (AI)-driven adaptive trials may be an efficient method to determine what works best and for whom. The primary purpose of the study is to rank the effectiveness of mindfulness, physical activity, sleep hygiene and an active control in reducing distress, using a multiarm contextual bandit-based AI-adaptive trial method. Furthermore, the study will explore which interventions have the largest effect for students with different levels of baseline distress severity. METHODS AND ANALYSIS: The Vibe Up study is a pragmatically oriented, decentralised AI-adaptive group sequential randomised controlled trial comparing the effectiveness of one of three brief, 2-week digital self-guided interventions (mindfulness, physical activity or sleep hygiene) or an active control (ecological momentary assessment) in reducing self-reported psychological distress in Australian university students. The adaptive trial methodology involves up to 12 sequential mini-trials that allow for the optimisation of allocation ratios. The primary outcome is change in psychological distress (Depression, Anxiety and Stress Scale, 21-item version, DASS-21 total score) from preintervention to postintervention. Secondary outcomes include change in physical activity, sleep quality and mindfulness from preintervention to postintervention. Planned contrasts will compare the four groups (ie, the three interventions and the control) using self-reported psychological distress at prespecified time points for interim analyses. The study aims to determine the best performing intervention, as well as the ranking of the remaining interventions.
ETHICS AND DISSEMINATION: Ethical approval was sought and obtained from the UNSW Sydney Human Research Ethics Committee (HREC A, HC200466). A trial protocol adhering to the requirements of the Guideline for Good Clinical Practice was prepared for and approved by the Sponsor, UNSW Sydney (Protocol number: HC200466_CTP). TRIAL REGISTRATION NUMBER: ACTRN12621001223820.
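The allocation-ratio optimisation at the heart of this trial design can be illustrated with a minimal Thompson-sampling sketch: a simplified, NON-contextual stand-in for the trial's contextual bandit. The arm names, binary "improved" outcome, and interim counts below are hypothetical, not trial data.

```python
import random

# Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
# probability of producing a binary "improved" outcome; allocation ratios
# for the next mini-trial are estimated by Monte Carlo.
ARMS = ["mindfulness", "physical_activity", "sleep_hygiene", "control"]

def allocation_ratios(successes, failures, draws=10000, seed=0):
    """Estimate the share of future participants each arm should receive."""
    rng = random.Random(seed)
    wins = {arm: 0 for arm in ARMS}
    for _ in range(draws):
        # Sample one plausible success rate per arm from its posterior,
        # then allocate this hypothetical participant to the best sample.
        sampled = {arm: rng.betavariate(successes[arm] + 1, failures[arm] + 1)
                   for arm in ARMS}
        wins[max(sampled, key=sampled.get)] += 1
    return {arm: wins[arm] / draws for arm in ARMS}

# Illustrative interim counts (hypothetical, not study results).
successes = {"mindfulness": 30, "physical_activity": 22,
             "sleep_hygiene": 18, "control": 10}
failures = {"mindfulness": 20, "physical_activity": 28,
            "sleep_hygiene": 32, "control": 40}
ratios = allocation_ratios(successes, failures)
```

In the actual trial the bandit is contextual, so arm posteriors would additionally condition on participant features such as baseline distress severity.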


Subjects
Mindfulness, Psychological Distress, Humans, Universities, Artificial Intelligence, Australia, Mindfulness/methods, Students/psychology, Stress, Psychological/prevention & control, Stress, Psychological/psychology, Randomized Controlled Trials as Topic
2.
Antibiotics (Basel); 12(3), 2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36978331

ABSTRACT

Oxazolidinones are a broad-spectrum class of synthetic antibiotics that bind to the 50S ribosomal subunit of Gram-positive and Gram-negative bacteria. Many crystal structures of ribosomes with oxazolidinone ligands have been reported in the literature, facilitating structure-based design using methods such as molecular docking. It would be of great interest to know in advance how well docking methods can reproduce the correct ligand binding modes and rank them correctly. We examined the performance of five molecular docking programs (AutoDock 4, AutoDock Vina, DOCK 6, rDock, and RLDock) for their ability to model ribosome-ligand interactions with oxazolidinones. Eleven ribosomal crystal structures with oxazolidinones as the ligands were docked. Accuracy was evaluated by calculating the root-mean-square deviation (RMSD) of the docked complexes and by each program's internal scoring function. Ranked by the median RMSD between the native and predicted poses, the programs ordered as DOCK 6 > AutoDock 4 > AutoDock Vina > rDock >> RLDock. The results demonstrate that the top-performing program, DOCK 6, could accurately replicate the ligand binding in only four of the eleven ribosomes, owing to the poor electron density of the remaining ribosomal structures. In this study, we further benchmarked the DOCK 6 docking algorithm and its scoring in improving virtual screening (VS) enrichment, using a dataset of 285 oxazolidinone derivatives docked against oxazolidinone binding sites in the S. aureus ribosome. However, there was no clear trend between the structure and activity of the oxazolidinones in VS. Overall, the docking performance indicates that the high flexibility of the RNA pocket does not allow for accurate docking prediction, highlighting the need to validate VS protocols for ligand-RNA systems before future use.
We subsequently developed a re-scoring method incorporating absolute docking scores and molecular descriptors; the results indicate that the descriptors greatly improve the correlation between docking scores and pMIC values. Morgan fingerprint analysis suggested that DOCK 6 underpredicted molecules with tail modifications containing acetamide, N-methylacetamide, or N-ethylacetamide and overpredicted derivatives with methylamino groups. Alternatively, a ligand-based approach similar to a field template indicated that each derivative's tail group makes strong positive and negative electrostatic potential contributions to microbial activity. These results indicate that VS campaigns against ribosomal antibiotics should be performed with care and that more comprehensive strategies, including molecular dynamics simulations and relative free energy calculations, may be necessary in conjunction with VS and docking.
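The pose-accuracy criterion underlying the rankings above can be sketched as a plain heavy-atom RMSD between a native crystallographic pose and a docked pose. The coordinates below are toy values; the sketch assumes the two atom lists are already matched and in the same reference frame, and by convention a docked pose within 2.0 A RMSD of the native pose is counted as correctly reproduced.

```python
import math

def rmsd(native, docked):
    """Root-mean-square deviation between two atom-matched coordinate lists."""
    assert len(native) == len(docked)
    sq = sum((a - b) ** 2
             for atom_n, atom_d in zip(native, docked)
             for a, b in zip(atom_n, atom_d))
    return math.sqrt(sq / len(native))

# Toy two-atom "ligand": the docked pose is the native pose shifted 1 A in y.
native = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
docked = [(0.0, 1.0, 0.0), (1.5, 1.0, 0.0)]
```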

3.
Front Neurol; 12: 670379, 2021.
Article in English | MEDLINE | ID: mdl-34646226

ABSTRACT

Aim: To use available electronic administrative records to assess data reliability, predict discharge destination, and identify risk factors associated with specific outcomes following hospital admission with stroke, compared with stroke-specific clinical factors, using machine learning techniques. Method: The study included 2,531 patients with at least one admission with a confirmed diagnosis of stroke, collected from a regional hospital in Australia within 2009-2013. Using machine learning (penalized regression with Lasso), patients with their index admission between June 2009 and July 2012 were used to derive predictive models, and patients with their index admission between July 2012 and June 2013 were used for validation. Three stroke types [intracerebral hemorrhage (ICH), ischemic stroke, transient ischemic attack (TIA)] and five outcome comparison settings were considered. Our electronic administrative record-based predictive model was compared with a predictive model composed of "baseline" clinical features more specific to stroke, such as age, gender, smoking habits, co-morbidities (high cholesterol, hypertension, atrial fibrillation, and ischemic heart disease), types of imaging performed (CT scan, MRI, etc.), and occurrence of in-hospital pneumonia. Risk factors associated with the likelihood of negative outcomes were identified. Results: The data were highly reliable for predicting discharge to rehabilitation and all other outcomes vs. death for ICH (AUC 0.85 and 0.825, respectively), all discharge outcomes except home vs. rehabilitation for ischemic stroke, and discharge home vs. others and home vs. rehabilitation for TIA (AUC 0.948 and 0.873, respectively). Electronic health record data appeared to provide improved prediction of outcomes over stroke-specific clinical factors in the machine learning models.
Common risk factors associated with a negative impact on expected outcomes appeared clinically intuitive, and included older age groups, prior ventilatory support, urinary incontinence, need for imaging, and need for allied health input. Conclusion: Electronic administrative records from this cohort produced reliable outcome prediction and identified clinically appropriate factors negatively impacting most outcome variables following hospital admission with stroke. This presents a means of future identification of modifiable factors associated with patient discharge destination. This may potentially aid in patient selection for certain interventions and aid in better patient and clinician education regarding expected discharge outcomes.

4.
Health Care Manag Sci; 24(4): 786-798, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34389924

ABSTRACT

PURPOSE: Our objective is to identify predictive factors and predict hospital length of stay (LOS) in dengue patients, for efficient utilization of hospital resources. METHODS: We collected 1360 medical records of patients with confirmed dengue infection from 2012 to 2017 at the Max group of hospitals in India. We applied two different data mining algorithms, logistic regression (LR) with elastic-net and random forest, to extract predictive factors and predict LOS. We used the area under the curve (AUC), sensitivity, and specificity to evaluate the performance of the classifiers. RESULTS: The classifiers performed well, with logistic regression (LR) with elastic-net providing an AUC score of 0.75 and random forest providing a score of 0.72. Out of 1148 patients, 364 (32%) had a prolonged LOS (> 5 days), and the overall hospitalization duration was 4.03 ± 2.44 days (median ± IQR). The highest number of dengue cases belonged to the age group of 10-20 years (21.1%), with a male predominance. Moreover, the study showed that blood transfusion, emergency admission, assisted ventilation, low haemoglobin, high total leucocyte count (TLC), low or high haematocrit, and low lymphocytes have a significant correlation with prolonged LOS. CONCLUSION: Our findings demonstrated that logistic regression with elastic-net was the best fit, with an AUC of 0.75, and that there is a significant association between LOS greater than five days and the identified patient-specific variables. This method can identify the patients at highest risk and help focus time and resources.
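The AUC reported above can be computed directly from risk scores as the probability that a randomly chosen prolonged-LOS patient is scored higher than a randomly chosen non-prolonged patient, with ties counted as half. The six patients and scores below are hypothetical.

```python
def auc(labels, scores):
    """Rank-based AUC: P(score of a random positive > score of a random negative)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # 1 = prolonged LOS (> 5 days)
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # hypothetical model risk scores
```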


Subjects
Dengue, Hospitalization, Adolescent, Adult, Child, Dengue/epidemiology, Dengue/therapy, Female, Hospitals, Humans, Length of Stay, Logistic Models, Male, Retrospective Studies, Young Adult
5.
BioData Min; 14(1): 37, 2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34353329

ABSTRACT

BACKGROUND: The last decade has seen a major increase in the availability of genomic data. This includes expert-curated databases that describe the biological activity of genes, as well as high-throughput assays that measure gene expression in bulk tissue and single cells. Integrating these heterogeneous data sources can generate new hypotheses about biological systems. Our primary objective is to combine population-level drug-response data with patient-level single-cell expression data to predict how any gene will respond to any drug for any patient. METHODS: We take two approaches to benchmarking a "dual-channel" random walk with restart (RWR) for data integration. First, we evaluate how well RWR can predict known gene functions from single-cell gene co-expression networks. Second, we evaluate how well RWR can predict known drug responses from individual cell networks. We then present two exploratory applications. In the first application, we combine the Gene Ontology database with glioblastoma single cells from five individual patients to identify genes whose functions differ between cancers. In the second application, we combine the LINCS drug-response database with the same glioblastoma data to identify genes that may exhibit patient-specific drug responses. CONCLUSIONS: Our manuscript introduces two innovations to the integration of heterogeneous biological data. First, we use a "dual-channel" method to predict up-regulation and down-regulation separately. Second, we use individualized single-cell gene co-expression networks to make personalized predictions. These innovations let us predict gene function and drug response for individual patients. Taken together, our work shows promise that single-cell co-expression data could be combined in heterogeneous information networks to facilitate precision medicine.
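A minimal sketch of the random walk with restart (RWR) named above, on a toy co-expression graph rather than the manuscript's networks: at each step, probability mass either restarts at the seed gene(s) with probability `restart` or flows to a random neighbour, and the resulting stationary distribution ranks nodes by proximity to the seeds.

```python
def rwr(graph, seeds, restart=0.3, iters=200):
    """Random walk with restart by power iteration on an adjacency-list graph."""
    nodes = sorted(graph)
    # Start with all probability mass on the seed set.
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        # Restart mass goes back to the seeds; the rest diffuses to neighbours.
        nxt = {n: (restart / len(seeds) if n in seeds else 0.0) for n in nodes}
        for n in nodes:
            share = (1.0 - restart) * p[n] / len(graph[n])
            for nb in graph[n]:
                nxt[nb] += share
        p = nxt
    return p

# Toy undirected gene graph as adjacency lists; seed the walk at gene "A".
graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
prox = rwr(graph, seeds={"A"})
```

The "dual-channel" variant in the manuscript would run separate walks for up- and down-regulation signals; this sketch shows only the single-channel core.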

6.
Perspect Health Inf Manag; 18(Spring): 1j, 2021.
Article in English | MEDLINE | ID: mdl-34035791

ABSTRACT

Background: Intervention planning to reduce 30-day readmission after acute myocardial infarction (AMI) in an environment of resource scarcity can be improved by a readmission prediction score. The aim of this study is to derive and validate a prediction model, based on routinely collected hospital data, for identifying risk factors for all-cause readmission within zero to 30 days post discharge from AMI. Methods: Our study includes 2,849 AMI patient records (January 2005 to December 2014) from a tertiary care facility in India. EMR data comprising ICD-10 diagnoses and admission, pathology, procedure and medication data are used for model building. Model performance is analyzed for different combinations of feature groups and for the diabetes sub-cohort. The derived models are evaluated to identify risk factors for readmission. Results: The derived model using all features has the highest discrimination in predicting readmission, with an AUC of 0.62 in internal validation using a 70/30 split for derivation and validation. For the sub-cohort of diabetes patients (n = 1,359), discrimination is slightly better, with an AUC of 0.66. Positively associated predictive variables include age group 80-90, medicine classes administered during the index admission (anti-ischemic drugs, alpha-1 blockers, xanthine oxidase inhibitors), and an additional procedure during the index admission (dialysis). Negatively associated predictive variables include patient demography (male gender) and medicine classes administered during the index admission (beta-blockers, anticoagulants, platelet inhibitors, anti-arrhythmics). Conclusions: Routinely collected data in the hospital's clinical and administrative data repository can identify patients at high risk of readmission following AMI, potentially improving the AMI readmission rate.


Subjects
Myocardial Infarction, Patient Readmission, Acute Disease, Adolescent, Adult, Aged, Aged, 80 and over, Child, Child, Preschool, Electronic Health Records, Female, Forecasting, Humans, India, Infant, International Classification of Diseases, Logistic Models, Male, Middle Aged, Retrospective Studies, Risk Assessment, Young Adult
7.
ACS Biomater Sci Eng; 6(5): 3197-3207, 2020 May 11.
Article in English | MEDLINE | ID: mdl-33463267

ABSTRACT

Wet spinning of silkworm silk has the potential to overcome the limitations of the natural spinning process, producing fibers with exceptional mechanical properties. However, the complexity of the extraction and spinning processes has meant that this potential has so far not been realized. The choice of silk processing parameters, including fiber degumming, dissolution, and concentration, is critical in producing a sufficiently viscous dope while avoiding silk's natural tendency to gel via self-assembly. This study utilized recently developed rapid Bayesian optimization to explore the impact of these variables on dope viscosity. By following the dope preparation conditions recommended by the algorithm, a 13% (w/v) silk dope was produced with a viscosity of 0.46 Pa·s, approximately five times higher than that of the dope obtained using traditional experimental design. The tensile strength, modulus, and toughness of fibers spun from this dope also improved, by factors of 2.20, 2.16, and 2.75, respectively. These results represent the outcome of just five sets of experimental trials focused solely on dope preparation. Given the number of parameters in the spinning and post-spinning processes, the use of Bayesian optimization represents an exciting opportunity to explore the multivariate wet spinning process and unlock the potential to produce wet-spun fibers with truly exceptional mechanical properties.
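The Bayesian optimization loop described above can be sketched in one dimension with a Gaussian-process (GP) surrogate and an upper-confidence-bound (UCB) acquisition rule. The objective below is a toy stand-in for "viscosity as a function of one processing parameter"; the kernel length scale, UCB weight, and all numbers are illustrative assumptions, not the study's model.

```python
import math

def rbf(a, b, length=0.3):
    """Squared-exponential (RBF) kernel for the GP surrogate."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    k = [rbf(x, xq) for x in xs]
    alpha = solve(K, ys)
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = max(rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)), 0.0)
    return mean, var

def objective(x):
    # Toy "viscosity" curve peaking at x = 0.6 (purely illustrative).
    return math.exp(-((x - 0.6) ** 2) / 0.02)

xs = [0.1, 0.9]                      # two initial trial conditions
ys = [objective(x) for x in xs]
grid = [i / 50 for i in range(51)]   # candidate parameter settings
for _ in range(6):                   # six sequential "experiments"
    best_x, best_u = grid[0], -float("inf")
    for x in grid:
        m, v = gp_posterior(xs, ys, x)
        u = m + 2.0 * math.sqrt(v)   # UCB: exploit the mean, explore the variance
        if u > best_u:
            best_x, best_u = x, u
    xs.append(best_x)
    ys.append(objective(best_x))
best = max(ys)
```

The UCB rule first samples the most uncertain region between the two initial trials, then concentrates near the emerging peak, mirroring how a handful of algorithm-recommended experiments can outperform a traditional grid design.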


Subjects
Fibroins, Silk, Algorithms, Animals, Bayes Theorem, Tensile Strength
8.
ACS Omega; 4(24): 20571-20578, 2019 Dec 10.
Article in English | MEDLINE | ID: mdl-31858042

ABSTRACT

The scale-up of laboratory procedures to industrial production is the main challenge standing between ideation and the successful introduction of novel materials into commercial products; retaining quality while ensuring high per-batch production yields is particularly difficult. Batch processing and other dynamic strategies that preserve product quality can be applied, but they typically involve a variety of experimental parameters and functions that are difficult to optimize because of interdependencies that are often antagonistic. Adaptive Bayesian optimization is demonstrated here as a valuable support tool for increasing both the per-batch yield and the quality of short polymer fibers produced by wet spinning and shear dispersion methods. Through this approach, it is shown that short-fiber dispersions with high yield and a specified, targeted fiber length distribution can be obtained at minimal optimization cost, starting from sub-optimal processing conditions and minimal prior knowledge. The Bayesian function optimization demonstrated here for batch processing could be applied to other dynamic scale-up methods as well as to cases presenting higher-dimensional challenges such as shape and structure optimization. This work shows the great potential of synergies between industrial processing, materials engineering, and machine learning perspectives.

9.
ACS Omega; 4(14): 15912-15922, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31592461

ABSTRACT

In materials science, the investigation of a large and complex experimental space is time-consuming and can bias experimenters toward excluding potential solutions in regions where little to no prior knowledge is available. This work presents the development of a highly hydrophobic material from an amphiphilic polymer through a novel, adaptive artificial intelligence approach. The hydrophobicity arises from the random packing of short polymer fibers into paper, a highly entropic, multistep process. Using Bayesian optimization, the algorithm is able to efficiently navigate the parameter space without bias, including areas that a human experimenter would not address. This produced additional knowledge that could be applied to the fabrication process, yielding a highly hydrophobic material (static water contact angle of 135°) from an amphiphilic polymer (contact angle of 90°) through a simple and scalable filtration-based method. This presents a potential pathway for surface modification using short polymer fibers to create fluorine-free hydrophobic surfaces on a larger scale.

10.
Sci Rep; 7(1): 5683, 2017 Jul 18.
Article in English | MEDLINE | ID: mdl-28720869

ABSTRACT

The discovery of processes for the synthesis of new materials involves many decisions about process design, operation, and material properties. Experimentation is crucial but as complexity increases, exploration of variables can become impractical using traditional combinatorial approaches. We describe an iterative method which uses machine learning to optimise process development, incorporating multiple qualitative and quantitative objectives. We demonstrate the method with a novel fluid processing platform for synthesis of short polymer fibers, and show how the synthesis process can be efficiently directed to achieve material and process objectives.

11.
J Med Internet Res; 18(12): e323, 2016 Dec 16.
Article in English | MEDLINE | ID: mdl-27986644

ABSTRACT

BACKGROUND: As more and more researchers turn to big data for new opportunities for biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. OBJECTIVE: To develop a set of guidelines on the use of machine learning predictive models within clinical settings, to ensure that the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. METHODS: A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. RESULTS: The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. CONCLUSIONS: A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.


Subjects
Biomedical Research/methods, Data Interpretation, Statistical, Machine Learning, Biomedical Research/standards, Humans, Interdisciplinary Studies, Models, Biological
12.
PLoS One; 10(5): e0125602, 2015.
Article in English | MEDLINE | ID: mdl-25938675

ABSTRACT

For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often lack up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, both in the states included in the derivation model (median correlation 0.88) and in those excluded from development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.


Subjects
Databases as Topic, Demography, Machine Learning, Public Health, Behavior, Humans, Population Surveillance, Prevalence, Reproducibility of Results, Risk Factors, Surveys and Questionnaires, United States
13.
J Epidemiol Community Health; 69(7): 693-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25805603

ABSTRACT

BACKGROUND: The WHO framework for non-communicable disease (NCD) describes risks and outcomes comprising the majority of the global burden of disease. These factors are complex and interact at biological, behavioural, environmental and policy levels, presenting challenges for population monitoring and intervention evaluation. This paper explores the utility of machine learning methods applied to population-level web search activity as a proxy for chronic disease risk factors. METHODS: Web activity output for each element of the WHO's Causes of NCD framework was used as a basis for identifying relevant web search activity from 2004 to 2013 for the USA. Multiple linear regression models with regularisation were used to generate predictive algorithms, mapping web search activity to Centers for Disease Control and Prevention (CDC) measured risk factor/disease prevalence. Predictions for subsequent target years not included in the model derivation were tested against CDC data from population surveys using Pearson correlation and Spearman's r. RESULTS: For 2011 and 2012, predicted prevalence was very strongly correlated with measured risk data, ranging from fruit and vegetable consumption (r=0.81; 95% CI 0.68 to 0.89) to alcohol consumption (r=0.96; 95% CI 0.93 to 0.98). The mean difference between predicted and measured prevalence by state ranged from 0.03 to 2.16. Spearman's r for state-wise predicted versus measured prevalence varied from 0.82 to 0.93. CONCLUSIONS: The high predictive validity of web search activity for NCD risk has the potential to provide real-time information on population risk during policy implementation and other population-level NCD prevention efforts.
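The Pearson correlation used above to compare predicted against measured prevalence can be sketched as follows; the five state-level values are hypothetical, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [18.2, 24.5, 30.1, 27.3, 21.0]  # hypothetical prevalence (%)
measured = [17.9, 25.0, 29.4, 28.1, 20.6]   # hypothetical survey values (%)
r = pearson_r(predicted, measured)
```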


Subjects
Chronic Disease/epidemiology, Consumer Health Information/trends, Internet/statistics & numerical data, Population Surveillance/methods, Consumer Health Information/methods, Humans, Regression Analysis, Risk Assessment/methods, Risk Assessment/statistics & numerical data, Search Engine/trends, United States/epidemiology, World Health Organization
14.
BMC Bioinformatics; 15: 425, 2014 Dec 30.
Article in English | MEDLINE | ID: mdl-25547173

ABSTRACT

BACKGROUND: Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type and risk prediction task. We contrast auto-extracted features with baselines generated from the Elixhauser comorbidities. RESULTS: Hospital medical records were transformed into event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, compared with baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6, and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from socio-demographic information and Elixhauser comorbidities over all 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs were: COPD-baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes-baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders-baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia-baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). CONCLUSIONS: The advantages of automatically extracting standard features from complex medical records, in a disease- and task-agnostic manner, were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have the potential to form the foundation of complex automated analytic tasks.
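A minimal sketch of the prediction model named above, logistic regression with an elastic-net (L1 + L2) penalty, fitted here by plain subgradient descent on a toy readmission-style dataset. The features, penalty weights, and step size are illustrative assumptions; a real pipeline would use a tuned solver.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_elastic_net(X, y, l1=0.01, l2=0.01, lr=0.1, epochs=2000):
    """Logistic regression with elastic-net penalty via full-batch subgradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj / n
            gb += err / n
        for j in range(len(w)):
            # Elastic net: L2 shrinkage plus the L1 subgradient (sign of w).
            gw[j] += l2 * w[j] + l1 * ((w[j] > 0) - (w[j] < 0))
            w[j] -= lr * gw[j]
        b -= lr * gb
    return w, b

def predict_proba(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Toy cohort: features are [prior admissions, comorbidity count]; 1 = readmitted.
X = [[0, 1], [1, 0], [0, 0], [3, 2], [4, 3], [2, 3]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_elastic_net(X, y)
```

The L1 term drives uninformative weights toward exactly zero (automatic feature selection across the thousands of auto-extracted features), while the L2 term stabilises correlated features.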


Subjects
Diabetes Mellitus/etiology, Mental Disorders/etiology, Pneumonia/etiology, Pulmonary Disease, Chronic Obstructive/etiology, Risk Assessment, Software, Aged, Area Under Curve, Comorbidity, Databases, Factual, Female, Hospitals, Humans, Logistic Models, Male, Models, Theoretical
15.
Aust Health Rev; 38(4): 377-82, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25001433

ABSTRACT

OBJECTIVE: Readmission rates are high following acute myocardial infarction (AMI), but risk stratification has proved difficult because known risk factors are only weakly predictive. In the present study, we applied hospital data to identify the risk of unplanned readmission following AMI hospitalisations. METHODS: The study included 1660 consecutive AMI admissions. Predictive models were derived from 1107 randomly selected records and tested on the remaining 553 records. The electronic medical record (EMR) model was compared with a seven-factor predictive score known as the HOSPITAL score and a model derived from the Elixhauser comorbidities. All models were evaluated for their ability to identify patients at high risk of 30-day ischaemic heart disease readmission and those at risk of all-cause readmission within 12 months following the initial AMI hospitalisation. RESULTS: The EMR model had higher discrimination than the other models in predicting ischaemic heart disease readmissions (area under the curve (AUC) 0.78; 95% confidence interval (CI) 0.71-0.85 for 30-day readmission). The positive predictive value was significantly higher with the EMR model, which identified cohorts up to threefold more likely to be readmitted. Factors associated with readmission included emergency department attendances, cardiac diagnoses and procedures, renal impairment and electrolyte disturbances. The EMR model also performed better than the other models (AUC 0.72; 95% CI 0.66-0.78), and with greater positive predictive value, in identifying 12-month risk of all-cause readmission. CONCLUSIONS: Routine hospital data can help identify patients at high risk of readmission following AMI. This could lead to decreased readmission rates by identifying patients suitable for targeted clinical interventions.
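The positive predictive value (PPV) comparison above reduces to a simple ratio over the flagged cohort; the confusion counts below are hypothetical and chosen only to reproduce a threefold difference like the one described.

```python
# PPV: among patients a model flags as high risk, the fraction
# actually readmitted. Counts are illustrative, not the study's data.
def ppv(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

emr_model = ppv(true_pos=45, false_pos=30)       # flags 75, 45 readmitted
hospital_score = ppv(true_pos=20, false_pos=80)  # flags 100, 20 readmitted
lift = emr_model / hospital_score                # EMR cohort ~3x more likely
```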


Subjects
Myocardial Infarction, Patient Readmission/statistics & numerical data, Adult, Aged, Aged, 80 and over, Databases, Factual, Electronic Health Records, Female, Humans, Logistic Models, Male, Middle Aged, Retrospective Studies, Tertiary Care Centers, Victoria, Young Adult