1.
PLoS One ; 16(7): e0253696, 2021.
Article in English | MEDLINE | ID: mdl-34242241

ABSTRACT

OBJECTIVE: The association of body mass index (BMI) with all-cause mortality is controversial and frequently referred to as a paradox; whether it is driven by metabolic factors or by statistical biases remains debated. We assessed the association of BMI with all-cause mortality across a wide range of comorbidities and baseline mortality risk. METHODS: Retrospective cohort study of Olmsted County residents with at least one BMI measurement between 2000 and 2005, clinical data in the electronic health record, and a minimum of 8 years of follow-up or death within that time. The cohort was categorized by baseline mortality risk: Low, Medium, Medium-high, High, and Very-high. All-cause mortality was assessed for BMI intervals of 5 and 0.5 kg/m2. RESULTS: Of 39,739 subjects (average age 52.6 years, range 18-89; 38.1% male), 11.86% died during the 8-year follow-up. The 8-year all-cause mortality risk had a "U" shape with a flat nadir in all risk groups. Extreme BMI carried higher risk (BMI <15 = 36.4%, 15 to <20 = 15.4%, and ≥45 = 13.7%), while intermediate BMI categories showed a plateau between 10.6% and 12.5%. The increase in risk attributable to baseline risk and comorbidities was more pronounced than that associated with increasing BMI within the same risk group. CONCLUSIONS: The association between BMI and all-cause mortality is complex once comorbidities and baseline mortality risk are taken into account. In general, comorbidities are the better predictors of mortality risk except at extreme BMIs; in patients with no or few comorbidities, BMI better defines mortality risk. Aggressive management of comorbidities may improve survival for patients with body mass index between normal and moderate obesity.
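The interval scheme described in the abstract (fixed 5 kg/m2 bins with open-ended extreme categories) can be sketched as a simple binning step. This is an illustrative reconstruction, not the study's code; the mini-cohort data below are hypothetical, and only the interval edges are taken from the reported categories.

```python
from collections import defaultdict

def bmi_category(bmi, width=5.0, lower=15.0, upper=45.0):
    """Assign a BMI (kg/m^2) to a fixed-width interval, with open-ended
    extreme categories (<15 and >=45) as reported in the abstract."""
    if bmi < lower:
        return "<15"
    if bmi >= upper:
        return ">=45"
    lo = lower + width * int((bmi - lower) // width)
    return f"{lo:g} to <{lo + width:g}"

# Hypothetical mini-cohort: (BMI, died within 8-year follow-up)
cohort = [(14.2, True), (22.5, False), (31.0, False), (47.8, True), (22.9, False)]

counts = defaultdict(lambda: [0, 0])  # category -> [deaths, total]
for bmi, died in cohort:
    cell = counts[bmi_category(bmi)]
    cell[0] += int(died)
    cell[1] += 1
# Crude per-category mortality proportion (the abstract's percentages)
risk = {cat: deaths / total for cat, (deaths, total) in counts.items()}
```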


Subject(s)
Body Mass Index , Comorbidity , Mortality , Adolescent , Adult , Aged , Aged, 80 and over , Electronic Health Records/statistics & numerical data , Female , Follow-Up Studies , Humans , Male , Middle Aged , Minnesota/epidemiology , Retrospective Studies , Risk Assessment/methods , Risk Assessment/statistics & numerical data , Risk Factors , Young Adult
2.
IEEE J Biomed Health Inform ; 25(7): 2476-2486, 2021 07.
Article in English | MEDLINE | ID: mdl-34129510

ABSTRACT

Diseases can show different courses of progression even when patients share the same risk factors. Recent studies have revealed that trajectories, the order in which diseases manifest throughout life, can be predictive of the course of progression. In this study, we propose a novel computational method for learning disease trajectories from EHR data. The proposed method consists of three parts: first, an algorithm for extracting trajectories from EHR data; second, three criteria for filtering trajectories; and third, a likelihood function for assessing the risk of developing a set of outcomes given a trajectory set. We applied our method to extract a set of disease trajectories from Mayo Clinic EHR data and evaluated it internally based on log-likelihood, which can be interpreted as the trajectories' ability to explain the observed (partial) disease progressions. We then externally evaluated the trajectories on EHR data from an independent health system, M Health Fairview. The proposed algorithm extracted a comprehensive set of disease trajectories that explain the observed outcomes substantially better than competing methods, and the proposed filtering criteria selected a small subset of highly interpretable trajectories at only a minimal (relative 5%) loss in the ability to explain disease progression in both the internal and external validations.
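A toy stand-in for the matching step between an observed (partial) progression and a candidate trajectory, assuming a trajectory is an ordered diagnosis sequence; the paper's extraction algorithm and likelihood function are more elaborate than this sketch.

```python
def matches_trajectory(history, trajectory):
    """True if the patient's time-ordered diagnosis history is a
    subsequence of the trajectory, i.e. consistent with a (possibly
    partial) progression along it."""
    remaining = iter(trajectory)
    return all(dx in remaining for dx in history)
```

Here a history `("HLD", "IFG")` is consistent with a trajectory HLD -> HTN -> IFG -> T2DM, while the reversed order is not.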


Subject(s)
Algorithms , Electronic Health Records , Humans
3.
BMC Med Inform Decis Mak ; 20(1): 6, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31914992

ABSTRACT

BACKGROUND: The ubiquity of electronic health records (EHR) offers an opportunity to observe trajectories of laboratory results and vital signs over long periods of time. This study assessed the value of risk factor trajectories available in the electronic health record for predicting incident type 2 diabetes. STUDY DESIGN AND METHODS: Analysis was based on a large 13-year retrospective cohort of 71,545 adult, non-diabetic patients with baseline in 2005 and a median follow-up time of 8 years. The trajectories of fasting plasma glucose, lipids, BMI and blood pressure were computed over three time frames (2000-2001, 2002-2003, 2004) before baseline. A novel method, Cumulative Exposure (CE), was developed and evaluated using Cox proportional hazards regression to assess the risk of incident type 2 diabetes. We used the Framingham Diabetes Risk Scoring (FDRS) Model as a control. RESULTS: The new model outperformed the FDRS Model (0.802 vs. 0.660; p < 2e-16). Cumulative exposure measured over different periods showed that even short episodes of hyperglycemia increase the risk of developing diabetes. Returning to normoglycemia moderates the risk, but does not fully eliminate it. The longer an individual maintains glycemic control after a hyperglycemic episode, the lower the subsequent risk of diabetes. CONCLUSION: Incorporating risk factor trajectories substantially increases the ability of clinical decision support risk models to predict onset of type 2 diabetes and provides information about how risk changes over time.
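One plausible reading of the Cumulative Exposure idea is the time-weighted area of a risk factor above a clinical threshold. The abstract does not give the exact definition, so the trapezoidal formulation and the fasting-glucose threshold below are illustrative assumptions, not the paper's method.

```python
def cumulative_exposure(times, values, threshold=100.0):
    """Trapezoidal area of a risk-factor trajectory above a threshold
    (e.g. fasting plasma glucose above 100 mg/dL); an illustrative
    stand-in for the paper's Cumulative Exposure measure."""
    total = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        e0, e1 = max(v0 - threshold, 0.0), max(v1 - threshold, 0.0)
        total += 0.5 * (e0 + e1) * (t1 - t0)
    return total
```

Under this reading, even a brief excursion into hyperglycemia leaves a permanent positive contribution, which mirrors the finding that returning to normoglycemia moderates but does not eliminate the risk.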


Subject(s)
Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/prevention & control , Adult , Blood Glucose , Female , Humans , Male , Middle Aged , Prognosis , Proportional Hazards Models , Retrospective Studies , Risk Factors
4.
Stud Health Technol Inform ; 264: 288-292, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31437931

ABSTRACT

Different analytic techniques operate optimally with different types of data. As the use of EHR-based analytics expands to newer tasks, data will have to be transformed into different representations, so the tasks can be optimally solved. We classified representations into broad categories based on their characteristics, and proposed a new knowledge-driven representation for clinical data mining as well as trajectory mining, called Severity Encoding Variables (SEVs). Additionally, we studied which characteristics make representations most suitable for particular clinical analytics tasks including trajectory mining. Our evaluation shows that, for regression, most data representations performed similarly, with SEV achieving a slight (albeit statistically significant) advantage. For patients at high risk of diabetes, it outperformed the competing representation by (relative) 20%. For association mining, SEV achieved the highest performance. Its ability to constrain the search space of patterns through clinical knowledge was key to its success.
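The SEV idea of encoding a raw measurement as an ordinal severity level against clinical cut points can be sketched as follows. The cut points shown (fasting plasma glucose in mg/dL) are standard clinical thresholds used here for illustration, not necessarily those of the paper.

```python
import bisect

def severity_encode(value, cutpoints):
    """Ordinal severity level: the number of clinical cut points at or
    below the measured value (0 = normal range)."""
    return bisect.bisect_right(cutpoints, value)

# Illustrative cut points: normal / impaired fasting glucose / diabetic range
FPG_CUTS = [100.0, 126.0]
```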


Subject(s)
Data Mining , Electronic Health Records
5.
Transl Psychiatry ; 9(1): 63, 2019 02 04.
Article in English | MEDLINE | ID: mdl-30718453

ABSTRACT

In recent years, the emerging field of computational psychiatry has impelled the use of machine learning models as a means to further understand the pathogenesis of multiple clinical disorders. In this paper, we discuss how autism spectrum disorder (ASD) was and continues to be diagnosed in the context of its complex neurodevelopmental heterogeneity. We review machine learning approaches to streamline ASD's diagnostic methods, to discern similarities and differences from comorbid diagnoses, and to follow developmentally variable outcomes. Both supervised machine learning models for classification outcomes and unsupervised approaches to identify new dimensions and subgroups are discussed. We provide an illustrative example of how computational analytic methods and a longitudinal design can improve our inferential ability to detect early dysfunctional behaviors that may or may not reach threshold levels for formal diagnoses. Specifically, an unsupervised machine learning approach of anomaly detection is used to illustrate how community samples may be utilized to investigate early autism risk, multidimensional features, and outcome variables. Because ASD symptoms and challenges are not static within individuals across development, computational approaches present a promising method to elucidate etiological subgroups contributing to phenotype, alternative developmental courses, and interactions with biomedical comorbidities, and to predict potential responses to therapeutic interventions.


Subject(s)
Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/physiopathology , Comorbidity , Machine Learning , Models, Theoretical , Autism Spectrum Disorder/epidemiology , Humans
6.
IEEE Int Conf Healthc Inform ; 2017: 374-379, 2017 Aug.
Article in English | MEDLINE | ID: mdl-29862384

ABSTRACT

The true onset time of a disease, particularly a slow-onset disease like Type 2 diabetes mellitus (T2DM), is rarely observable in electronic health records (EHRs). However, it is critical for analysis of time to events and for studying sequences of diseases. The aim of this study is to demonstrate a method for estimating the onset time of such diseases from intermittently observable laboratory results in the specific context of T2DM. A retrospective observational study design is used. A cohort of 5,874 non-diabetic patients from a large healthcare system in the Upper Midwest United States was constructed with a three-year follow-up period. The HbA1c levels of each patient were collected from the earliest to the latest follow-up. We modeled the patients' HbA1c trajectories through Bayesian networks to estimate the onset time of diabetes. Due to non-random censoring and interventions unobservable from EHR data (such as lifestyle changes), naive modeling of HbA1c through linear regression, or of time-to-event through a proportional hazards model, leads to a clinically infeasible model with no or limited ability to predict the onset time of diabetes. Our model is consistent with clinical knowledge and estimated the onset of diabetes with less than a six-month error for almost half the patients for whom the onset time could be clinically ascertained. To our knowledge, this is the first study to model long-term HbA1c progression in non-diabetic patients and estimate the onset time of diabetes.
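The estimation target, the time at which an intermittently observed HbA1c trajectory crosses the diabetic threshold, can be illustrated with naive linear interpolation between visits. Note that the abstract argues exactly this kind of naive modeling is biased by non-random censoring, which is why the study uses Bayesian networks instead; the sketch below only clarifies what "onset time" means, using the conventional 6.5% HbA1c threshold.

```python
def interpolated_onset(times, hba1c, threshold=6.5):
    """First time (in the units of `times`) at which linearly interpolated
    HbA1c crosses the diabetic threshold, or None if it never does.
    A naive baseline for illustration only, not the study's model."""
    for (t0, v0), (t1, v1) in zip(zip(times, hba1c), zip(times[1:], hba1c[1:])):
        if v0 < threshold <= v1:
            return t0 + (threshold - v0) / (v1 - v0) * (t1 - t0)
    return None
```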

7.
Big Data ; 4(1): 25-30, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-27158565

ABSTRACT

Disease progression models, statistical models that assess a patient's risk of diabetes progression, are popular tools in clinical practice for prevention and management of chronic conditions. Most, if not all, models currently in use are based on gold-standard clinical trial data. The relatively small sample sizes available from clinical trials limit these models to considering only the patient's state at the time of the assessment, ignoring the trajectory, the sequence of events, that led up to that state. Recent advances in the adoption of electronic health record (EHR) systems and the large sample sizes they contain have paved the way to build disease progression models that can take trajectories into account, leading to increasingly accurate and personalized assessment. To address these problems, we present a novel method to observe trajectories directly. We demonstrate the effectiveness of the proposed method by studying type 2 diabetes mellitus (T2DM) trajectories. Specifically, using EHR data for a large population-based cohort, we identified a typical trajectory that most people follow, which is a sequence of diseases from hyperlipidemia (HLD) to hypertension (HTN), impaired fasting glucose (IFG), and T2DM. In addition, we also show that patients who follow different trajectories can face significantly increased or decreased risk.
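Observing trajectories directly amounts, at its simplest, to ordering each patient's diagnoses by onset date and counting the resulting sequences; a minimal sketch with hypothetical records (the years and the third patient are made up, only the HLD -> HTN -> IFG -> T2DM order comes from the abstract):

```python
from collections import Counter

def extract_trajectories(patients):
    """patients: {patient_id: [(onset_year, diagnosis), ...]}.
    Returns a Counter over each patient's diagnosis sequence
    ordered by onset."""
    trajectories = Counter()
    for events in patients.values():
        trajectories[tuple(dx for _, dx in sorted(events))] += 1
    return trajectories

# Hypothetical records reproducing the typical trajectory
patients = {
    "a": [(2001, "HLD"), (2004, "HTN"), (2007, "IFG"), (2010, "T2DM")],
    "b": [(2003, "HLD"), (2005, "HTN"), (2009, "IFG"), (2012, "T2DM")],
    "c": [(2002, "HTN"), (2006, "T2DM")],
}
most_common_trajectory = extract_trajectories(patients).most_common(1)[0][0]
```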

8.
Biol Blood Marrow Transplant ; 22(8): 1383-1390, 2016 08.
Article in English | MEDLINE | ID: mdl-27155584

ABSTRACT

Pulmonary complications due to infection and idiopathic pneumonia syndrome (IPS), a noninfectious lung injury in hematopoietic stem cell transplant (HSCT) recipients, are frequent causes of transplantation-related mortality and morbidity. Our objective was to characterize the global bronchoalveolar lavage fluid (BALF) protein expression of IPS to identify proteins and pathways that differentiate IPS from infectious lung injury after HSCT. We studied 30 BALF samples from patients who developed lung injury within 180 days of HSCT or cellular therapy transfusion (natural killer cell transfusion). Adult subjects were classified as having IPS or infectious lung injury by the criteria outlined in the 2011 American Thoracic Society statement. BALF was depleted of hemoglobin and 14 high-abundance proteins, treated with trypsin, and labeled with isobaric tags for relative and absolute quantitation (iTRAQ) 8-plex reagent for two-dimensional capillary liquid chromatography (LC) and data-dependent peptide tandem mass spectrometry (MS/MS) on an Orbitrap Velos system in higher-energy collision-induced dissociation activation mode. Protein identification employed a target-decoy strategy using ProteinPilot within Galaxy P. The relative protein abundance was determined with reference to a global internal standard consisting of pooled BALF from patients with respiratory failure and no history of HSCT. A variance-weighted t-test controlling for a false discovery rate of ≤5% was used to identify proteins that showed differential expression between IPS and infectious lung injury. The biological relevance of these proteins was determined by using gene ontology enrichment analysis and Ingenuity Pathway Analysis. We characterized 12 IPS and 18 infectious lung injury BALF samples. In the 5 iTRAQ LC-MS/MS experiments, 845, 735, 532, 615, and 594 proteins were identified, for a total of 1125 unique proteins and 368 proteins common across all 5 LC-MS/MS experiments.
When comparing IPS to infectious lung injury, 96 proteins were differentially expressed. Gene ontology enrichment analysis showed that these proteins participate in biological processes involved in the development of lung injury after HSCT. These include acute phase response signaling, the complement system, the coagulation system, liver X receptor (LXR)/retinoid X receptor (RXR), and farnesoid X receptor (FXR)/RXR modulation. We identified 2 canonical pathways modulated by TNF-α, FXR/RXR activation, and IL2 signaling in macrophages. The proteins also mapped to blood coagulation, fibrinolysis, and wound healing, processes that participate in organ repair. Cell movement was identified as significantly over-represented by proteins with differential expression between IPS and infection. In conclusion, the BALF protein expression in IPS differed significantly from that in infectious lung injury in HSCT recipients. These differences provide insights into mechanisms that are activated in lung injury in HSCT recipients and suggest potential therapeutic targets to augment lung repair.
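The abstract controls the false discovery rate at ≤5% but does not name the multiple-testing procedure; the standard Benjamini-Hochberg step-up, sketched below on made-up p-values, illustrates the idea.

```python
def bh_significant(pvals, alpha=0.05):
    """Indices of p-values declared significant by the Benjamini-Hochberg
    step-up procedure at FDR level alpha: find the largest rank k with
    p_(k) <= alpha * k / m, then reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= alpha * rank / m:
            cutoff = rank  # largest rank passing the step-up threshold
    return sorted(order[:cutoff])
```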


Subject(s)
Hematopoietic Stem Cell Transplantation/adverse effects , Lung Injury/etiology , Pneumonia/etiology , Proteome/analysis , Adult , Aged , Bronchoalveolar Lavage Fluid/chemistry , Gene Expression Profiling , Gene Ontology , Humans , Middle Aged , Proteomics/methods
9.
PLoS One ; 9(10): e109713, 2014.
Article in English | MEDLINE | ID: mdl-25290099

ABSTRACT

Acute Respiratory Distress Syndrome (ARDS) continues to have a high mortality rate. Currently, there are no biomarkers that provide reliable prognostic information to guide clinical management or stratify risk among clinical trial participants. The objective of this study was to probe the bronchoalveolar lavage fluid (BALF) proteome to identify proteins that differentiate survivors from non-survivors of ARDS. Patients were divided into early-phase (1 to 7 days) and late-phase (8 to 35 days) groups based on time after initiation of mechanical ventilation for ARDS (Day 1). Isobaric tags for relative and absolute quantitation (iTRAQ) with LC-MS/MS was performed on pooled BALF enriched for medium- and low-abundance proteins from early-phase survivors (n = 7), early-phase non-survivors (n = 8), and late-phase survivors (n = 7). Of the 724 proteins identified at a global false discovery rate of 1%, quantitative information was available for 499. In early-phase ARDS, proteins more abundant in survivors mapped to ontologies indicating a coordinated compensatory response to injury and stress. These included coagulation and fibrinolysis; immune system activation; and cation and iron homeostasis. Proteins more abundant in early-phase non-survivors participate in carbohydrate catabolism and collagen synthesis, with no activation of compensatory responses. The compensatory immune activation and ion homeostatic response seen in early-phase survivors transitioned to cell migration and actin filament-based processes in late-phase survivors, revealing dynamic changes in the BALF proteome as the lung heals. Early-phase proteins differentiating survivors from non-survivors are candidate biomarkers for predicting survival in ARDS.


Subject(s)
Proteome/metabolism , Respiratory Distress Syndrome/diagnosis , Respiratory Distress Syndrome/metabolism , Adult , Aged , Biomarkers/metabolism , Bronchoalveolar Lavage Fluid/chemistry , Chromatography, Liquid , Female , Humans , Male , Metabolic Networks and Pathways/genetics , Middle Aged , Molecular Sequence Annotation , Predictive Value of Tests , Prognosis , Proteome/genetics , Proteomics , Respiration, Artificial , Respiratory Distress Syndrome/mortality , Respiratory Distress Syndrome/therapy , Survival Analysis , Survivors , Tandem Mass Spectrometry , Time Factors