Results 1 - 20 of 24
1.
Article in English | MEDLINE | ID: mdl-38533846

ABSTRACT

Background: Pregnancy-related cardiovascular (CV) conditions, including hypertensive disorders of pregnancy (HDP) and gestational diabetes (GDM), are associated with increased long-term CV risk. Methods: This retrospective cohort study defined the prevalence of HDP and GDM within a large, academic health system in the southeast United States between 2012 and 2015 and described health care utilization and routine CV screening up to 1 year following delivery among those with pregnancy-related CV conditions. Rates of follow-up visits and blood pressure, hemoglobin A1c (HbA1c), and lipid screening in the first postpartum year were compared by provider type and pregnancy-related CV condition. Results: Of the 6027 deliveries included, 20% were complicated by HDP and/or GDM. Rates of pre-pregnancy CV risk factors were high, with a significantly higher proportion of pre-pregnancy obesity among women with HDP than among those with uncomplicated pregnancies. Those with both HDP and GDM had the highest rates of follow-up by 1 year postpartum, yet only half of those with any pregnancy-related CV condition had any follow-up visit after 12 weeks. Although most (70%) of those with HDP had postpartum blood pressure screening, less than one-third of those with GDM had a repeat HbA1c by 12 months. Overall, postpartum lipid screening was rare (<20%). Conclusion: There is a high burden of pregnancy-related CV conditions in a large U.S. academic health system. Although overall rates of follow-up in the early postpartum period were high, gaps in longitudinal follow-up exist. Low rates of CV risk factor follow-up at 1 year indicate a missed opportunity for early CV prevention.

2.
Article in English | MEDLINE | ID: mdl-37711220

ABSTRACT

Background: JAK1 is a signaling molecule downstream of cytokine receptors, including IL-4 receptor α. Abrocitinib is an oral JAK1 inhibitor; it is a safe and effective US Food and Drug Administration-approved treatment for adults with moderate-to-severe atopic dermatitis. Objective: Our objective was to investigate the effect of abrocitinib on basophil activation and T-cell activation in patients with peanut allergy to determine the potential for use of JAK1 inhibitors as a monotherapy or an adjuvant to peanut oral immunotherapy. Methods: Basophil activation in whole blood was measured by detection of CD63 expression using flow cytometry. Activation of CD4+ effector and regulatory T cells was determined by the upregulation of CD154 and CD137, respectively, on anti-CD3/CD28- or peanut-stimulated PBMCs. For the quantification of peanut-induced cytokines, PBMCs were stimulated with peanut for 5 days before harvesting supernatant. Results: Abrocitinib decreased the allergen-specific activation of basophils in response to peanut. We showed suppression of effector T-cell activation when stimulated by CD3/CD28 beads in the presence of 10 ng of abrocitinib, whereas activation of regulatory T-cell populations was preserved in the presence of abrocitinib. Abrocitinib induced statistically significant dose-dependent inhibition in IL-5, IL-13, IL-10, IL-9, and TNF-α in the presence of peanut stimulation. Conclusion: These results support our hypothesis that JAK1 inhibition decreases basophil activation and TH2 cytokine signaling, reducing in vitro allergic responses in subjects with peanut allergy. Abrocitinib may be an effective adjunctive immune modulator in conjunction with peanut oral immunotherapy or as a monotherapy for individuals with food allergy.

3.
J Urban Health ; 99(6): 984-997, 2022 12.
Article in English | MEDLINE | ID: mdl-36367672

ABSTRACT

There is tremendous interest in understanding how neighborhoods impact health by linking extant social and environmental drivers of health (SDOH) data with electronic health record (EHR) data. Studies quantifying such associations often use static neighborhood measures. Little research examines the impact of gentrification-a measure of neighborhood change-on the health of long-term neighborhood residents using EHR data, which may have a more generalizable population than traditional approaches. We quantified associations between gentrification and health and healthcare utilization by linking longitudinal socioeconomic data from the American Community Survey with EHR data across two health systems accessed by long-term residents of Durham County, NC, from 2007 to 2017. Census block group-level neighborhoods were eligible to be gentrified if they had low socioeconomic status relative to the county average. Gentrification was defined using socioeconomic data from 2006 to 2010 and 2011-2015, with the Steinmetz-Wood definition. Multivariable logistic and Poisson regression models estimated associations between gentrification and development of health indicators (cardiovascular disease, hypertension, diabetes, obesity, asthma, depression) or healthcare encounters (emergency department [ED], inpatient, or outpatient). Sensitivity analyses examined two alternative gentrification measures. Of the 99 block groups within the city of Durham, 28 were eligible (N = 10,807; median age = 42; 83% Black; 55% female) and 5 gentrified. Individuals in gentrifying neighborhoods had lower odds of obesity (odds ratio [OR] = 0.89; 95% confidence interval [CI]: 0.81-0.99), higher odds of an ED encounter (OR = 1.10; 95% CI: 1.01-1.20), and lower risk for outpatient encounters (incidence rate ratio = 0.93; 95% CI: 0.87-1.00) compared with non-gentrifying neighborhoods. The association between gentrification and health and healthcare utilization was sensitive to gentrification definition.
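The adjusted odds ratios above come from multivariable logistic regression, but the underlying quantity can be illustrated with an unadjusted 2×2 calculation. A minimal sketch in Python, using entirely hypothetical counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: obesity in gentrifying vs non-gentrifying neighborhoods
or_, lo, hi = odds_ratio_ci(120, 380, 1400, 3900)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that crosses 1.0, as in this toy example, would indicate the unadjusted association is not statistically significant at the 5% level; the study's reported intervals come from adjusted models.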


Subject(s)
Residence Characteristics , Residential Segregation , Humans , Female , Adult , Male , Patient Acceptance of Health Care , Odds Ratio , Obesity
4.
JAMA Netw Open ; 4(3): e213460, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33779743

ABSTRACT

Importance: Comparisons of antimicrobial use among hospitals are difficult to interpret owing to variations in patient case mix. Risk-adjustment strategies incorporating larger numbers of variables have been proposed as a method to improve comparisons for antimicrobial stewardship assessments. Objective: To evaluate whether variables of varying complexity and feasibility of measurement, derived retrospectively from the electronic health records, accurately identify inpatient antimicrobial use. Design, Setting, and Participants: Retrospective cohort study, using a 2-stage random forests machine learning modeling analysis of electronic health record data. Data were split into training and testing sets to measure model performance using area under the curve and absolute error. All adult and pediatric inpatient encounters from October 1, 2015, to September 30, 2017, at 2 community hospitals and 1 academic medical center in the Duke University Health System were analyzed. A total of 204 candidate variables were categorized into 4 tiers based on feasibility of measurement from the electronic health records. Main Outcomes and Measures: Antimicrobial exposure was measured at the encounter level in 2 ways: binary (ever or never) and number of days of therapy. Analyses were stratified by age (pediatric or adult), unit type, and antibiotic group. Results: The data set included 170 294 encounters and 204 candidate variables from 3 hospitals during the 3-year study period. Antimicrobial exposure occurred in 80 190 encounters (47%); 64 998 (38%) received 1 to 6 days of therapy, and 15 192 (9%) received 7 or more days of therapy. Two-stage models identified antimicrobial use with high fidelity (mean area under the curve, 0.85; mean absolute error, 1.0 days of therapy). Addition of more complex variables increased accuracy, with the largest improvements occurring with inclusion of diagnosis information. Accuracy varied based on location and antibiotic group. Models underestimated the number of days of therapy of encounters with long lengths of stay. Conclusions and Relevance: Models using variables derived from electronic health records identified antimicrobial exposure accurately. Future risk-adjustment strategies incorporating encounter-level information may make comparisons of antimicrobial use more meaningful for hospital antimicrobial stewardship assessments.
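The mean area under the curve of 0.85 reported above has a useful rank interpretation: it is the probability that a randomly chosen antimicrobial-exposed encounter receives a higher predicted score than a randomly chosen unexposed one. A minimal sketch of that computation, with made-up labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs in which the positive
    outranks the negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]          # 1 = antimicrobial exposure
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]  # hypothetical model scores
print(auc(labels, scores))
```

Production evaluations would use a library routine (e.g., an ROC implementation in a statistics package), but the pairwise definition is what the statistic means.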


Subject(s)
Anti-Bacterial Agents/pharmacology , Antimicrobial Stewardship/methods , Electronic Health Records/statistics & numerical data , Inpatients , Machine Learning , Risk Assessment/methods , Adolescent , Adult , Aged , Child , Child, Preschool , Female , Follow-Up Studies , Humans , Infant , Male , Middle Aged , Retrospective Studies , Young Adult
5.
Prev Med Rep ; 24: 101615, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34976671

ABSTRACT

Data on patterns of weight change among adults with overweight or obesity are limited. We aimed to examine patterns of weight change and associated hospitalizations in a large health system, and to develop a model to predict 2-year significant weight gain. Data from the Duke University Health System were abstracted from 1/1/13 to 12/31/16 on patients with BMI ≥ 25 kg/m2 in 2014. A regression model was developed to predict patients who would increase their weight by 10% within 2 years. We estimated the association between weight change category and all-cause hospitalization using Cox proportional hazards models. Of the 37,253 patients in our cohort, 59% had stable weight over 2 years, while 24% gained ≥ 5% weight and 17% lost ≥ 5% weight. Our predictive model had reasonable discriminatory capacity to predict which individuals would gain ≥ 10% weight over 2 years (AUC 0.73). Compared with stable weight, the risk of hospitalization was increased by 37% for individuals with > 10% weight loss [adj. HR (95% CI): 1.37 (1.25,1.5)], by 30% for those with > 10% weight gain [adj. HR (95% CI): 1.3 (1.19,1.42)], by 18% for those with 5-10% weight loss [adj. HR (95% CI): 1.18 (1.09,1.28)], and by 10% for those with 5-10% weight gain [adj. HR (95% CI): 1.1 (1.02,1.19)]. In this examination of a large health system, significant weight gain or loss of > 10% was associated with increased all-cause hospitalization over 2 years compared with stable weight. This analysis adds to the increasing observational evidence that weight stability may be a key health driver.
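The weight-change strata above (stable, 5-10% gain/loss, >10% gain/loss) can be sketched as a simple classifier; the exact boundary handling here is an assumption, not the paper's definition:

```python
def weight_change_category(baseline_kg, followup_kg):
    """Classify 2-year weight change relative to baseline into the
    five strata used in the hazard-ratio comparison. Boundary
    conventions (e.g., exactly 10% counted as the larger stratum)
    are illustrative."""
    pct = 100.0 * (followup_kg - baseline_kg) / baseline_kg
    if pct >= 10:
        return ">10% gain"
    if pct >= 5:
        return "5-10% gain"
    if pct <= -10:
        return ">10% loss"
    if pct <= -5:
        return "5-10% loss"
    return "stable"

print(weight_change_category(90, 100))  # +11.1% of baseline
```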

6.
Am J Prev Med ; 58(6): 817-824, 2020 06.
Article in English | MEDLINE | ID: mdl-32444000

ABSTRACT

INTRODUCTION: Both medication and surgical interventions can be used to treat obesity, yet their use and effectiveness in routine clinical practice are not clear. This study sought to characterize the prevalence and management of obesity within a large U.S. academic medical center. METHODS: All patients aged ≥18 years who were seen in a primary care clinic within the Duke Health System between 2013 and 2016 were included. Patients were categorized according to baseline BMI as underweight or normal weight (<25 kg/m2), overweight (25-29.9 kg/m2), Class I obesity (30-34.9 kg/m2), Class II obesity (35-39.9 kg/m2), or Class III obesity (≥40 kg/m2). Baseline characteristics and use of weight loss medication were assessed by BMI category. Predicted change in BMI was modeled over 3 years. All data were analyzed between 2017 and 2018. RESULTS: Of the 173,462 included patients, most were overweight (32%) or obese (40%). Overall, <1% (n=295) of obese patients were prescribed medication for weight loss or underwent bariatric surgery within the 3-year study period. Most patients (70%) had no change in BMI class at 3 years. CONCLUSIONS: Despite a high prevalence of obesity within primary care clinics of a large U.S. academic health center, the use of pharmacologic and surgical therapies was low, and most patients had no weight change over 3 years. This highlights the significant need for improvement in obesity care at a health system level.


Subject(s)
Academic Medical Centers , Anti-Obesity Agents/therapeutic use , Body Mass Index , Obesity , Orlistat/therapeutic use , Primary Health Care , Comorbidity , Female , Humans , Longitudinal Studies , Male , Middle Aged , Obesity/drug therapy , Obesity/epidemiology , Obesity/surgery , Prevalence , Retrospective Studies , United States/epidemiology
7.
MDM Policy Pract ; 5(1): 2381468319899663, 2020.
Article in English | MEDLINE | ID: mdl-31976373

ABSTRACT

Background. Identification of patients at risk of deteriorating during their hospitalization is an important concern. However, many off-the-shelf scores have poor in-center performance. In this article, we report our experience developing, implementing, and evaluating an in-hospital score for deterioration. Methods. We abstracted 3 years of data (2014-2016) and identified patients on medical wards who died or were transferred to the intensive care unit. We developed a time-varying risk model and then implemented the model over a 10-week period to assess prospective predictive performance. We compared performance to our currently used tool, the National Early Warning Score. In order to aid clinical decision making, we transformed the quantitative score into a three-level clinical decision support tool. Results. The developed risk score had an average area under the curve of 0.814 (95% confidence interval = 0.79-0.83) versus 0.740 (95% confidence interval = 0.72-0.76) for the National Early Warning Score. We found the proposed score was able to respond to acute changes in patients' clinical status. Upon implementing the score, we were able to achieve the desired positive predictive value but needed to retune the thresholds to obtain the desired sensitivity. Discussion. This work illustrates the potential for academic medical centers to build, refine, and implement risk models that are targeted to their patient population and workflow.
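The transformation of the quantitative score into a three-level decision support tool amounts to applying two tunable thresholds; a minimal sketch (threshold values are hypothetical):

```python
def decision_level(risk, low_cut, high_cut):
    """Map a continuous deterioration risk to a three-level clinical
    decision support category. The cutoffs are the tunable knobs
    described above: raising high_cut trades sensitivity for
    positive predictive value."""
    if risk >= high_cut:
        return "high"
    if risk >= low_cut:
        return "medium"
    return "low"

print(decision_level(0.42, low_cut=0.15, high_cut=0.35))
```

Retuning, as the authors describe, means re-choosing `low_cut` and `high_cut` on local data until the alert volume and sensitivity are acceptable.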

8.
Eur J Heart Fail ; 22(7): 1174-1182, 2020 07.
Article in English | MEDLINE | ID: mdl-31863532

ABSTRACT

AIMS: Worsening heart failure (HF) is associated with shorter left ventricular systolic ejection time (SET), but there are limited data describing the relationship between SET and clinical outcomes. Thus, the objective was to describe the association between SET and clinical outcomes in an ambulatory HF population irrespective of ejection fraction (EF). METHODS AND RESULTS: We identified ambulatory patients with HF with reduced EF (HFrEF) and HF with preserved EF (HFpEF) who had an outpatient transthoracic echocardiogram performed between August 2008 and July 2010 at a tertiary referral centre. Multivariable logistic regression was used to evaluate the association between SET and 1-year outcomes. A total of 545 HF patients (171 HFrEF, 374 HFpEF) met eligibility criteria. Compared with HFpEF, HFrEF patients were younger [median age 60 years (25th-75th percentiles 50-69) vs. 64 years (25th-75th percentiles 53-74)], with fewer females (30% vs. 56%) and a similar percentage of African Americans (36% vs. 35%). Median (25th-75th percentiles) EF was 30% (25-35%) with HFrEF and 54% (48-58%) with HFpEF. Median SET was shorter (280 ms vs. 315 ms, P < 0.001), median pre-ejection period was longer (114 ms vs. 89 ms, P < 0.001), and median relaxation time was shorter (78.7 ms vs. 93.3 ms, P < 0.001) among patients with HFrEF vs. HFpEF. Death or HF hospitalization occurred in 26.9% (n = 46) of HFrEF and 11.8% (n = 44) of HFpEF patients. After adjustment, longer SET was associated with lower odds of the composite of death or HF hospitalization at 1 year among HFrEF but not HFpEF patients. CONCLUSION: Longer SET is independently associated with improved outcomes among HFrEF patients but not HFpEF patients, supporting a potential role for normalizing SET as a therapeutic strategy in systolic dysfunction.


Subject(s)
Heart Failure , Aged , Angiotensin Receptor Antagonists , Angiotensin-Converting Enzyme Inhibitors , Diabetes Mellitus, Type 2 , Female , Heart Failure/epidemiology , Humans , Male , Middle Aged , Percutaneous Coronary Intervention , Prognosis , Stroke Volume
9.
J Am Med Inform Assoc ; 26(12): 1609-1617, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31553474

ABSTRACT

OBJECTIVE: Electronic health records (EHR) data have become a central data source for clinical research. One concern for using EHR data is that the process through which individuals engage with the health system, and thereby find themselves within EHR data, can be informative. We have termed this process informed presence. In this study we use simulation and real data to assess how informed presence can impact inference. MATERIALS AND METHODS: We first simulated a visit process where a series of biomarkers were observed informatively and uninformatively over time. We further compared inference derived from a randomized control trial (ie, uninformative visits) and EHR data (ie, potentially informative visits). RESULTS: We find that there is bias only when there is a strong association both between the biomarker and the outcome and between the biomarker and the visit process. Moreover, once there are some uninformative visits, this bias is mitigated. In the data example we find that, when the "true" associations are null, there is no observed bias. DISCUSSION: These results suggest that an informative visit process can exaggerate an association but cannot induce one. Furthermore, careful study design can mitigate the potential bias when some noninformative visits are included. CONCLUSIONS: While there are legitimate concerns regarding biases that "messy" EHR data may induce, the conditions for such biases are extreme and can be accounted for.
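The core idea of informed presence can be demonstrated in a few lines of simulation: when the chance of a visit depends on the biomarker itself, the recorded values are a biased sample of the population. A self-contained sketch (not the paper's simulation design):

```python
import math
import random

random.seed(0)
true_values = [random.gauss(0, 1) for _ in range(20000)]

# Uninformative visits: every patient's biomarker gets recorded,
# so the observed mean estimates the population mean (~0).
uninformative_mean = sum(true_values) / len(true_values)

# Informative visits: the probability of a visit (and hence of the
# biomarker being recorded) rises with the biomarker itself, so
# sicker patients are over-represented in the "EHR".
observed = [v for v in true_values
            if random.random() < 1 / (1 + math.exp(-v))]
informative_mean = sum(observed) / len(observed)

print(round(uninformative_mean, 2), round(informative_mean, 2))
```

The informatively observed mean is noticeably shifted upward, illustrating how presence in the data can itself carry information.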


Subject(s)
Bias , Biomarkers , Electronic Health Records , Aged , Biomedical Research , Computer Simulation , Female , Humans , Male , Middle Aged , Models, Biological , Office Visits
10.
J Am Med Inform Assoc ; 26(5): 429-437, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30869798

ABSTRACT

OBJECTIVE: Participants enrolled into randomized controlled trials (RCTs) often do not reflect real-world populations. Previous research on how best to transport RCT results to target populations has focused on weighting RCT data to look like the target data. Simulation work, however, has suggested that an outcome model approach may be preferable. Here, we describe such an approach using source data from the 2 × 2 factorial NAVIGATOR (Nateglinide And Valsartan in Impaired Glucose Tolerance Outcomes Research) trial, which evaluated the impact of valsartan and nateglinide on cardiovascular outcomes and new-onset diabetes in a prediabetic population. MATERIALS AND METHODS: Our target data consisted of people with prediabetes served by the Duke University Health System. We used random survival forests to develop separate outcome models for each of the 4 treatments, estimating the 5-year risk difference for progression to diabetes, and estimated the treatment effect in our local patient population, as well as subpopulations, and compared the results with the traditional weighting approach. RESULTS: Our models suggested that the treatment effect for valsartan in our patient population was the same as in the trial, whereas for nateglinide the treatment effect was stronger than observed in the original trial. Our effect estimates were more efficient than those from the weighting approach, and we effectively estimated subgroup differences. CONCLUSIONS: The described method represents a straightforward approach to efficiently transporting an RCT result to any target population.
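The outcome-model approach can be sketched as standardization (g-computation): fit outcome models in the trial, then average their predictions over the target population's covariate mix. A toy version with one binary covariate and entirely hypothetical numbers (the paper uses random survival forests, not a lookup table):

```python
# Stratum-specific 5-year outcome risks under treatment vs control,
# standing in for what fitted outcome models would predict.
risk = {
    ("low",  "treated"): 0.10, ("low",  "control"): 0.12,
    ("high", "treated"): 0.25, ("high", "control"): 0.35,
}
# The stratum mix differs between the trial and the target EHR population.
trial_mix  = {"low": 0.7, "high": 0.3}
target_mix = {"low": 0.4, "high": 0.6}

def standardized_effect(mix):
    """Apply the outcome model to a population: average the
    stratum-specific risk differences over that population's mix."""
    return sum(w * (risk[(s, "treated")] - risk[(s, "control")])
               for s, w in mix.items())

print(standardized_effect(trial_mix), standardized_effect(target_mix))
```

Because the stratum mix differs, the same fitted outcome model yields a different standardized treatment effect in the target population than in the trial, which is exactly the transport the abstract describes.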


Subject(s)
Antihypertensive Agents/therapeutic use , Hypoglycemic Agents/therapeutic use , Machine Learning , Nateglinide/therapeutic use , Prediabetic State/drug therapy , Valsartan/therapeutic use , Cardiovascular Diseases/prevention & control , Diabetes Mellitus, Type 2 , Disease Progression , Electronic Health Records , Evidence-Based Medicine , Humans , Outcome Assessment, Health Care , Randomized Controlled Trials as Topic , Translational Research, Biomedical
11.
Crit Care Med ; 47(1): 49-55, 2019 01.
Article in English | MEDLINE | ID: mdl-30247239

ABSTRACT

OBJECTIVES: Previous studies have looked at National Early Warning Score performance in predicting in-hospital deterioration and death, but data are lacking with respect to patient outcomes following implementation of National Early Warning Score. We sought to determine the effectiveness of National Early Warning Score implementation on predicting and preventing patient deterioration in a clinical setting. DESIGN: Retrospective cohort study. SETTING: Tertiary care academic facility and a community hospital. PATIENTS: Patients 18 years old or older hospitalized from March 1, 2014, to February 28, 2015 (before National Early Warning Score implementation), and from August 1, 2015, to July 31, 2016 (after implementation). INTERVENTIONS: Implementation of National Early Warning Score within the electronic health record and an associated best practice alert. MEASUREMENTS AND MAIN RESULTS: In this study of 85,322 patients (42,402 pre-implementation and 42,920 post-implementation), the primary outcome of rate of ICU transfer or death did not change after National Early Warning Score implementation, with adjusted hazard ratios of 0.94 (0.84-1.05) and 0.90 (0.77-1.05) at our academic and community hospitals, respectively. In total, 175,357 best practice advisories fired during the study period; the advisory performed better at the community hospital than at the academic hospital, predicting an event within 12 hours 7.4% versus 2.2% of the time, respectively. Retraining National Early Warning Score with newly generated hospital-specific coefficients improved model performance. CONCLUSIONS: At both our academic and community hospitals, National Early Warning Score had poor performance characteristics and was generally ignored by frontline nursing staff. As a result, National Early Warning Score implementation had no appreciable impact on the defined clinical outcomes. Refitting the model using site-specific data improved performance and supports validating predictive models on local data.


Subject(s)
Clinical Alarms , Clinical Deterioration , Patient Acuity , Academic Medical Centers , Adult , Aged , Attitude of Health Personnel , Cohort Studies , Early Diagnosis , Female , Hospital Mortality , Hospitals, Community , Humans , Intensive Care Units , Male , Middle Aged , North Carolina , Nursing Staff, Hospital , Patient Transfer/statistics & numerical data , Retrospective Studies
13.
Am Heart J ; 203: 39-48, 2018 09.
Article in English | MEDLINE | ID: mdl-30015067

ABSTRACT

BACKGROUND: We aimed to determine the association of MR severity and type with all-cause death in a large, real-world, clinical setting. METHODS: We reviewed full echocardiography studies at Duke Echocardiography Laboratory (01/01/1995-12/31/2010), classifying MR based on valve morphology, presence of coronary artery disease, and left ventricular size and function. Survival was compared among patients stratified by MR type and baseline severity. RESULTS: Of 93,007 qualifying patients, 32,137 (34.6%) had ≥mild MR. A total of 8094 (8.7%) had moderate/severe MR, which was primary myxomatous (14.1%), primary non-myxomatous (6.2%), secondary non-ischemic (17.0%), and secondary ischemic (49.4%). At 10 years, patients with primary myxomatous MR or MR due to indeterminate cause had survival rates of >60%; primary non-myxomatous, secondary ischemic, and non-ischemic MR had survival rates <50%. While mild (HR 1.06, 95% CI 1.03-1.09), moderate (HR 1.31, 95% CI 1.27-1.37), and severe (HR 1.55, 95% CI 1.46-1.65) MR were independently associated with all-cause death, the relationship of increasing MR severity with mortality varied across MR types (P ≤ .001 for interaction); the highest risk associated with worsening severity was seen in primary myxomatous MR followed by secondary ischemic MR and primary non-myxomatous MR. CONCLUSIONS: Although MR severity is independently associated with increased all-cause death risk for most forms of MR, the absolute mortality rates associated with worse MR severity are much higher for primary myxomatous, non-myxomatous, and secondary ischemic MR. The findings from this study support carefully defining MR by type and severity.


Subject(s)
Echocardiography, Doppler, Color/methods , Mitral Valve Insufficiency/diagnosis , Mitral Valve/diagnostic imaging , Stroke Volume/physiology , Ventricular Function, Left/physiology , Adult , Aged , Cause of Death/trends , Female , Follow-Up Studies , Humans , Male , Middle Aged , Mitral Valve Insufficiency/physiopathology , Prognosis , Retrospective Studies , Severity of Illness Index , Survival Rate/trends , Time Factors , United States/epidemiology
14.
JAMA Netw Open ; 1(5): e182716, 2018 09 07.
Article in English | MEDLINE | ID: mdl-30646172

ABSTRACT

Importance: Data from electronic health records (EHRs) are increasingly used for risk prediction. However, EHRs do not reliably collect sociodemographic and neighborhood information, which has been shown to be associated with health. The added contribution of neighborhood socioeconomic status (nSES) in predicting health events is unknown and may help inform population-level risk reduction strategies. Objective: To quantify the association of nSES with adverse outcomes and the value of nSES in predicting the risk of adverse outcomes in EHR-based risk models. Design, Setting, and Participants: Cohort study in which data from 90 097 patients 18 years or older in the Duke University Health System and Lincoln Community Health Center EHR from January 1, 2009, to December 31, 2015, with at least 1 health care encounter and residence in Durham County, North Carolina, in the year prior to the index date were linked with census tract data to quantify the association between nSES and the risk of adverse outcomes. Machine learning methods were used to develop risk models and determine how adding nSES to EHR data affects risk prediction. Neighborhood socioeconomic status was defined using the Agency for Healthcare Research and Quality SES index, a weighted measure of multiple indicators of neighborhood deprivation. Main Outcomes and Measures: Outcomes included use of health care services (emergency department and inpatient and outpatient encounters) and hospitalizations due to accidents, asthma, influenza, myocardial infarction, and stroke. 
Results: Among the 90 097 patients in the training set of the study (57 507 women and 32 590 men; mean [SD] age, 47.2 [17.7] years) and the 122 812 patients in the testing set of the study (75 517 women and 47 295 men; mean [SD] age, 46.2 [17.9] years), those living in neighborhoods with lower nSES had a shorter time to use of emergency department services and inpatient encounters, as well as a shorter time to hospitalizations due to accidents, asthma, influenza, myocardial infarction, and stroke. The predictive value of nSES varied by outcome of interest (C statistic ranged from 0.50 to 0.63). When added to EHR variables, nSES did not improve predictive performance for any health outcome. Conclusions and Relevance: Social determinants of health, including nSES, are associated with the health of a patient. However, the results of this study suggest that information on nSES may not contribute much more to risk prediction above and beyond what is already provided by EHR data. Although this result does not mean that integrating social determinants of health into the EHR has no benefit, researchers may be able to use EHR data alone for population risk assessment.


Subject(s)
Electronic Health Records/statistics & numerical data , Health Status Disparities , Residence Characteristics/statistics & numerical data , Social Class , Adult , Aged , Cohort Studies , Female , Humans , Income/statistics & numerical data , Male , Middle Aged , North Carolina/ethnology , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Racial Groups/ethnology , Racial Groups/statistics & numerical data , Social Determinants of Health/ethnology , Social Determinants of Health/statistics & numerical data
15.
J Am Med Inform Assoc ; 25(2): 150-157, 2018 02 01.
Article in English | MEDLINE | ID: mdl-28645207

ABSTRACT

Background: Electronic medical record (EMR) computed algorithms allow investigators to screen thousands of patient records to identify specific disease cases. No computed algorithms have been developed to detect all cases of human immunodeficiency virus (HIV) infection using administrative, laboratory, and clinical documentation data outside of the Veterans Health Administration. We developed novel EMR-based algorithms for HIV detection and validated them in a cohort of subjects in the Duke University Health System (DUHS). Methods: We created 2 novel algorithms to identify HIV-infected subjects. Algorithm 1 used laboratory studies and medications to identify HIV-infected subjects, whereas Algorithm 2 used International Classification of Diseases, Ninth Revision (ICD-9) codes, medications, and laboratory testing. We applied the algorithms to a well-characterized cohort of patients and validated both against the gold standard of physician chart review. We determined sensitivity, specificity, and prevalence of HIV between 2007 and 2011 in patients seen at DUHS. Results: A total of 172 271 patients with complete data were identified; 1063 patients met algorithm criteria for HIV infection. In all, 970 individuals were identified by both algorithms, 78 by Algorithm 1 alone, and 15 by Algorithm 2 alone. The sensitivity and specificity of each algorithm were 78% and 99%, respectively, for Algorithm 1 and 77% and 100% for Algorithm 2. The estimated prevalence of HIV infection at DUHS between 2007 and 2011 was 0.6%. Conclusions: EMR-based phenotypes of HIV infection are capable of detecting cases of HIV-infected adults with good sensitivity and specificity. These algorithms have the potential to be adapted to other EMR systems, allowing for the creation of cohorts of patients across EMR systems.
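Validation against chart review reduces to a confusion matrix; a minimal sketch of the sensitivity and specificity calculation (the counts below are hypothetical, chosen only to echo the reported 78%/99% operating point):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of a phenotyping algorithm
    against a chart-review gold standard:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for an EMR HIV phenotype
sens, spec = sens_spec(tp=780, fn=220, tn=170000, fp=300)
print(f"sensitivity {sens:.0%}, specificity {spec:.1%}")
```

Note that with a rare condition, even a specificity near 100% can produce a meaningful number of false positives in absolute terms, which is why chart-review validation matters.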


Subject(s)
Algorithms , Electronic Health Records , HIV Infections/diagnosis , HIV-1 , Adult , Humans , Phenotype , Sensitivity and Specificity
16.
ESC Heart Fail ; 4(4): 432-439, 2017 11.
Article in English | MEDLINE | ID: mdl-29154416

ABSTRACT

AIMS: While abnormal resting left ventricular global longitudinal strain (LV GLS) has been described in patients with chronic heart failure with preserved ejection fraction (HFpEF), its prognostic significance when measured during an acute heart failure hospitalization remains unclear. We assessed the association between LV GLS and outcomes in patients hospitalized with acute HFpEF. METHODS AND RESULTS: We studied patients discharged alive after hospitalization for acute HFpEF from Duke University Medical Center between 2007 and 2010. Among patients with measurable LV GLS, we performed 2D speckle-tracking analysis, and Cox proportional hazards models assessed the association between continuous LV GLS and outcomes. Baseline characteristics were stratified by normal (≤-16%) or abnormal (>-16%) LV GLS for comparison. Among 463 patients, the median LV GLS was -12.8% (interquartile range, -15.8 to -10.8%) and was abnormal in 352 (76%). Patients in the cohort were generally elderly and female and had hypertension. After multivariable adjustment, worse LV GLS was associated with mortality (HR 1.19 per 1% increase; 95% CI 1.00-1.42; P = 0.046) and with a composite endpoint of mortality or rehospitalization at 30 days (HR 1.08 per 1% increase; 95% CI 0.99-1.18; P = 0.08). There was no association between LV GLS and mortality, or a composite of mortality or rehospitalization, at 1 year. CONCLUSIONS: A high proportion of patients hospitalized with acute HFpEF have abnormal LV GLS, suggesting unrecognized myocardial systolic dysfunction. Furthermore, worse LV GLS is associated with worse clinical outcomes at 30 days but not at 1 year.


Subject(s)
Heart Failure/physiopathology , Heart Ventricles/physiopathology , Hospitalization , Myocardial Contraction/physiology , Stroke Volume/physiology , Ventricular Function, Left/physiology , Acute Disease , Aged , Aged, 80 and over , Echocardiography , Female , Follow-Up Studies , Heart Failure/diagnosis , Heart Ventricles/diagnostic imaging , Humans , Male , Middle Aged , Prognosis , Retrospective Studies , Risk Factors
17.
J Am Heart Assoc ; 6(10)2017 Oct 11.
Article in English | MEDLINE | ID: mdl-29021274

ABSTRACT

BACKGROUND: Chronic kidney disease (CKD) is an adverse prognostic marker in patients undergoing valve intervention; however, the prevalence and related outcomes of valvular heart disease in CKD patients are unknown. METHODS AND RESULTS: Included patients underwent echocardiography (1999-2013), had serum creatinine values within 6 months before index echocardiogram, and had no history of valve surgery. CKD was defined as a diagnosis based on the International Classification of Diseases, Ninth Revision or an estimated glomerular filtration rate <60 mL/min per 1.73 m2. Qualitative assessment determined left heart stenotic and regurgitant valve lesions. Cox models assessed CKD and aortic stenosis (AS) interaction for subsequent mortality; analyses were repeated for mitral regurgitation (MR). Among 78 059 patients, 23 727 (30%) had CKD; of these, 1326 were on hemodialysis. CKD patients were older, more often female, and had a higher prevalence of hypertension, hyperlipidemia, diabetes, history of coronary artery bypass grafting/percutaneous coronary intervention, atrial fibrillation, heart failure, ≥mild AS, and ≥mild MR (all P<0.001). Five-year survival estimates of mild, moderate, and severe AS for CKD patients were 40%, 34%, and 42%, respectively, and 69%, 54%, and 67% for non-CKD patients. Five-year survival estimates of mild, moderate, and severe MR for CKD patients were 51%, 38%, and 37%, respectively, and 75%, 66%, and 65% for non-CKD patients. Significant interaction occurred among CKD, AS/MR severity, and mortality in adjusted analyses; the CKD hazard ratio increased from 1.8 (non-AS patients) to 2.0 (severe AS) and from 1.7 (non-MR patients) to 2.6 (severe MR). CONCLUSIONS: Prevalence of at least mild AS and MR is substantially higher, and is associated with significantly lower survival, among patients with versus without CKD. There is significant interaction among CKD, AS/MR severity, and mortality, with increasingly worse outcomes for CKD patients with increasing AS/MR severity.
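The 5-year survival estimates in this abstract are the kind of quantity produced by the Kaplan-Meier method. Below is a minimal, illustrative Kaplan-Meier sketch, not the study's code; times and event indicators are invented, and ties are not specially handled.

```python
# Illustrative sketch: a minimal Kaplan-Meier estimator of survival at a
# given horizon. Times are in years; event=1 means death, event=0 means
# censored (lost to follow-up or still alive at last contact).

def kaplan_meier(times, events, horizon):
    """Return the Kaplan-Meier survival estimate at `horizon` (distinct times)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    for t, event in data:
        if t > horizon:
            break
        if event:  # a death reduces the survival curve
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1  # deaths and censorings both leave the risk set
    return survival

times = [1.0, 2.5, 3.0, 4.0, 6.0, 7.0]
events = [1, 0, 1, 1, 0, 1]
s5 = kaplan_meier(times, events, 5.0)  # survival estimate at 5 years
```

Here the 5-year estimate is (5/6)(3/4)(2/3) ≈ 0.42; a real analysis would use a vetted implementation (e.g., a survival-analysis library) rather than this sketch.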


Subject(s)
Aortic Valve Stenosis/epidemiology , Mitral Valve Insufficiency/epidemiology , Renal Insufficiency, Chronic/epidemiology , Adult , Age Factors , Aged , Aged, 80 and over , Aortic Valve/diagnostic imaging , Aortic Valve/physiopathology , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/mortality , Aortic Valve Stenosis/physiopathology , Comorbidity , Databases, Factual , Echocardiography , Female , Glomerular Filtration Rate , Humans , Kaplan-Meier Estimate , Kidney/physiopathology , Male , Middle Aged , Mitral Valve/diagnostic imaging , Mitral Valve/physiopathology , Mitral Valve Insufficiency/diagnostic imaging , Mitral Valve Insufficiency/mortality , Mitral Valve Insufficiency/physiopathology , North Carolina/epidemiology , Prevalence , Prognosis , Proportional Hazards Models , Renal Insufficiency, Chronic/diagnosis , Renal Insufficiency, Chronic/mortality , Renal Insufficiency, Chronic/physiopathology , Retrospective Studies , Risk Factors , Severity of Illness Index , Sex Factors , Time Factors
18.
EGEMS (Wash DC) ; 5(1): 22, 2017 Dec 06.
Article in English | MEDLINE | ID: mdl-29930963

ABSTRACT

Electronic health record (EHR) data are becoming a primary resource for clinical research. Compared to traditional research data, such as those from clinical trials and epidemiologic cohorts, EHR data have a number of appealing characteristics. However, because they do not have mechanisms in place to ensure that the appropriate data are collected, they also pose a number of analytic challenges. In this paper, we illustrate how a patient's interactions with a health system influence which data are recorded in the EHR. These interactions are typically informative, potentially resulting in bias. We term the overall set of induced biases informed presence. To illustrate this, we use examples from EHR-based analyses. Specifically, we show that: 1) Where a patient receives services within a health facility can induce selection bias; 2) Which health system a patient chooses for an encounter can result in information bias; and 3) Referral encounters can create an admixture bias. While addressing these biases is often straightforward, it is important to understand how they are induced in any EHR-based analysis.
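The core of the informed-presence idea can be made concrete with a back-of-the-envelope calculation: if sicker people are more likely to have an encounter (and hence an EHR record), prevalence measured inside the EHR overstates population prevalence. The numbers below are hypothetical, chosen only to illustrate the mechanism.

```python
# Illustrative sketch of informed presence: encounter probability depends
# on illness, so the EHR is a non-random sample of the population.

pop = 100_000
true_prev = 0.05            # 5% of the population truly has the condition
p_encounter_ill = 0.90      # ill people usually generate an encounter/record
p_encounter_healthy = 0.30  # healthy people do so far less often

ill = pop * true_prev
healthy = pop - ill
ehr_ill = ill * p_encounter_ill          # ill patients appearing in the EHR
ehr_healthy = healthy * p_encounter_healthy
ehr_prev = ehr_ill / (ehr_ill + ehr_healthy)  # prevalence as seen in the EHR
```

With these assumptions the EHR-observed prevalence is about 13.6%, nearly triple the true 5%: inclusion in the EHR is informative about health status.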

19.
J Am Med Inform Assoc ; 24(e1): e121-e128, 2017 Apr 01.
Article in English | MEDLINE | ID: mdl-27616701

ABSTRACT

OBJECTIVE: We assessed the sensitivity and specificity of 8 electronic health record (EHR)-based phenotypes for diabetes mellitus against gold-standard American Diabetes Association (ADA) diagnostic criteria via chart review by clinical experts. MATERIALS AND METHODS: We identified EHR-based diabetes phenotype definitions that were developed for various purposes by a variety of users, including academic medical centers, Medicare, the New York City Health Department, and pharmacy benefit managers. We applied these definitions to a sample of 173 503 patients with records in the Duke Health System Enterprise Data Warehouse and at least 1 visit over a 5-year period (2007-2011). Of these patients, 22 679 (13%) met the criteria of 1 or more of the selected diabetes phenotype definitions. A statistically balanced sample of these patients was selected for chart review by clinical experts to determine the presence or absence of type 2 diabetes. RESULTS: The sensitivity (62-94%) and specificity (95-99%) of EHR-based type 2 diabetes phenotypes (compared with the gold-standard ADA criteria via chart review) varied depending on the component criteria and the timing of observations and measurements. DISCUSSION AND CONCLUSIONS: Researchers using EHR-based phenotype definitions should clearly specify the characteristics that make up the definition, the variations of ADA criteria used, and how different phenotype definitions and components affect the patient populations retrieved and the intended application. Careful attention to phenotype definitions is critical if the promise of leveraging EHR data to improve individual and population health is to be fulfilled.
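A phenotype definition of the kind compared in this abstract is, operationally, a rule over a patient's coded data. The sketch below shows how two hypothetical definitions with different component criteria can retrieve different patient populations from the same records; the field names, codes, and thresholds are invented for illustration and do not reproduce any of the 8 studied definitions.

```python
# Illustrative sketch: two hypothetical EHR diabetes phenotype definitions
# applied to the same patient records. Looser vs. stricter component
# criteria change which patients are retrieved.

def phenotype_a(rec):
    # looser: diagnosis code OR lab evidence OR a diabetes medication
    return ("250" in rec.get("icd9", [])
            or rec.get("hba1c", 0) >= 6.5
            or "metformin" in rec.get("meds", []))

def phenotype_b(rec):
    # stricter: requires a code AND confirmatory lab or medication
    return ("250" in rec.get("icd9", [])
            and (rec.get("hba1c", 0) >= 6.5
                 or "metformin" in rec.get("meds", [])))

patients = [
    {"icd9": ["250"], "hba1c": 7.1, "meds": []},   # code + lab
    {"icd9": [], "hba1c": 6.8, "meds": []},        # lab only
    {"icd9": ["250"], "meds": []},                 # code only, no lab/meds
]
a_hits = sum(phenotype_a(p) for p in patients)
b_hits = sum(phenotype_b(p) for p in patients)
```

On these three records the looser definition retrieves all three patients while the stricter one retrieves only the first, which is exactly why the abstract urges researchers to specify a definition's components precisely.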


Subject(s)
Diabetes Mellitus/diagnosis , Electronic Health Records , Algorithms , Diabetes Mellitus/blood , Diabetes Mellitus, Type 2/blood , Diabetes Mellitus, Type 2/diagnosis , Glycated Hemoglobin/analysis , Humans , Phenotype , Sensitivity and Specificity
20.
Am J Epidemiol ; 184(11): 847-855, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-27852603

ABSTRACT

Electronic health records (EHRs) are an increasingly utilized resource for clinical research. While their size allows for many analytical opportunities, as with most observational data there is also the potential for bias. One of the key sources of bias in EHRs is what we term informed presence: the notion that inclusion in an EHR is not random but rather indicates that the subject is ill, making people in EHRs systematically different from those not in EHRs. In this article, we use simulated and empirical data to illustrate the conditions under which such bias can arise and how conditioning on the number of health-care encounters can be one way to remove it. In doing so, we also show when such an approach can impart M bias, or bias from conditioning on a collider. Finally, we explore the conditions under which the number of medical encounters can serve as a proxy for general health. We apply these methods to an EHR data set from a university medical center covering the years 2007-2013.
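The collider mechanism discussed in this abstract can be demonstrated with a small simulation: two conditions that are independent in the population can appear associated once analysis is restricted to people who appear in the EHR, because either condition raises the chance of having an encounter. This is an illustrative sketch with invented parameters, not the authors' simulation.

```python
# Illustrative sketch: restricting to patients present in the EHR (a
# collider of two independent conditions) induces a spurious association.
import random

random.seed(0)
n = 200_000
in_ehr = with_d = with_e = both = 0
for _ in range(n):
    d = random.random() < 0.1  # condition D, 10% prevalence
    e = random.random() < 0.1  # condition E, independent of D
    # having either condition makes an encounter (hence an EHR record) likelier
    p_record = 0.9 if (d or e) else 0.2
    if random.random() < p_record:
        in_ehr += 1
        with_d += d
        with_e += e
        both += d and e

# Within the EHR, D and E are negatively associated despite independence:
p_d = with_d / in_ehr
p_e = with_e / in_ehr
p_both = both / in_ehr
```

If D and E were still independent within the EHR, `p_both` would be close to `p_d * p_e`; instead it falls well below that product, which is the M-bias/collider phenomenon the article analyzes.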


Subject(s)
Biomedical Research/methods , Biomedical Research/standards , Electronic Health Records/statistics & numerical data , Epidemiologic Research Design , Selection Bias , Computer Simulation , Confounding Factors, Epidemiologic , Depression/epidemiology , Diabetes Mellitus/epidemiology , Health Services/statistics & numerical data , Health Status , Humans , Reproducibility of Results