Results 1 - 20 of 29
1.
J Clin Periodontol ; 51(2): 110-117, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37846605

ABSTRACT

AIM: To illustrate the use of joint models (JMs) for longitudinal and survival data in estimating risk factors for tooth loss as a function of time-varying endogenous periodontal biomarkers (probing pocket depth [PPD], alveolar bone loss [ABL] and mobility [MOB]). MATERIALS AND METHODS: We used data from the Veterans Affairs Dental Longitudinal Study, a longitudinal cohort study with over 30 years of follow-up. We compared the results from the JM with those from the extended Cox regression model, which assumes that the time-varying covariates are exogenous. RESULTS: Our results showed that PPD is an important risk factor for tooth loss, but each model produced different estimates of the hazard. In the tooth-level analysis, based on the JM, the hazard of tooth loss increased by 4.57 (95% confidence interval [CI]: 2.13-8.50) times for a 1-mm increase in maximum PPD, whereas based on the extended Cox model, the hazard of tooth loss increased by 1.60 (95% CI: 1.37-1.87) times. CONCLUSIONS: JMs can incorporate time-varying periodontal biomarkers to estimate the hazard of tooth loss. As JMs are not commonly used in oral health research, we provide comprehensive R code and an example dataset to implement the method.
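Both models above report per-millimetre hazard ratios from a log-linear (proportional-hazards) specification, so the reported HRs extrapolate multiplicatively to other PPD increments. A minimal sketch of that interpretation (the helper function and the 2-mm increment are illustrative, not from the study):

```python
# Under a log-linear hazard, hazard(t) = h0(t) * exp(beta * PPD),
# so a delta-mm increase in PPD multiplies the hazard by exp(beta) ** delta.
HR_PER_MM_JM = 4.57    # joint-model estimate reported in the abstract
HR_PER_MM_COX = 1.60   # extended Cox estimate reported in the abstract

def hr_for_increase(hr_per_unit: float, delta: float) -> float:
    """Hazard ratio implied for a delta-unit covariate increase."""
    return hr_per_unit ** delta

jm_2mm = hr_for_increase(HR_PER_MM_JM, 2.0)    # ~20.9 per 2-mm increase
cox_2mm = hr_for_increase(HR_PER_MM_COX, 2.0)  # ~2.56 per 2-mm increase
```

The gap between the two models compounds with the size of the increment, which is why the endogeneity of the biomarker matters for risk communication.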


Subject(s)
Alveolar Bone Loss , Tooth Loss , Humans , Longitudinal Studies , Tooth Loss/etiology , Proportional Hazards Models , Periodontal Pocket/complications , Risk Factors , Biomarkers , Alveolar Bone Loss/complications , Follow-Up Studies
2.
Ann Surg ; 279(3): 450-455, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-37477019

ABSTRACT

OBJECTIVE: To describe the incidence and natural progression of psychological distress after major surgery. BACKGROUND: The recovery process after surgery imposes physical and mental burdens that put patients at risk of psychological distress. Understanding the natural course of psychological distress after surgery is critical to supporting the timely and tailored management of high-risk individuals. METHODS: We conducted a secondary analysis of the "Measurement of Exercise Tolerance before Surgery" multicentre cohort study (Canada, Australia, New Zealand, and the UK), which recruited adult participants (≥40 years) undergoing elective inpatient noncardiac surgery and followed them for 1 year. The primary outcome was the severity of psychological distress measured using the anxiety-depression item of the EQ-5D-3L. We used cumulative link mixed models to characterize the time trajectory of psychological distress among relevant patient subgroups. We also explored potential predictors of severe and/or worsened psychological distress at 1 year using multivariable logistic regression models. RESULTS: Of 1546 participants, moderate-to-severe psychological distress was reported by 32.6% of participants before surgery, 27.3% at 30 days after surgery, and 26.2% at 1 year after surgery. Psychological distress appeared to improve over time among females [odds ratio (OR): 0.80, 95% CI: 0.65-0.95] and patients undergoing orthopedic procedures (OR: 0.73, 95% CI: 0.55-0.91), but not among males (OR: 0.87, 95% CI: 0.87-1.07) or patients undergoing nonorthopedic procedures (OR: 0.95, 95% CI: 0.87-1.04). Among average middle-aged adults, there were no time-related changes (OR: 0.94, 95% CI: 0.75-1.13), whereas the young-old (OR: 0.89, 95% CI: 0.79-0.99) and middle-old (OR: 0.87, 95% CI: 0.73-1.01) had small improvements.
Predictors of severe and/or worsened psychological distress at 1 year were younger age, poor self-reported functional capacity, smoking history, and undergoing open surgery. CONCLUSIONS: One-third of adults experience moderate-to-severe psychological distress before major elective noncardiac surgery. This distress tends to persist or worsen over time among select patient subgroups.
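The cumulative link (proportional-odds) models used above map a linear predictor through ordered cutpoints into probabilities for each ordinal category. A minimal illustration with three distress levels; the cutpoint values here are hypothetical, not estimates from the study:

```python
import math

def ordinal_category_probs(cutpoints, eta):
    """Cumulative-logit model: P(Y <= k) = logistic(c_k - eta);
    per-category probabilities are successive differences of the cumulative ones."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical cutpoints for none / moderate / severe distress:
probs = ordinal_category_probs([-0.5, 1.5], eta=0.0)
# Lowering eta (e.g. an OR < 1 for time) shifts probability mass toward
# the lower (less distressed) categories.
```

This is why a single OR per subgroup summarizes movement across the whole ordinal scale rather than a change in one category.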


Subject(s)
Inpatients , Psychological Distress , Adult , Male , Middle Aged , Female , Humans , Cohort Studies , Prospective Studies , Exercise Tolerance , Stress, Psychological/epidemiology , Stress, Psychological/etiology , Stress, Psychological/psychology
3.
BMC Med Res Methodol ; 21(1): 145, 2021 Jul 11.
Article in English | MEDLINE | ID: mdl-34247586

ABSTRACT

BACKGROUND: A large multi-center survey was conducted to understand patients' perspectives on biobank study participation with particular focus on racial and ethnic minorities. In order to enrich the study sample with racial and ethnic minorities, disproportionate stratified sampling was implemented with strata defined by electronic health records (EHR) that are known to be inaccurate. We investigate the effect of sampling strata misclassification in complex survey design. METHODS: Under non-differential and differential misclassification in the sampling strata, we compare the validity and precision of three simple and common analysis approaches for settings in which the primary exposure is used to define the sampling strata. We also compare the precision gains/losses observed from using a disproportionate stratified sampling scheme compared to using a simple random sample under varying degrees of strata misclassification. RESULTS: Disproportionate stratified sampling can result in more efficient parameter estimates of the rare subgroups (race/ethnic minorities) in the sampling strata compared to simple random sampling. When sampling strata misclassification is non-differential with respect to the outcome, a design-agnostic analysis was preferred over model-based and design-based analyses. All methods yielded unbiased parameter estimates but standard error estimates were lowest from the design-agnostic analysis. However, when misclassification is differential, only the design-based method produced valid parameter estimates of the variables included in the sampling strata. CONCLUSIONS: In complex survey design, when the interest is in making inference on rare subgroups, we recommend implementing disproportionate stratified sampling over simple random sampling even if the sampling strata are misclassified. If the misclassification is non-differential, we recommend a design-agnostic analysis. 
However, if the misclassification is differential, we recommend using design-based analyses.
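A design-based analysis of a disproportionate stratified sample weights each unit by the inverse of its stratum inclusion probability. A toy sketch of that weighting (the inclusion probabilities and outcome values are made up for illustration):

```python
def design_weighted_mean(values, incl_probs):
    """Inverse-probability (Horvitz-Thompson style) weighted mean for a
    disproportionate stratified sample: weight = 1 / inclusion probability."""
    weights = [1.0 / p for p in incl_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical: a minority stratum oversampled at p = 0.5, majority at p = 0.1.
# The unweighted sample mean (0.4) would overstate the oversampled stratum.
sample_vals  = [1, 1, 0, 0, 0]
sample_probs = [0.5, 0.5, 0.1, 0.1, 0.1]
est = design_weighted_mean(sample_vals, sample_probs)  # ~0.118
```

When the strata are misclassified, these design weights are attached to the wrong units, which is the mechanism behind the bias the paper examines.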


Subject(s)
Ethnicity , Minority Groups , Electronic Health Records , Humans , Research Design , Surveys and Questionnaires
4.
JAMA Netw Open ; 3(3): e201965, 2020 Mar 02.
Article in English | MEDLINE | ID: mdl-32202640
5.
Biometrics ; 75(3): 938-949, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30859544

ABSTRACT

The issue of informative cluster size (ICS) often arises in the analysis of dental data. ICS describes a situation where the outcome of interest is related to cluster size. Much of the work on modeling marginal inference in longitudinal studies with potential ICS has focused on continuous outcomes. However, periodontal disease outcomes, including clinical attachment loss, are often assessed using ordinal scoring systems. In addition, participants may lose teeth over the course of the study due to advancing disease status. Here we develop longitudinal cluster-weighted generalized estimating equations (CWGEE) to model the association of ordinal clustered longitudinal outcomes with participant-level health-related covariates, including metabolic syndrome and smoking status, and potentially decreasing cluster size due to tooth loss, by fitting a proportional odds logistic regression model. The within-teeth correlation coefficient over time is estimated using the two-stage quasi-least squares method. The motivation for our work stems from the Department of Veterans Affairs Dental Longitudinal Study, in which participants regularly received general and oral health examinations. In an extensive simulation study, we compare results obtained from CWGEE with various working correlation structures to those obtained from conventional GEE, which does not account for ICS. Our proposed method yields results with very low bias and excellent coverage probability, in contrast to a conventional generalized estimating equations approach.
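The core of cluster weighting can be seen in a toy mean: weighting each tooth by the inverse of its mouth's cluster size makes every participant count equally, which matters when sicker mouths retain fewer teeth. This is only a schematic of the weighting idea, not the CWGEE estimating equations themselves:

```python
def pooled_mean(clusters):
    """Naive pooled mean: every observation weighted equally,
    so large clusters dominate."""
    obs = [x for c in clusters for x in c]
    return sum(obs) / len(obs)

def cluster_weighted_mean(clusters):
    """Inverse-cluster-size weighting: each observation gets weight
    1/n_i, so every cluster contributes equally."""
    return sum(sum(c) / len(c) for c in clusters) / len(clusters)

# Toy data (outcome 1 = diseased tooth): a healthy mouth keeps 4 teeth,
# a diseased mouth keeps only 2 - cluster size is informative.
mouths = [[0, 0, 0, 0], [1, 1]]
naive = pooled_mean(mouths)               # 2/6, pulled toward the big cluster
weighted = cluster_weighted_mean(mouths)  # 0.5, equal weight per mouth
```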


Subject(s)
Cluster Analysis , Longitudinal Studies , Models, Statistical , Bias , Data Interpretation, Statistical , Humans , Logistic Models , Periodontal Diseases
6.
Biom J ; 60(3): 639-656, 2018 May.
Article in English | MEDLINE | ID: mdl-29349801

ABSTRACT

Large-scale agreement studies are becoming increasingly common in medical settings to gain better insight into discrepancies often observed between experts' classifications. Ordered categorical scales are routinely used to classify subjects' disease and health conditions. Summary measures such as Cohen's weighted kappa are popular approaches for reporting levels of association for pairs of raters' ordinal classifications. However, in large-scale studies with many raters, assessing levels of association can be challenging due to dependencies between many raters each grading the same sample of subjects' results and the ordinal nature of the ratings. Further complexities arise when the focus of a study is to examine the impact of rater and subject characteristics on levels of association. In this paper, we describe a flexible approach based upon the class of generalized linear mixed models to assess the influence of rater and subject factors on association between many raters' ordinal classifications. We propose novel model-based measures for large-scale studies to provide simple summaries of association similar to Cohen's weighted kappa while avoiding the prevalence and marginal distribution issues that Cohen's weighted kappa is susceptible to. The proposed summary measures can be used to compare association between subgroups of subjects or raters. We demonstrate the use of hypothesis tests to formally determine if rater and subject factors have a significant influence on association, and describe approaches for evaluating the goodness-of-fit of the proposed model. The performance of the proposed approach is explored through extensive simulation studies and is applied to a recent large-scale breast cancer screening study.
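For reference, the pairwise Cohen's weighted kappa that the paper generalizes can be computed directly from a two-rater confusion matrix. A minimal sketch with linear weights (the GLMM-based measures proposed above address this statistic's sensitivity to prevalence and marginal distributions):

```python
def linear_weighted_kappa(m):
    """Cohen's weighted kappa with linear weights from a k x k confusion
    matrix m of two raters' counts (rows = rater A, columns = rater B)."""
    k = len(m)
    n = sum(sum(row) for row in m)
    row_tot = [sum(row) for row in m]
    col_tot = [sum(m[i][j] for i in range(k)) for j in range(k)]
    # Linear agreement weights: 1 on the diagonal, shrinking with distance.
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    p_obs = sum(w[i][j] * m[i][j] for i in range(k) for j in range(k)) / n
    p_exp = sum(w[i][j] * row_tot[i] * col_tot[j]
                for i in range(k) for j in range(k)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters classifying 39 subjects on a 3-point ordinal scale (toy counts):
kappa = linear_weighted_kappa([[10, 2, 0], [1, 12, 3], [0, 2, 9]])
```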


Subject(s)
Biometry/methods , Breast Neoplasms/diagnostic imaging , Humans , Mammography , Mass Screening , Models, Statistical , Observer Variation
7.
Ann Epidemiol ; 27(10): 677-685.e4, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29029991

ABSTRACT

PURPOSE: Interpretation of screening tests such as mammograms usually requires a radiologist's subjective visual assessment of images, often resulting in substantial discrepancies between radiologists' classifications of subjects' test results. In clinical screening studies to assess the strength of agreement between experts, multiple raters are often recruited to assess subjects' test results using an ordinal classification scale. However, using traditional measures of agreement in some studies is challenging because of the presence of many raters, the use of an ordinal classification scale, and unbalanced data. METHODS: We assess and compare the performances of existing measures of agreement and association, as well as a newly developed model-based measure of agreement, in three large-scale clinical screening studies involving many raters' ordinal classifications. We also conduct a simulation study to demonstrate the key properties of the summary measures. RESULTS: The assessment of agreement and association varied according to the choice of summary measure. Some measures were influenced by the underlying prevalence of disease and raters' marginal distributions and/or were limited in use to balanced data sets where every rater classifies every subject. Our simulation study indicated that popular measures of agreement and association are sensitive to the underlying disease prevalence. CONCLUSIONS: Model-based measures provide a flexible approach for calculating agreement and association and are robust to missing and unbalanced data as well as the underlying disease prevalence.


Subject(s)
Breast Neoplasms/diagnosis , Mammography , Mass Screening/statistics & numerical data , Observer Variation , Breast Neoplasms/classification , Computer Graphics , Data Interpretation, Statistical , Female , Humans , Mammography/classification , Mammography/statistics & numerical data , Mass Screening/standards , Reproducibility of Results
8.
Stat Med ; 36(20): 3181-3199, 2017 Sep 10.
Article in English | MEDLINE | ID: mdl-28612356

ABSTRACT

Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening tests results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.


Subject(s)
Mammography/statistics & numerical data , Observer Variation , Biostatistics , Breast Neoplasms/diagnostic imaging , Computer Simulation , Female , Humans , Linear Models , Models, Statistical , Radiologists
9.
Am J Kidney Dis ; 69(6): 771-779, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28063734

ABSTRACT

BACKGROUND: Controversy exists about any differences in longer-term safety across different intravenous iron formulations routinely used in hemodialysis (HD) patients. We exploited a natural experiment to compare outcomes of patients initiating HD therapy in facilities that predominantly (in ≥90% of their patients) used iron sucrose versus sodium ferric gluconate complex. STUDY DESIGN: Retrospective cohort study of incident HD patients. SETTING & PARTICIPANTS: Using the US Renal Data System, we hard-matched HD facilities predominantly using ferric gluconate with similar ones using iron sucrose on geographic region and center characteristics. Subsequently, incident HD patients were assigned their facility's iron formulation exposure. INTERVENTION: Facility-level use of iron sucrose versus ferric gluconate. OUTCOMES: Patients were followed up for mortality from any, cardiovascular, or infectious causes. Medicare-insured patients were followed up for infectious and cardiovascular (stroke or myocardial infarction) hospitalizations and for composite outcomes with the corresponding cause-specific deaths. MEASUREMENTS: HRs. RESULTS: We matched 2,015 iron sucrose facilities with 2,015 ferric gluconate facilities, in which 51,603 patients (iron sucrose, 24,911; ferric gluconate, 26,692) subsequently initiated HD therapy. All recorded patient characteristics were balanced between groups. Over 49,989 person-years, 10,381 deaths (3,908 cardiovascular and 1,209 infectious) occurred. Adjusted all-cause (HR, 0.98; 95% CI, 0.93-1.03), cardiovascular (HR, 0.96; 95% CI, 0.89-1.03), and infectious mortality (HR, 0.98; 95% CI, 0.86-1.13) did not differ between iron sucrose and ferric gluconate facilities. Among Medicare beneficiaries, no differences between ferric gluconate and iron sucrose facilities were observed in fatal or nonfatal cardiovascular events (HR, 1.01; 95% CI, 0.93-1.09).
The composite infectious end point occurred less frequently in iron sucrose versus ferric gluconate facilities (HR, 0.92; 95% CI, 0.88-0.96). LIMITATIONS: Unobserved selection bias from nonrandom treatment assignment. CONCLUSIONS: Patients initiating HD therapy in facilities almost exclusively using iron sucrose versus ferric gluconate had similar longer-term outcomes. However, there was a small decrease in infectious hospitalizations and deaths in patients dialyzing in facilities predominantly using iron sucrose. This difference may be due to residual confounding, random chance, or a causal effect.


Subject(s)
Anemia/drug therapy , Ferric Compounds/therapeutic use , Glucaric Acid/therapeutic use , Hematinics/therapeutic use , Kidney Failure, Chronic/therapy , Renal Dialysis , Administration, Intravenous , Aged , Anemia/complications , Cardiovascular Diseases/mortality , Cause of Death , Female , Ferric Oxide, Saccharated , Humans , Infections/mortality , Kidney Failure, Chronic/complications , Male , Middle Aged , Mortality , Proportional Hazards Models , Retrospective Studies
10.
J Mod Appl Stat Methods ; 15(1): 160-192, 2016 May.
Article in English | MEDLINE | ID: mdl-30766452

ABSTRACT

Little research has been devoted to multiple imputation (MI) of derived variables. This study investigates various MI approaches for the outcome, rate of change, when the analysis model is a two-stage linear regression. Simulations showed that which approaches were competitive depended on the missing data mechanism and the presence of auxiliary terms.
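Whatever MI approach is used, estimates of the derived outcome are combined across imputed datasets with Rubin's rules. A minimal sketch (the slope estimates and variances below are hypothetical):

```python
def rubin_pool(estimates, variances):
    """Rubin's rules: pool point estimates and variances across m imputations.
    Total variance = mean within-imputation variance
                   + (1 + 1/m) * between-imputation variance."""
    m = len(estimates)
    q_bar = sum(estimates) / m
    u_bar = sum(variances) / m
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    return q_bar, u_bar + (1 + 1 / m) * b

# Hypothetical rate-of-change estimates from m = 3 imputed datasets:
est, total_var = rubin_pool([0.50, 0.55, 0.45], [0.010, 0.012, 0.011])
```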

11.
Breast Cancer Res ; 17: 108, 2015 Aug 13.
Article in English | MEDLINE | ID: mdl-26265211

ABSTRACT

INTRODUCTION: Screening mammography has contributed to a significant increase in the diagnosis of ductal carcinoma in situ (DCIS), raising concerns about overdiagnosis and overtreatment. Building on prior observations from lineage evolution analysis, we examined whether measuring genomic features of DCIS would predict association with invasive breast carcinoma (IBC). The long-term goal is to enhance standard clinicopathologic measures of low- versus high-risk DCIS and to enable risk-appropriate treatment. METHODS: We studied three common chromosomal copy number alterations (CNA) in IBC and designed a fluorescence in situ hybridization-based assay to measure copy number at these loci in DCIS samples. Clinicopathologic data were extracted from the electronic medical records of Stanford Cancer Institute and linked to demographic data from the population-based California Cancer Registry; results were integrated with data from tissue microarrays of specimens containing DCIS that did not develop IBC versus DCIS with concurrent IBC. Multivariable logistic regression analysis was performed to describe associations of CNAs with these two groups of DCIS. RESULTS: We examined 271 patients with DCIS (120 that did not develop IBC and 151 with concurrent IBC) for the presence of 1q, 8q24 and 11q13 copy number gains. Compared to DCIS-only patients, patients with concurrent IBC had higher frequencies of CNAs in their DCIS samples. On multivariable analysis with conventional clinicopathologic features, the copy number gains were significantly associated with concurrent IBC. The presence of two of the three copy number gains in DCIS was associated with a risk of IBC that was 9.07 times that of no copy number gains, and the presence of gains at all three genomic loci in DCIS was associated with a more than 17-fold risk (P = 0.0013). CONCLUSIONS: CNAs have the potential to improve the identification of high-risk DCIS, defined by presence of concurrent IBC. 
Expanding and validating this approach in both additional cross-sectional and longitudinal cohorts may enable improved risk stratification and risk-appropriate treatment in DCIS.


Subject(s)
Breast Neoplasms/genetics , Breast Neoplasms/pathology , Carcinoma, Intraductal, Noninfiltrating/genetics , Carcinoma, Intraductal, Noninfiltrating/pathology , Chromosome Aberrations , DNA Copy Number Variations , Adult , Aged , Aged, 80 and over , Biomarkers, Tumor , Female , Genetic Predisposition to Disease , Humans , In Situ Hybridization, Fluorescence , Middle Aged , Neoplasm Grading , Neoplasm Invasiveness , Neoplasm Staging , Young Adult
12.
Nephrol Dial Transplant ; 30(12): 2068-75, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26311216

ABSTRACT

BACKGROUND: Ferumoxytol was first approved for clinical use in 2009 solely based on data from trial comparisons with oral iron on biochemical anemia efficacy end points. We aimed to compare the rates of important patient outcomes (infection, cardiovascular events and death) between facilities predominantly using ferumoxytol versus iron sucrose (IS) or ferric gluconate (FG) in patients with end-stage renal disease (ESRD) initiating hemodialysis (HD). METHODS: Using the United States Renal Data System, we identified all HD facilities that switched (almost) all patients from IS/FG to ferumoxytol (July 2009-December 2011). Each switching facility was matched with three facilities that continued IS/FG use. All incident ESRD patients subsequently initiating HD in these centers were studied and assigned their facility exposure. They were followed for all-cause mortality, cardiovascular hospitalization/death or infectious hospitalization/death. Follow-up ended at kidney transplantation, switch to peritoneal dialysis, transfer to another facility, facility switch to another iron formulation and end of database (31 December 2011). Cox proportional hazards regression was then used to estimate adjusted hazard ratios [HR (95% confidence intervals)]. RESULTS: In July 2009-December 2011, 278 HD centers switched to ferumoxytol; 265 units (95.3%) were matched with 3 units each that continued to use IS/FG. Subsequently, 14 206 patients initiated HD, 3752 (26.4%) in ferumoxytol and 10 454 (73.6%) in IS/FG centers; their characteristics were very similar. During 6433 person-years, 1929 all-cause, 726 cardiovascular and 191 infectious deaths occurred. Patients in ferumoxytol (versus IS/FG) facilities experienced similar all-cause [0.95 (0.85-1.07)], cardiovascular [0.99 (0.83-1.19)] and infectious mortality [0.88 (0.61-1.25)]. 
Among 5513 Medicare (Parts A + B) beneficiaries, cardiovascular events [myocardial infarction, stroke and cardiovascular death; 1.05 (0.79-1.39)] and infectious events [hospitalization/death; 0.96 (0.85-1.08)] did not differ between the iron exposure groups. CONCLUSIONS: In incident HD patients, ferumoxytol showed similar short- to mid-term safety profiles with regard to cardiovascular, infectious and mortality outcomes compared with the more commonly used intravenous iron formulations IS and FG.


Subject(s)
Anemia/drug therapy , Ferric Compounds/administration & dosage , Ferrosoferric Oxide/administration & dosage , Glucaric Acid/administration & dosage , Kidney Failure, Chronic/therapy , Renal Dialysis , Renal Insufficiency, Chronic/drug therapy , Administration, Intravenous , Aged , Anemia/etiology , Female , Ferric Oxide, Saccharated , Hematinics/administration & dosage , Humans , Kidney Failure, Chronic/complications , Male , Middle Aged , Myocardial Infarction/etiology , Myocardial Infarction/prevention & control , Prognosis , Proportional Hazards Models , Renal Insufficiency, Chronic/etiology , Stroke/etiology , Stroke/prevention & control , United States
13.
Am J Kidney Dis ; 66(1): 106-13, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25943715

ABSTRACT

BACKGROUND: Adequately powered studies directly comparing hard clinical outcomes of darbepoetin alfa (DPO) versus epoetin alfa (EPO) in patients undergoing dialysis are lacking. STUDY DESIGN: Observational, registry-based, retrospective cohort study; we mimicked a cluster-randomized trial by comparing mortality and cardiovascular events in US patients initiating hemodialysis therapy in facilities (almost) exclusively using DPO versus EPO. SETTING & PARTICIPANTS: Nonchain US hemodialysis facilities; each facility switching from EPO to DPO (2003-2010) was matched for location, profit status, and facility type with one EPO facility. Patients subsequently initiating hemodialysis therapy in these facilities were assigned their facility-level exposure. INTERVENTION: DPO versus EPO. OUTCOMES: All-cause mortality, cardiovascular mortality; composite of cardiovascular death, nonfatal myocardial infarction (MI), and nonfatal stroke. MEASUREMENTS: Unadjusted and adjusted HRs from Cox proportional hazards regression models. RESULTS: Of 508 dialysis facilities that switched to DPO, 492 were matched with a similar EPO facility; 19,932 (DPO: 9,465 [47.5%]; EPO: 10,467 [52.5%]) incident hemodialysis patients were followed up for 21,918 person-years during which 5,550 deaths occurred. Almost all baseline characteristics were tightly balanced. The demographics-adjusted mortality HR for DPO (vs EPO) was 1.06 (95% CI, 1.00-1.13) and was materially unchanged after adjustment for all other baseline characteristics (HR, 1.05; 95% CI, 0.99-1.12). Cardiovascular mortality did not differ between groups (HR, 1.05; 95% CI, 0.94-1.16). Nonfatal outcomes were evaluated among 9,455 patients with fee-for-service Medicare: 4,542 (48.0%) in DPO and 4,913 (52.0%) in EPO facilities. During 10,457 and 10,363 person-years, 248 and 372 events were recorded, respectively, for strokes and MIs. 
We found no differences in adjusted stroke or MI rates or their composite with cardiovascular death (HR, 1.10; 95% CI, 0.96-1.25). LIMITATIONS: Nonrandom treatment assignment, potential residual confounding. CONCLUSIONS: In incident hemodialysis patients, mortality and cardiovascular event rates did not differ between patients treated at facilities predominantly using DPO versus EPO.


Subject(s)
Erythropoietin/analogs & derivatives , Hematinics/therapeutic use , Renal Insufficiency, Chronic/mortality , Aged , Ambulatory Care Facilities , Anemia/drug therapy , Anemia/etiology , Cardiovascular Diseases/mortality , Cause of Death , Comorbidity , Darbepoetin alfa , Epoetin Alfa , Erythropoietin/adverse effects , Erythropoietin/pharmacokinetics , Erythropoietin/therapeutic use , Female , Hemodialysis Units, Hospital , Humans , Kidney Failure, Chronic/complications , Kidney Failure, Chronic/mortality , Kidney Failure, Chronic/therapy , Male , Middle Aged , Myocardial Infarction/epidemiology , Proportional Hazards Models , Recombinant Proteins/therapeutic use , Registries , Renal Dialysis , Renal Insufficiency, Chronic/complications , Retrospective Studies , Stroke/epidemiology , Treatment Outcome , United States/epidemiology
14.
Am J Nephrol ; 40(4): 300-7, 2014.
Article in English | MEDLINE | ID: mdl-25341418

ABSTRACT

BACKGROUND: Heparin is commonly given during hemodialysis (HD). Patients undergoing HD have a high rate of gastrointestinal bleeding (GIB). It is unclear whether or when it is safe to give heparin after acute GIB. We describe the patterns and safety of heparin use with outpatient HD following an acute GIB. METHODS: We identified patients aged ≥67 years who, between 2004 and 2008, experienced GIB requiring hospitalization within 2 days of receiving maintenance HD with heparin. We used Cox regression to estimate the risk of recurrent GIB and death associated with receiving heparin the day they resumed outpatient HD post-GIB. RESULTS: Of the 1,342 patients who had GIB, 1,158 (86%) received heparin at a median dose of 4,000 units with their first outpatient HD session after discharge from GIB. On average, their post-GIB doses were slightly lower than their pre-GIB doses (mean change: -214 ± 3,266 units, p < 0.02). However, only 27% of patients had a decrease in their dose, while 21% had their dose increased. We did not find an increased risk of death or recurrent GIB associated with using heparin post-GIB (HR; 95% confidence interval (CI), for death: 1.01; 0.69-1.48; for recurrent GIB: 0.78; 0.39-1.57). CONCLUSIONS: The vast majority of these high-risk patients received heparin on the very first day they resumed outpatient HD post-GIB, and the majority at doses unchanged from those received pre-GIB. Even if the practice was not associated with increased risks of death or re-bleeding, it highlights an area for possible system-based improvement to the care of patients on HD.


Subject(s)
Anticoagulants/administration & dosage , Gastrointestinal Hemorrhage/prevention & control , Heparin/administration & dosage , Renal Dialysis , Aged , Aged, 80 and over , Contraindications , Female , Gastrointestinal Hemorrhage/chemically induced , Humans , Kidney Failure, Chronic/complications , Kidney Failure, Chronic/therapy , Male , Retrospective Studies
15.
Pharmacoepidemiol Drug Saf ; 23(5): 515-25, 2014 May.
Article in English | MEDLINE | ID: mdl-24677688

ABSTRACT

PURPOSE: Heparin is commonly used to anticoagulate the hemodialysis (HD) circuit. Despite the bleeding risk, no American standards exist for its administration. We identified correlates and quantified sources of variance in heparin dosing for HD. METHODS: We performed a cross-sectional study of patients aged 67 years or older who underwent HD with heparin on one of two randomly chosen days in 2008 at a national chain of dialysis facilities. Using a mixed effects model with random intercept for facility and fixed patient and facility characteristics, we examined heparin dosing at patient and facility levels. RESULTS: The median heparin dose among the 17 722 patients treated in 1366 facilities was 4000 (25th-75th percentile: 2625-6000) units. In multivariable-adjusted analyses, higher weight, longer session duration, catheter use, and dialyzer reuse were significantly associated with higher heparin dose. Dose also varied considerably among census divisions. Of the overall variance in dose, 21% was due to between-facility differences, independent of facilities' case mix, geography, size, or rurality; 79% was due to differences at the patient level. The patient and facility characteristics in our model explained only 25% of the variance at the patient level. CONCLUSIONS: Despite the lack of standards for heparin administration, we noted patterns of use, including weight-based and time-dependent dosing. Most of the variance was at the patient level; however, only a quarter of it could be explained. The high amount of unexplained variance suggests that factors other than clinical need are driving heparin dosing and that there is likely room for more judicious dosing of heparin.
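The reported 21%/79% split is an intraclass-correlation-type decomposition of the variance components from the random-intercept model. A minimal sketch of that calculation:

```python
def facility_variance_share(var_between_facility, var_within_facility):
    """Intraclass correlation: fraction of total outcome variance attributable
    to between-facility differences in a random-intercept mixed model."""
    return var_between_facility / (var_between_facility + var_within_facility)

# With variance components in the abstract's proportions
# (21% between facilities, 79% between patients within facilities):
share = facility_variance_share(0.21, 0.79)  # 0.21
```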


Subject(s)
Anticoagulants/administration & dosage , Hemorrhage/chemically induced , Heparin/administration & dosage , Renal Dialysis/methods , Aged , Aged, 80 and over , Anticoagulants/adverse effects , Cross-Sectional Studies , Dose-Response Relationship, Drug , Female , Heparin/adverse effects , Humans , Male , Time Factors , United States
16.
J Am Soc Nephrol ; 25(6): 1321-9, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24652791

ABSTRACT

The proportion of low-income nonelderly adults covered by Medicaid varies widely by state. We sought to determine whether broader state Medicaid coverage, defined as the proportion of each state's low-income nonelderly adult population covered by Medicaid, associates with lower state-level incidence of ESRD and greater access to care. The main outcomes were incidence of ESRD and five indicators of access to care. We identified 408,535 adults aged 20-64 years, who developed ESRD between January 1, 2001, and December 31, 2008. Medicaid coverage among low-income nonelderly adults ranged from 12.2% to 66.0% (median 32.5%). For each additional 10% of the low-income nonelderly population covered by Medicaid, there was a 1.8% (95% confidence interval, 1.0% to 2.6%) decrease in ESRD incidence. Among nonelderly adults with ESRD, gaps in access to care between those with private insurance and those with Medicaid were narrower in states with broader coverage. For a 50-year-old white woman, the access gap to the kidney transplant waiting list between Medicaid and private insurance decreased by 7.7 percentage points in high (>45%) versus low (<25%) Medicaid coverage states. Similarly, the access gap to transplantation decreased by 4.0 percentage points and the access gap to peritoneal dialysis decreased by 3.8 percentage points in high Medicaid coverage states. In conclusion, states with broader Medicaid coverage had a lower incidence of ESRD and smaller insurance-related access gaps.


Subject(s)
Health Services Accessibility/statistics & numerical data , Kidney Failure, Chronic/economics , Kidney Failure, Chronic/epidemiology , Medicaid/statistics & numerical data , Adult , Cardiovascular Diseases/economics , Cardiovascular Diseases/epidemiology , Female , Humans , Incidence , Insurance, Health/statistics & numerical data , Kidney Failure, Chronic/therapy , Kidney Transplantation/economics , Kidney Transplantation/statistics & numerical data , Male , Medically Uninsured/statistics & numerical data , Middle Aged , Poverty/economics , Poverty/statistics & numerical data , Registries/statistics & numerical data , Renal Dialysis/economics , Renal Dialysis/statistics & numerical data , Socioeconomic Factors , United States/epidemiology , Young Adult
17.
JAMA Intern Med ; 174(5): 699-707, 2014 May.
Article in English | MEDLINE | ID: mdl-24589911

ABSTRACT

IMPORTANCE: Anemia is common in patients with advanced chronic kidney disease. Whereas the treatment of anemia in patients with end-stage renal disease (ESRD) has attracted considerable attention, relatively little is known about patterns and trends in the anemia care received by patients before they start maintenance dialysis or undergo preemptive kidney transplantation. OBJECTIVE: To determine the trends in anemia treatment received by Medicare beneficiaries approaching ESRD. DESIGN, SETTING, AND PARTICIPANTS: Closed cohort study in the United States using national ESRD registry data (US Renal Data System) of patients 67 years or older who initiated maintenance dialysis or underwent preemptive kidney transplantation between 1995 and 2010. All eligible patients had uninterrupted Medicare (A+B) coverage for at least 2 years before ESRD. EXPOSURE: Time, defined as calendar year of incident ESRD. MAIN OUTCOMES AND MEASURES: Use of erythropoiesis-stimulating agents (ESA), intravenous iron supplements, and blood transfusions in the 2 years prior to ESRD; hemoglobin concentration at the time of ESRD. We used multivariable modified Poisson regression to estimate utilization prevalence ratios (PRs). RESULTS: Records of 466,803 patients were analyzed. The proportion of patients with incident ESRD receiving any ESA in the 2 years before ESRD increased from 3.2% in 1995 to a peak of 40.8% in 2007; thereafter, ESA use decreased modestly to 35.0% in 2010 (compared with 1995; PR, 9.85 [95% CI, 9.04-10.74]). Among patients who received an ESA, median time from first recorded ESA use to ESRD increased from 120 days in 1995 to 337 days in 2010. Intravenous iron administration increased from 1.2% (1995) to 12.3% (2010; PR, 9.20 [95% CI, 7.97-10.61]). The proportion of patients receiving any blood transfusions increased monotonically from 20.6% (1995) to 40.3% (2010; PR, 1.88 [95% CI, 1.82-1.95]). Mean hemoglobin concentrations were 9.5 g/dL in 1995, increased to a peak of 10.3 g/dL in 2006, and then decreased moderately to 9.9 g/dL in 2010. CONCLUSIONS AND RELEVANCE: Between 1995 and 2010, older adults approaching ESRD were increasingly likely to be treated with ESAs and to receive intravenous iron supplementation, but also more likely to receive blood transfusions.
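The prevalence ratios above come from a modified Poisson regression, which estimates the same quantity as a simple ratio of proportions but with covariate adjustment. A minimal unadjusted sketch with a Wald confidence interval on the log scale; the counts below are illustrative, not the study's:

```python
# Hypothetical sketch: unadjusted prevalence ratio with an approximate 95% CI
# on the log scale. All counts are illustrative.
import math

def prevalence_ratio(events1, n1, events0, n0, z=1.96):
    """PR of group 1 vs group 0 with an approximate 95% Wald CI."""
    p1, p0 = events1 / n1, events0 / n0
    pr = p1 / p0
    # standard error of log(PR) for two independent proportions
    se = math.sqrt((1 - p1) / events1 + (1 - p0) / events0)
    lo, hi = pr * math.exp(-z * se), pr * math.exp(z * se)
    return pr, lo, hi

# e.g. ESA use in a later vs an earlier year (illustrative counts only)
pr, lo, hi = prevalence_ratio(350, 1000, 32, 1000)
print(round(pr, 2), round(lo, 2), round(hi, 2))
```

The modified Poisson approach (Poisson regression with robust standard errors) recovers this ratio scale even when the outcome is common, which is why the study reports PRs rather than odds ratios.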


Subject(s)
Anemia/etiology , Blood Transfusion/statistics & numerical data , Hematinics/therapeutic use , Iron/administration & dosage , Practice Patterns, Physicians'/trends , Renal Insufficiency, Chronic/complications , Aged , Aged, 80 and over , Anemia/drug therapy , Cohort Studies , Darbepoetin alfa , Epoetin Alfa , Erythropoietin/analogs & derivatives , Erythropoietin/therapeutic use , Hemoglobin A/analysis , Humans , Incidence , Kidney Failure, Chronic/therapy , Medicare , Recombinant Proteins/therapeutic use , Registries , United States
18.
Clin J Am Soc Nephrol ; 9(1): 82-91, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24178968

ABSTRACT

BACKGROUND AND OBJECTIVES: Sudden cardiac death is the most common cause of death among individuals undergoing hemodialysis. The epidemiology of sudden cardiac death has been well studied, and efforts are shifting to risk assessment. This study aimed to test whether assessment of acute changes during hemodialysis that are captured in electronic health records improved risk assessment. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Data were collected from all hemodialysis sessions of patients 66 years and older receiving hemodialysis from a large national dialysis provider between 2004 and 2008. The primary outcome of interest was sudden cardiac death the day of or day after a dialysis session. This study used data from 2004 to 2006 as the training set and data from 2007 to 2008 as the validation set. The machine learning algorithm, Random Forests, was used to derive the prediction model. RESULTS: Across 22 million sessions, 898 people died on the day of or day after a dialysis session in the 2004-2006 training data, and 826 in the 2007-2008 validation data. A reasonably strong predictor was derived using just predialysis information (concordance statistic=0.782), which showed modest but significant improvement after inclusion of postdialysis information (concordance statistic=0.799, P<0.001). However, risk prediction decreased the farther out that it was forecasted (up to 1 year), and postdialytic information became less important. CONCLUSION: Subtle changes in the experience of hemodialysis aid in the assessment of sudden cardiac death and are captured by modern electronic health records. The collected data are better for the assessment of near-term risk as opposed to longer-term risk.
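The concordance statistics (0.782 vs. 0.799) used to compare the models are the probability that a randomly chosen event gets a higher predicted risk than a randomly chosen non-event. A minimal sketch of the pairwise computation on toy predictions (all values hypothetical):

```python
# Hypothetical sketch: the concordance statistic (c-statistic) for a binary
# outcome, computed by comparing predicted risks across event/non-event pairs.
def concordance(risks, outcomes):
    """risks: predicted probabilities; outcomes: 1 = event, 0 = no event."""
    event_risks = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevent_risks = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for re in event_risks:
        for rn in nonevent_risks:
            pairs += 1
            if re > rn:
                concordant += 1      # model ranks the event higher: concordant
            elif re == rn:
                concordant += 0.5    # ties count half
    return concordant / pairs

# toy predictions and outcomes (illustrative only)
c = concordance([0.9, 0.8, 0.3, 0.2, 0.1], [1, 1, 0, 0, 1])
print(round(c, 2))
```

A c-statistic of 0.5 means the model ranks no better than chance; 1.0 means every event is ranked above every non-event, which puts the study's 0.78-0.80 in context.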


Subject(s)
Data Mining , Death, Sudden, Cardiac/epidemiology , Electronic Health Records , Kidney Diseases/mortality , Kidney Diseases/therapy , Renal Dialysis/mortality , Age Factors , Aged , Aged, 80 and over , Algorithms , Artificial Intelligence , Cause of Death , Female , Humans , Kidney Diseases/diagnosis , Logistic Models , Male , Renal Dialysis/adverse effects , Risk Assessment , Risk Factors , Time Factors , Treatment Outcome , United States/epidemiology
19.
Clin J Am Soc Nephrol ; 8(12): 2149-57, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24115195

ABSTRACT

BACKGROUND AND OBJECTIVES: Hispanic patients undergoing chronic dialysis are less likely to receive a kidney transplant compared with non-Hispanic whites. This study sought to elucidate disparities in the path to receipt of a deceased donor transplant between Hispanics and non-Hispanic whites. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Using the US Renal Data System, 417,801 Hispanic and non-Hispanic white patients who initiated dialysis between January 1, 1995 and December 31, 2007 with follow-up through 2008 were identified. This study investigated time from first dialysis to first kidney transplantation, time from first dialysis to waitlisting, and time from waitlisting to kidney transplantation. Multivariable Cox regression estimated cause-specific hazard ratios (HRCS) and subdistribution (competing risk) hazard ratios (HRSD) for Hispanics versus non-Hispanic whites. RESULTS: Hispanics experienced lower adjusted rates of deceased donor kidney transplantation than non-Hispanic whites (HRCS, 0.77; 95% confidence interval [95% CI], 0.75 to 0.80) measured from dialysis initiation. No meaningful differences were found in time from dialysis initiation to placement on the transplant waitlist. Once waitlisted, Hispanics had lower adjusted rates of deceased donor kidney transplantation (HRCS, 0.66; 95% CI, 0.64 to 0.68), and the association attenuated once accounting for competing risks (HRSD, 0.79; 95% CI, 0.77 to 0.81). Additionally controlling for blood type and organ procurement organization further reduced the disparity (HRSD, 0.99; 95% CI, 0.96 to 1.02). CONCLUSIONS: After accounting for geographic location and controlling for competing risks (e.g., Hispanic survival advantage), the disparity in access to deceased donor transplantation was markedly attenuated among Hispanics compared with non-Hispanic whites. To overcome the geographic disparities that Hispanics encounter in the path to transplantation, organ allocation policy revisions are needed to improve donor organ equity.
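The distinction between cause-specific and subdistribution hazards matters because death competes with transplantation: subjects who die can never be transplanted. The competing-risk quantity being modeled is the cumulative incidence function, which a nonparametric Aalen-Johansen estimator computes directly. A minimal sketch on toy data (all times and event codes below are illustrative):

```python
# Hypothetical sketch: nonparametric cumulative incidence of transplant (cause 1)
# in the presence of the competing risk of death (cause 2), Aalen-Johansen style.
def cumulative_incidence(times, events, cause):
    """events: 0 = censored, 1 = transplant, 2 = death (competing event)."""
    data = sorted(zip(times, events))
    surv = 1.0       # overall event-free survival just before time t
    cif = 0.0        # cumulative incidence of `cause`
    at_risk = len(data)
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = sum(1 for tt, ee in data if tt == t and ee == cause)
        d_any = sum(1 for tt, ee in data if tt == t and ee != 0)
        removed = sum(1 for tt, ee in data if tt == t)
        cif += surv * d_cause / at_risk   # chance of reaching t, then this event
        surv *= 1 - d_any / at_risk       # update overall event-free survival
        at_risk -= removed
        i += removed
    return cif

# toy follow-up data (illustrative only)
times = [1, 2, 2, 3, 4, 5, 6]
events = [1, 2, 1, 0, 2, 1, 0]
cif = cumulative_incidence(times, events, cause=1)
print(round(cif, 3))
```

Treating deaths as ordinary censoring (the naive 1 minus Kaplan-Meier approach) would overstate transplant incidence, which is why the abstract's cause-specific and subdistribution hazard ratios differ.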


Subject(s)
Catchment Area, Health , Health Services Accessibility , Healthcare Disparities/ethnology , Hispanic or Latino , Kidney Failure, Chronic/ethnology , Kidney Failure, Chronic/surgery , Kidney Transplantation , White People , Adolescent , Adult , Aged , Chi-Square Distribution , Female , Health Services Needs and Demand , Humans , Incidence , Kidney Failure, Chronic/diagnosis , Male , Middle Aged , Multivariate Analysis , Prevalence , Proportional Hazards Models , Renal Dialysis , Residence Characteristics , Risk Factors , Time Factors , Tissue and Organ Procurement , United States/epidemiology , Waiting Lists , Young Adult
20.
Anesthesiology ; 119(2): 284-94, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23695172

ABSTRACT

BACKGROUND: Heart failure (HF) is a leading cause of hospitalization and mortality. Plasma B-type natriuretic peptide (BNP) is an established diagnostic and prognostic ambulatory HF biomarker. We hypothesized that increased perioperative BNP independently associates with HF hospitalization or HF death up to 5 yr after coronary artery bypass graft surgery. METHODS: The authors conducted a two-institution, prospective, observational study of 1,025 subjects (mean age = 64 ± 10 yr SD) undergoing isolated primary coronary artery bypass graft surgery with cardiopulmonary bypass. Plasma BNP was measured preoperatively and on postoperative days 1-5. The study outcome was hospitalization or death from HF, with HF events confirmed by reviewing hospital and death records. Cox proportional hazards analyses were performed with multivariable adjustments for clinical risk factors. Preoperative and peak postoperative BNP were added to the multivariable clinical model in order to assess additional predictive benefit. RESULTS: One hundred five subjects experienced an HF event (median time to first event = 1.1 yr). Median follow-up for subjects who did not have an HF event = 4.2 yr. When individually added to the multivariable clinical model, higher preoperative and peak postoperative BNP concentrations each independently associated with the HF outcome (log10 preoperative BNP hazard ratio = 1.93; 95% CI, 1.30-2.88; P = 0.001; log10 peak postoperative BNP hazard ratio = 3.38; 95% CI, 1.45-7.65; P = 0.003). CONCLUSIONS: Increased perioperative BNP concentrations independently associate with HF hospitalization or HF death during the 5 yr after primary coronary artery bypass graft surgery. Clinical trials may be warranted to assess whether medical management focused on reducing preoperative and longitudinal postoperative BNP concentrations associates with decreased HF after coronary artery bypass graft surgery.


Subject(s)
Coronary Artery Bypass/adverse effects , Heart Failure/blood , Heart Failure/epidemiology , Hospitalization/statistics & numerical data , Natriuretic Peptide, Brain/blood , Perioperative Period/statistics & numerical data , Postoperative Complications/epidemiology , Aged , Biomarkers , Boston/epidemiology , Coronary Artery Bypass/methods , Female , Follow-Up Studies , Heart Failure/etiology , Heart Failure/mortality , Humans , Male , Postoperative Complications/blood , Postoperative Complications/etiology , Postoperative Complications/mortality , Proportional Hazards Models , Prospective Studies , Risk Factors , Survival Analysis , Texas/epidemiology