1.
Am J Transplant; 21(11): 3593-3607, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34254434

ABSTRACT

The OPTN's simultaneous liver-kidney (SLK) allocation policy, implemented August 10, 2017, established medical eligibility criteria for adult SLK candidates and created Safety Net kidney allocation priority for liver-alone recipients with new or continued renal impairment. OPTN SLK and kidney after liver (KAL) data were analyzed (registrations as of December 31, 2019; transplants pre-policy [March 20, 2015-August 9, 2017] vs. post-policy [August 10, 2017-December 31, 2019]). Ninety-four percent of SLK registrations met eligibility criteria (99% via the chronic kidney disease [CKD] criterion: 50% on dialysis, 50% by eGFR). SLK transplant volume decreased from a record 740 (2017) to 676 (2018, -9%), with a subsequent increase to 728 (2019, 1.6% below the 2017 volume). For KAL listings within 1 year of liver transplant, waitlist mortality rates declined post-policy (pre-policy 27 [95% CI = 20.6-34.7] vs. post-policy 16 [11.7-20.5]), while transplant rates increased more than fourfold (pre-policy 46 [32.2-60.0] vs. post-policy 197 [171.6-224.7]). There were 234 KAL transplants post-policy (94% Safety Net priority eligible), and no significant difference in 1-year patient/graft survival vs. kidney-alone (patient: 95.9% KAL, 97.0% kidney-alone [p = .39]; graft: 94.2% KAL, 94.6% kidney-alone [p = .81]). From pre- to post-policy, the proportion of all deceased donor kidney and liver transplants that were SLK decreased (kidney: 5.1% to 4.3%; liver: 9.7% to 8.7%). SLK policy implementation interrupted the longstanding rise in SLK transplants, while Safety Net priority directed kidneys to liver recipients in need, with thus far minimal impact on posttransplant outcomes.
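A minimal sketch of how event rates such as the waitlist mortality and transplant rates above can be computed with confidence intervals, using exact (Garwood) Poisson intervals; the event counts, exposure times, and the per-100-patient-year unit are illustrative assumptions, not values from the OPTN analysis.

```python
# Hedged sketch: per-100-patient-year event rates with exact (Garwood) Poisson
# confidence intervals. Counts and exposures below are invented for illustration.
from scipy.stats import chi2

def poisson_rate_ci(events, exposure_py, alpha=0.05, per=100.0):
    """Return (rate, lower, upper) per `per` patient-years using the exact Poisson interval."""
    rate = events / exposure_py * per
    lower = (chi2.ppf(alpha / 2, 2 * events) / (2 * exposure_py) * per) if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * exposure_py) * per
    return rate, lower, upper

# Illustrative counts only: 54 waitlist deaths over 200 patient-years pre-policy
# versus 40 deaths over 250 patient-years post-policy.
for label, events, exposure in [("pre-policy", 54, 200.0), ("post-policy", 40, 250.0)]:
    rate, lo, hi = poisson_rate_ci(events, exposure)
    print(f"{label}: {rate:.0f} per 100 patient-years (95% CI {lo:.1f}-{hi:.1f})")
```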


Subject(s)
Kidney Transplantation , Liver Transplantation , Tissue and Organ Procurement , Adult , Graft Survival , Humans , Kidney , Liver , Policy , Risk Factors
2.
Am J Transplant; 21(2): 689-702, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32627325

ABSTRACT

Despite clinical and laboratory screening of potential donors for transmissible disease, unexpected transmission of disease from donor to recipient remains an inherent risk of organ transplantation. The Disease Transmission Advisory Committee (DTAC) was created to review and classify reports of potential disease transmission and use this information to inform national policy and improve patient safety. From January 1, 2008 to December 31, 2017, the DTAC received 2185 reports; 335 (15%) were classified as a proven/probable donor transmission event. Infections were transmitted most commonly (67%), followed by malignancies (29%) and other disease processes (6%). Among recipients of organs from donors who transmitted disease to at least 1 recipient, 46% developed a donor-derived disease (DDD). Sixty-seven percent of recipients developed symptoms of DDD within 30 days of transplantation, and all bacterial infections were recognized within 45 days. Graft loss or death occurred in about one third of recipients with DDD, with higher rates associated with malignancy transmission and parasitic and fungal diseases. Unexpected DDD was rare, occurring in 0.18% of all transplant recipients. These findings will help focus future efforts to recognize and prevent DDD.


Subject(s)
Communicable Diseases , Organ Transplantation , Advisory Committees , Communicable Diseases/etiology , Humans , Organ Transplantation/adverse effects , Tissue Donors , Transplant Recipients
4.
Am J Transplant; 21(6): 2100-2112, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33244847

ABSTRACT

COVID-19 has been sweeping the globe, hitting the United States particularly hard, with a state of emergency declared on March 13, 2020. Transplant hospitals have taken various precautions to protect patients from potential exposure. OPTN donor, candidate, and transplant data were analyzed from January 5, 2020 to September 5, 2020. The number of new waiting list registrations decreased, with the Northeast seeing a decrease of more than 50% between the week of 3/8 and the week of 4/5. The national transplant system saw a near cessation of living donor transplantation (-90%) from the week of 3/8 to the week of 4/5. Similarly, deceased donor kidney transplant volume dropped from 367 to 202 (-45%), and other organs saw similar decreases: lung (-70%), heart (-43%), and liver (-37%). Deceased donors recovered dropped from 260 to 163 (-45%) over the same period, including a 67% decrease for lungs recovered. The magnitude of this decrease varied by geographic area, with the largest percent change (-67%) in the Northeast. Despite the pandemic, discard rates across organs have remained stable. Although the COVID-19 pandemic continues to evolve, OPTN data show recent evidence of stabilization, indicating that an early recovery in the numbers of living and deceased donors and transplants is under way.


Subject(s)
COVID-19 , Organ Transplantation , Tissue and Organ Procurement , Humans , Pandemics , SARS-CoV-2 , Tissue Donors , United States/epidemiology , Waiting Lists
5.
Am J Transplant; 21(6): 2161-2174, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33140571

ABSTRACT

Kidney-alone transplant (KAT) candidates may be disadvantaged by the allocation priority given to multi-organ transplant (MOT) candidates. This study identified potential KAT candidates who did not receive a given kidney offer because it was allocated to an MOT candidate. Using the Organ Procurement and Transplant Network (OPTN) database, we identified deceased donors from 2002 to 2017 who had one kidney allocated for MOT and the other kidney allocated for KAT or simultaneous pancreas-kidney transplant (SPK) (n = 7,378). Potential transplant recipient data were used to identify the "next-sequential KAT candidate" who would have received a given kidney offer had it not been allocated to a higher-prioritized MOT candidate. In this analysis, next-sequential KAT candidates were younger (p < .001), more likely to be racial/ethnic minorities (p < .001), and more highly sensitized than MOT recipients (p < .001). A total of 2,113 (28.6%) next-sequential KAT candidates subsequently either died or were removed from the waiting list without receiving a transplant. In a multivariable model, despite adjacent position on the kidney match run, mortality risk was significantly higher for next-sequential KAT candidates than for KAT/SPK recipients (hazard ratio 1.55, 95% confidence interval 1.44-1.66). These results highlight the implications of MOT allocation prioritization and the potential consequences for KAT candidates prioritized below MOT candidates.
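A minimal sketch of the "next-sequential KAT candidate" construction described above: walk the ordered kidney match run and take the first kidney-alone candidate listed after the multi-organ candidate who accepted the kidney. The field names and listing-type codes are assumptions, not the OPTN potential transplant recipient schema.

```python
# Hedged sketch: identify the next-sequential kidney-alone candidate on a match run.
from dataclasses import dataclass

@dataclass
class MatchRunEntry:
    sequence: int          # position on the match run (1 = highest priority)
    candidate_id: str
    listing_type: str      # "KAT", "SPK", or "MOT" (e.g., SLK, heart-kidney)
    accepted: bool         # True for the candidate who received this kidney

def next_sequential_kat(match_run: list[MatchRunEntry]) -> MatchRunEntry | None:
    """Return the first KAT candidate appearing after the accepting MOT candidate."""
    ordered = sorted(match_run, key=lambda e: e.sequence)
    past_mot_recipient = False
    for entry in ordered:
        if entry.accepted and entry.listing_type == "MOT":
            past_mot_recipient = True
            continue
        if past_mot_recipient and entry.listing_type == "KAT":
            return entry
    return None
```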


Subject(s)
Kidney Transplantation , Organ Transplantation , Pancreas Transplantation , Tissue and Organ Procurement , Humans , Tissue Donors , Waiting Lists
6.
Am J Transplant; 19(9): 2594-2605, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31207040

ABSTRACT

The HIV Organ Policy Equity (HOPE) Act, enacted on November 21, 2013, enables research on the transplantation of organs from donors infected with human immunodeficiency virus (HIV+) into recipients who are already HIV+ prior to transplantation. On November 21, 2015, the Organ Procurement and Transplantation Network revised organ allocation policies, and on November 23, 2015, the Secretary of Health and Human Services published research criteria and revised the Final Rule accordingly. The HOPE Act appears to be underutilized to date. As of December 31, 2018, 56 donors had been recovered (50 with at least one organ transplanted), resulting in 102 organs transplanted (31 liver, 71 kidney). As of the same date, 212 waiting list registrations indicated willingness to accept an HIV+ kidney or liver, most of them in active status. Due to the limited number of transplants performed to date, definitive safety conclusions cannot be reached at this time, though current data suggest that 1-year patient and graft survival does not deviate substantially from that observed in HIV+ recipients of non-HIV+ organs or in non-HIV+ recipients. As safety data are reviewed and disseminated, it is anticipated that HOPE participation will increase, provided safety signals remain low.


Subject(s)
HIV Infections/transmission , Organ Transplantation/legislation & jurisprudence , Organ Transplantation/standards , Tissue and Organ Procurement/legislation & jurisprudence , Tissue and Organ Procurement/standards , Adult , Female , Graft Survival , HIV Infections/epidemiology , Health Policy , Humans , Kidney/virology , Kidney Transplantation , Liver/virology , Liver Transplantation , Male , Middle Aged , Patient Safety , Registries , Tissue Donors , Treatment Outcome , United States/epidemiology , Waiting Lists , Young Adult
7.
Pharm Stat; 18(5): 533-545, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31069929

ABSTRACT

Cost and burden of diagnostic testing may be reduced if fewer tests can be applied. Sequential testing involves selecting a sequence of tests but administering subsequent tests only when indicated by the results of previous tests. This research provides guidance for choosing among single tests and the believe-the-positive (BP) and believe-the-negative (BN) sequential testing strategies, using accuracy (as measured by the Youden Index) as the primary determinant. Approximately 75% of the parameter combinations examined resulted in either BP or BN being recommended based on a higher accuracy at the optimal point. In about half of these scenarios BP was preferred and in the other half BN, with the choice often a function of b, the ratio of the standard deviations of test results in those without versus with disease. Large values of b for the first test of the sequence tended to be associated with a preference for BN over BP, while small values of b appeared to favor BP. When there was no preference between sequences and/or single tests based on the Youden Index, the cost of the sequence was considered. In this case, disease prevalence plays a large role in the selection of strategies, with lower values favoring BN and sometimes higher values favoring BP. The cost threshold at which a sequential strategy was preferred over a single, more accurate test was often quite high. It appears that although sequential strategies most often increase diagnostic accuracy over a single test, they are not always preferred.
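A minimal sketch of the BP and BN rules and the Youden Index, assuming the two tests are conditionally independent given disease status; the sensitivities and specificities below are illustrative and not taken from the paper.

```python
# Hedged sketch: composite sensitivity/specificity of the two sequential rules,
# assuming conditional independence of the tests given disease status, and the
# Youden Index J = sensitivity + specificity - 1. Inputs are illustrative.

def believe_the_positive(se1, sp1, se2, sp2):
    """BP: call positive if test 1 is positive; otherwise run test 2 and believe it.
    A subject is negative only if both tests are negative."""
    return se1 + (1 - se1) * se2, sp1 * sp2

def believe_the_negative(se1, sp1, se2, sp2):
    """BN: call negative if test 1 is negative; otherwise run test 2.
    A subject is positive only if both tests are positive."""
    return se1 * se2, sp1 + (1 - sp1) * sp2

def youden(se, sp):
    return se + sp - 1

# Illustrative accuracies for two hypothetical tests.
se1, sp1, se2, sp2 = 0.85, 0.75, 0.70, 0.90
for name, rule in [("BP", believe_the_positive), ("BN", believe_the_negative)]:
    se, sp = rule(se1, sp1, se2, sp2)
    print(f"{name}: Se={se:.3f}, Sp={sp:.3f}, Youden J={youden(se, sp):.3f}")
print(f"Single test 1: J={youden(se1, sp1):.3f}; single test 2: J={youden(se2, sp2):.3f}")
```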


Subject(s)
Diagnostic Techniques and Procedures , Diagnostic Tests, Routine/methods , Cost-Benefit Analysis , Diagnostic Techniques and Procedures/economics , Diagnostic Tests, Routine/economics , Humans , Reproducibility of Results
8.
Am J Transplant; 18(5): 1129-1139, 2018 May.
Article in English | MEDLINE | ID: mdl-29392849

ABSTRACT

We studied End-Stage Renal Disease (ESRD) in living kidney donors (LKDs) who donated in the United States between 1994 and 2016 (n = 123,526), using Organ Procurement and Transplantation Network and Centers for Medicare and Medicaid Services data. Two hundred eighteen LKDs developed ESRD, with a median of 11.1 years between donation and ESRD. Absolute 20-year risk was low but not uniform, with risk associated with race, age, and sex and increasing exponentially over time. LKDs had increased risk of ESRD if they were male (adjusted hazard ratio [aHR]: 1.75, 95% confidence interval [95% CI]: 1.33-2.31), had higher BMI (aHR: 1.34 per 5 kg/m², 95% CI: 1.10-1.64) or lower estimated GFR (aHR: 0.89 per 10 mL/min, 95% CI: 0.80-0.99), were first-degree relatives of the recipient (parent [aHR: 2.01, 95% CI: 1.26-3.21]; full sibling [aHR: 1.87, 95% CI: 1.23-2.84]; identical twin [aHR: 19.79, 95% CI: 7.65-51.24]), or lived in lower socioeconomic status neighborhoods at donation (aHR: 0.87 per $10k increase, 95% CI: 0.77-0.99). We found a significant interaction between donation age and race, with higher risk at older ages for white LKDs (aHR: 1.26 per decade, 95% CI: 1.04-1.54) but higher risk at younger ages for black LKDs (aHR: 0.75 per decade, 95% CI: 0.57-0.99). These findings further inform risk assessment of potential LKDs.
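The age-by-race finding above comes from an interaction term in a Cox model; the sketch below shows the general form of such a model on simulated data using lifelines. The toy data, column names, and covariate scaling are assumptions, not the registry analysis.

```python
# Hedged sketch of a Cox model with a donation-age x race interaction term.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "years_followup": rng.exponential(15, n),        # time to ESRD or censoring (toy)
    "esrd": rng.binomial(1, 0.05, n),                 # event indicator
    "age_decades": rng.normal(4.0, 1.2, n),           # donation age / 10
    "black": rng.binomial(1, 0.13, n),
    "male": rng.binomial(1, 0.4, n),
    "bmi_per5": rng.normal(5.4, 0.8, n),              # BMI / 5
})
df["age_x_black"] = df["age_decades"] * df["black"]   # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="esrd")
cph.print_summary()
# exp(coef) for age_decades is the per-decade hazard ratio when black == 0;
# exp(coef_age_decades + coef_age_x_black) is the per-decade hazard ratio when black == 1.
```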


Subject(s)
Kidney Failure, Chronic/epidemiology , Kidney Transplantation , Living Donors/supply & distribution , Nephrectomy/adverse effects , Risk Assessment/methods , Tissue and Organ Procurement/methods , Adult , Female , Follow-Up Studies , Humans , Incidence , Kidney Failure, Chronic/etiology , Male , Prognosis , Retrospective Studies , Risk Factors , Virginia/epidemiology
9.
J Am Coll Cardiol; 71(1): 40-49, 2018 Jan 2.
Article in English | MEDLINE | ID: mdl-29301626

ABSTRACT

BACKGROUND: Malignancy is a concern in cardiac transplant recipients, but the temporal trends of de novo malignancy development are unknown. OBJECTIVES: The goal of this study was to describe the temporal trends of the incidence, types, and predictors of de novo malignancy in cardiac transplant recipients. METHODS: The authors analyzed the temporal trends of post-transplant incidence, types, and predictors of malignancy using 17,587 primary adult heart-only transplant recipients from the International Society for Heart and Lung Transplantation registry. The main study outcomes included the incidence of, types of, and time to de novo malignancy. RESULTS: The risk of any de novo solid malignancy between years 1 and 5 after transplantation was 10.7%. The cumulative incidence by malignancy type was: skin cancer (7.0%), non-skin solid cancer (4.0%), and lymphoproliferative disorders (0.9%). There was no temporal difference in the time to development according to malignancy type. However, the cumulative incidence of de novo solid malignancy increased from 2000 to 2005 vs. 2006 to 2011 (10.0% vs. 12.4%; p < 0.0001). Survival in patients after de novo malignancy was markedly lower than in patients without malignancy (p < 0.0001). Older recipients and patients who underwent transplantation in the recent era had a higher risk of de novo malignancy. CONCLUSIONS: More than 10% of adult heart transplant recipients developed de novo malignancy between years 1 and 5 after transplantation, and this outcome was associated with increased mortality. The incidence of post-transplant de novo solid malignancy increased temporally, with the largest increase in skin cancer. Individualized immunosuppression strategies and enhanced cancer screening should be studied to determine whether they can reduce the adverse outcomes of post-transplantation malignancy.
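A minimal sketch of estimating cumulative incidence between years 1 and 5 as 1 minus a Kaplan-Meier survival estimate, on simulated data; this ignores the competing risk of death, which a registry analysis may handle differently, so treat it purely as an illustration of the quantity being reported.

```python
# Hedged sketch: cumulative incidence of de novo malignancy between years 1 and 5,
# approximated as 1 - Kaplan-Meier survival on simulated follow-up data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 5000
time_from_year1 = rng.exponential(35, n)   # years from the 1-year anniversary to malignancy (toy)
censor = rng.uniform(0.5, 4.0, n)          # follow-up ends by the 5-year anniversary
observed = (time_from_year1 <= censor).astype(int)
durations = np.minimum(time_from_year1, censor)

kmf = KaplanMeierFitter().fit(durations, event_observed=observed)
cum_incidence = 1 - kmf.predict(4.0)       # risk accumulated between years 1 and 5
print(f"Estimated cumulative incidence, years 1-5: {cum_incidence:.1%}")
```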


Subject(s)
Heart Transplantation/adverse effects , Immunosuppression Therapy , Neoplasms , Postoperative Complications , Adult , Cluster Analysis , Female , Heart Transplantation/methods , Heart Transplantation/statistics & numerical data , Humans , Immunosuppression Therapy/methods , Immunosuppression Therapy/statistics & numerical data , Incidence , Male , Middle Aged , Neoplasms/classification , Neoplasms/diagnosis , Neoplasms/epidemiology , Neoplasms/etiology , Postoperative Complications/diagnosis , Postoperative Complications/epidemiology , Prognosis , Registries/statistics & numerical data , Risk Factors
10.
Transplantation; 101(7): 1666-1669, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28196048

ABSTRACT

BACKGROUND: The Public Health Service "Increased Risk" (PHS IR) designation identifies donors at increased risk of transmitting hepatitis B, C, and human immunodeficiency virus. Although the risk remains very low in the era of nucleic acid testing, we hypothesized that this label may result in decreased organ utilization. METHODS: Organ Procurement and Transplantation Network data were used to compare utilization rates between PHS-IR and non-PHS-IR donors, as well as to compare export rates and variation in utilization. RESULTS: Among adult standard criteria donors between 2010 and 2013 with a known PHS-IR status, covariate-adjusted utilization rates were lower among PHS-IR donors than non-PHS-IR donors for all organs. For example, 4,073 (76.7%) of 5,314 PHS-IR kidneys were used, compared with 25,490 (83.7%) of 30,456 non-PHS-IR kidneys, an absolute difference of 7%. Furthermore, all PHS-IR organs had higher export rates than non-PHS-IR organs. For example, 28.7% of PHS-IR kidneys were exported versus 19.7% of non-PHS-IR kidneys. Finally, the utilization rate of PHS-IR organs varied by Donation Service Area; utilization ranged from 20% to 100% among adult kidneys, suggesting significant variation in practices. Similar patterns were seen among pediatric donors. Based on the covariate-adjusted model, if the PHS-IR label did not exist, there could be an additional 313 transplants performed in the United States each year. CONCLUSIONS: The PHS "increased risk" label appears to be associated with nonutilization of hundreds of organs per year, despite the very low risk of disease transmission. Better tools are needed to communicate the magnitude of risk to patients and their families.
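The "additional transplants" figure above is covariate-adjusted; the sketch below shows only the unadjusted arithmetic behind such an estimate, applying the non-PHS-IR utilization rate to the PHS-IR kidney pool. Only the kidney counts come from the abstract; everything else is a simplification, so the result will not match the paper's 313 per year.

```python
# Back-of-the-envelope sketch: expected use of PHS-IR kidneys if they were used at
# the non-PHS-IR rate, minus actual use. Kidney counts are from the abstract; the
# paper's 313/year is covariate-adjusted, annualized, and spans all organs.
phs_ir_recovered = 5314
phs_ir_used = 4073
non_phs_ir_rate = 25490 / 30456                  # 83.7% utilization of non-PHS-IR kidneys

expected_if_unlabeled = phs_ir_recovered * non_phs_ir_rate
additional_2010_2013 = expected_if_unlabeled - phs_ir_used
print(f"Unadjusted kidney-only estimate, 2010-2013: ~{additional_2010_2013:.0f} additional transplants")
```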


Subject(s)
Donor Selection , HIV Infections/transmission , Hepatitis B/transmission , Hepatitis C/transmission , Organ Transplantation/methods , Process Assessment, Health Care , Tissue Donors/supply & distribution , United States Public Health Service , Adolescent , Adult , Child , Child, Preschool , Databases, Factual , Female , Health Knowledge, Attitudes, Practice , Humans , Infant , Infant, Newborn , Male , Middle Aged , Organ Transplantation/adverse effects , Patient Acceptance of Health Care , Practice Patterns, Physicians' , Risk Assessment , Risk Factors , Time Factors , Tissue and Organ Procurement , Treatment Outcome , United States , Young Adult
11.
Surg Obes Relat Dis; 13(5): 826-834, 2017 May.
Article in English | MEDLINE | ID: mdl-28236529

ABSTRACT

BACKGROUND: Four current bariatric operations were compared after matching patients for differences at baseline. Operations with greater weight loss and resolution of co-morbidities also incurred more adverse events. Reflux was best treated by gastric bypass and type 2 diabetes by duodenal switch. These results can guide decision making regarding choice of bariatric operation. Relative outcomes of common primary bariatric operations have not previously been compared in a large multisite cohort from surgeons in multiple surgical centers. OBJECTIVE: Compare outcomes of primary bariatric operations in a matched national sample. SETTING: Bariatric Surgery Centers of Excellence in the United States. METHODS: Data from the Bariatric Surgery Center of Excellence Data File were queried from June 2007 to September 2011 for 30-day and 1-year adverse events, 1-year weight loss, and comorbidity resolution. Inverse probability weighting accounted for covariate imbalances in multivariable linear/logistic regression estimates of differences/odds ratios for each pairwise surgical procedure comparison. A Bonferroni correction was applied to account for multiple pairwise comparisons. RESULTS: Among 130,796 patients, 57,094 underwent adjustable gastric banding (AGB), 5,942 sleeve gastrectomy (SG), 66,324 Roux-en-Y gastric bypass (RYGB), and 1,436 biliopancreatic diversion with duodenal switch (BPD/DS). Compared with AGB, the change in body mass index units at 1 year was 10.6 for BPD/DS (standard error [SE]: .15), 9.3 for RYGB (SE: .03), and 5.7 for SG (SE: .06). Resolution of gastroesophageal reflux disease (GERD) was best for RYGB (odds ratio [OR] = 1.5, 95% confidence interval [CI]: 1.48-1.58) and lowest for SG (OR = 0.87, 95% CI: .79-.95). Hypertension and type 2 diabetes (T2D) resolution were better after BPD/DS (OR = 3.82, 95% CI: 3.21-4.55, and OR = 5.62, 95% CI: 4.60-6.88, respectively) or after RYGB (OR = 3.08, 95% CI: 2.98-3.18, and OR = 3.5, 95% CI: 3.39-3.64, respectively). Odds of serious adverse events at 1 year were: SG, OR = 3.22, 95% CI: 2.64-3.92; RYGB, OR = 4.92, 95% CI: 4.38-5.54; BPD/DS, OR = 17.47, 95% CI: 14.19-21.52. CONCLUSION: Odds of adverse events and co-morbidity resolution were determined after matching for baseline characteristics. RYGB was associated with the highest resolution of GERD, whereas BPD/DS was associated with the highest resolution of T2D. These findings can guide decision making regarding choice of bariatric operation.
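A minimal sketch of two ingredients named in the methods, inverse probability of treatment weights from a propensity model and a Bonferroni-adjusted threshold for the pairwise procedure comparisons; the data frame, columns, and the single RYGB-versus-AGB contrast are invented for illustration.

```python
# Hedged sketch: unstabilized inverse probability of treatment weights plus the
# Bonferroni-adjusted per-comparison alpha for pairwise procedure contrasts.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "rygb": rng.binomial(1, 0.5, n),              # 1 = RYGB, 0 = AGB (one pairwise contrast)
    "age": rng.normal(45, 11, n),
    "baseline_bmi": rng.normal(46, 7, n),
    "diabetes": rng.binomial(1, 0.3, n),
})

# Propensity model: probability of undergoing RYGB given baseline covariates.
X = sm.add_constant(df[["age", "baseline_bmi", "diabetes"]])
ps = sm.Logit(df["rygb"], X).fit(disp=0).predict(X)

# Unstabilized inverse probability of treatment weights.
df["iptw"] = np.where(df["rygb"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Bonferroni correction: 4 procedures give 6 pairwise comparisons.
print(f"Per-comparison alpha: {0.05 / 6:.4f}")
```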


Subject(s)
Bariatric Surgery/standards , Obesity/surgery , Analysis of Variance , Bariatric Surgery/methods , Bariatric Surgery/statistics & numerical data , Female , Humans , Male , Middle Aged , Multiple Chronic Conditions/epidemiology , Obesity/complications , Obesity/epidemiology , Retrospective Studies , Surgicenters/statistics & numerical data , Treatment Outcome , United States/epidemiology , Weight Loss/physiology
12.
Transplantation; 101(4): 836-843, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27547866

ABSTRACT

BACKGROUND: Although the Organ Procurement and Transplantation Network (OPTN) database contains a rich set of data on United States transplant recipients, follow-up data may be incomplete. It was of interest to determine whether augmenting OPTN data with external death data altered patient survival estimates. METHODS: Solitary kidney, liver, heart, and lung transplants performed between January 1, 2011, and January 31, 2013, were queried from the OPTN database. Unadjusted Kaplan-Meier 3-year patient survival rates were computed using 4 non-mutually exclusive augmented datasets: OPTN only, OPTN + verified external deaths, OPTN + verified + unverified external deaths (OPTN + all), and an additional source extending recipient survival time if no death was found in OPTN + all (OPTN + all [Assumed Alive]). Pairwise comparisons were made using unadjusted Cox proportional hazards analyses with Bonferroni adjustments. RESULTS: Although differences in patient survival rates across data sources were small (≤1 percentage point), OPTN only data often yielded slightly higher patient survival rates than sources including external death data. After Bonferroni adjustment, no significant differences were found, including OPTN + verified (hazard ratio [HR], 1.05; 95% confidence interval [95% CI], 1.00-1.10; P = 0.0356), OPTN + all (HR, 1.06; 95% CI, 1.01-1.11; P = 0.0243), and OPTN + all (Assumed Alive) (HR, 1.00; 95% CI, 0.96-1.05; P = 0.8587) versus OPTN only, or OPTN + verified (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0511) and OPTN + all (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0353) versus OPTN + all (Assumed Alive). CONCLUSIONS: Patient survival rates varied minimally with augmented data sources, although using external death data without extending the survival time of recipients not identified in these sources results in a biased estimate. It remains important for transplant centers to maintain contact with transplant recipients and obtain necessary follow-up information, because this information can improve the transplantation process for future recipients.
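A minimal sketch of how a single recipient's (time, death) record changes under the four data constructions compared above, including the "Assumed Alive" extension of follow-up when no death is found anywhere; field names, times, and the search cutoff are assumptions, not OPTN fields.

```python
# Hedged sketch: one recipient's survival record under each of the four data sources.
from typing import Optional, Tuple

SOURCES = ["OPTN only", "OPTN + verified", "OPTN + all", "OPTN + all (Assumed Alive)"]

def survival_record(optn_last_seen: float,
                    optn_death: Optional[float],
                    external_death: Optional[float],
                    external_verified: bool,
                    search_cutoff: float,
                    source: str) -> Tuple[float, int]:
    """Return (follow-up time in years, death indicator) under a given data source."""
    if optn_death is not None:
        return optn_death, 1
    if source == "OPTN only":
        return optn_last_seen, 0
    if external_death is not None and (external_verified or source != "OPTN + verified"):
        return external_death, 1
    if source == "OPTN + all (Assumed Alive)":
        # No death found anywhere: assume alive through the external search cutoff.
        return max(optn_last_seen, search_cutoff), 0
    return optn_last_seen, 0

# Recipient A: lost to OPTN follow-up at 1.2 years, unverified external death at 2.5 years.
# Recipient B: lost to OPTN follow-up at 1.2 years, no death found in any source.
for src in SOURCES:
    a = survival_record(1.2, None, 2.5, False, 3.0, src)
    b = survival_record(1.2, None, None, False, 3.0, src)
    print(f"{src:28s}  A={a}  B={b}")
```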


Subject(s)
Organ Transplantation/mortality , Tissue and Organ Procurement , Cause of Death , Chi-Square Distribution , Data Accuracy , Data Mining , Databases, Factual , Female , Humans , Kaplan-Meier Estimate , Logistic Models , Male , Multivariate Analysis , Odds Ratio , Organ Transplantation/adverse effects , Proportional Hazards Models , Risk Factors , Survival Rate , Time Factors , Treatment Outcome , United States
13.
J Heart Lung Transplant; 36(2): 202-210, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27520780

ABSTRACT

BACKGROUND: Pre-transplant amiodarone use has been postulated as a risk factor for morbidity and mortality after orthotopic heart transplantation (OHT). We assessed pre-OHT amiodarone use and tested the hypothesis that it is associated with impaired post-OHT outcomes. METHODS: We performed a retrospective cohort analysis of adult OHT recipients from the registry of the International Society for Heart and Lung Transplantation (ISHLT). All patients had been transplanted between 2005 and 2013 and were stratified by pre-OHT amiodarone use. We derived propensity scores using logistic regression with amiodarone use as the dependent variable, and assessed the associations between amiodarone use and outcomes with Kaplan-Meier analysis after matching patients 1:1 based on propensity score, and with Cox regression with adjustment for propensity score. RESULTS: Of the 14,944 OHT patients in the study cohort, 32% (N = 4,752) received pre-OHT amiodarone. Amiodarone use was higher in recent years (29% in 2005 to 2007, 32% in 2008 to 2010, 35% in 2011 to 2013). Amiodarone-treated patients were older and more frequently had a history of sudden cardiac death (27% vs 13%) and pre-OHT mechanical circulatory support. Key donor characteristics and allograft ischemia times were similar between groups. In propensity-matched analyses, amiodarone-treated patients had higher rates of cardiac reoperation (15% vs 13%) and permanent pacemaker (5% vs 3%) after OHT and before discharge. Amiodarone-treated patients also had higher 1-year mortality (hazard ratio 1.15, 95% confidence interval 1.02 to 1.30), but the risks of early graft failure, retransplantation and rehospitalization were similar between groups. CONCLUSIONS: Amiodarone use before OHT was independently associated with increased 1-year mortality. The need for amiodarone therapy should be carefully and continuously assessed in patients awaiting OHT.
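A minimal sketch of 1:1 nearest-neighbor propensity matching on the logit of the propensity score, greedy and without replacement or a caliper; the ISHLT analysis may use a different matching algorithm, and the data and column names here are simulated.

```python
# Hedged sketch: propensity model for pre-transplant amiodarone use, then greedy
# 1:1 nearest-neighbor matching of treated to control patients on the logit scale.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "amiodarone": rng.binomial(1, 0.32, n),
    "age": rng.normal(54, 12, n),
    "prior_scd": rng.binomial(1, 0.17, n),   # history of sudden cardiac death
    "mcs": rng.binomial(1, 0.30, n),         # pre-transplant mechanical circulatory support
})

X = sm.add_constant(df[["age", "prior_scd", "mcs"]])
ps = sm.Logit(df["amiodarone"], X).fit(disp=0).predict(X)
df["logit_ps"] = np.log(ps / (1 - ps))

treated = df[df["amiodarone"] == 1]
available = df.loc[df["amiodarone"] == 0, "logit_ps"].copy()
pairs = []
for t_idx, lp in treated["logit_ps"].items():
    if available.empty:
        break
    c_idx = (available - lp).abs().idxmin()   # closest remaining control on the logit scale
    pairs.append((t_idx, c_idx))
    available = available.drop(c_idx)

print(f"Matched pairs formed: {len(pairs)}")
```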


Subject(s)
Amiodarone/adverse effects , Cause of Death , Graft Rejection/chemically induced , Graft Rejection/mortality , Heart Transplantation/mortality , Adult , Age Factors , Allografts , Amiodarone/administration & dosage , Cohort Studies , Female , Heart Transplantation/adverse effects , Heart Transplantation/methods , Humans , Kaplan-Meier Estimate , Logistic Models , Male , Middle Aged , Postoperative Complications/mortality , Postoperative Complications/physiopathology , Prognosis , Propensity Score , Proportional Hazards Models , Registries , Retrospective Studies , Risk Assessment , Survival Analysis , Waiting Lists
14.
Cephalalgia; 33(12): 998-1008, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23575819

ABSTRACT

BACKGROUND: Headache (HA) following traumatic brain injury (TBI) is common, but predictors and time course are not well established, particularly after moderate to severe TBI. METHODS: A prospective, longitudinal cohort study of HA severity post-TBI was conducted on 450 participants at seven participating rehabilitation centers. Generalized linear mixed-effects models (GLMMs) were used to model repeated measures (months 3, 6, and 12 post-TBI) of two outcomes: HA density (a composite of frequency, duration, and intensity) and HA disruptions to activities of daily living (ADL). RESULTS: Although HA density and ADL disruptions were nominally highest during the first three months post-TBI, neither showed significant changes over time. At all time points, history of pre-injury migraine was by far the strongest predictor of both HA density and ADL disruptions (odds ratio (OR) = 8.0 and OR = 7.2, averaged across time points, respectively). Furthermore, pre-injury non-migraine HA (at three and six months post-TBI), penetrating-type TBI (at six months post-TBI), and female sex (at six and 12 months post-TBI) were each associated with an increase in the odds of a more severe HA density. Severity of TBI (post-traumatic amnesia (PTA) duration) was not associated with either outcome. CONCLUSION: Individuals with HA at three months after moderate-severe TBI do not improve over the ensuing nine months with respect to HA density or ADL disruptions. Those with pre-injury HA, particularly of migraine type, are at greatest risk for HA post-TBI. Other independent risk factors are penetrating-type TBI and, to a lesser degree and post-acutely only, female sex. Individuals with these risk factors should be monitored and considered for aggressive early intervention.
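To illustrate only the repeated-measures structure (three post-injury visits nested within participants, with a participant-level random intercept), the sketch below fits a linear mixed model to a simulated numeric headache-density score; the study itself used generalized linear mixed models for the ordinal/binary outcomes, so this is a simplified stand-in, and all data and column names are invented.

```python
# Hedged sketch: random-intercept linear mixed model for repeated headache density.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj = 300
subject = np.repeat(np.arange(n_subj), 3)
month = np.tile([3, 6, 12], n_subj)
pre_migraine = np.repeat(rng.binomial(1, 0.2, n_subj), 3)
female = np.repeat(rng.binomial(1, 0.25, n_subj), 3)
subject_effect = np.repeat(rng.normal(0, 0.8, n_subj), 3)   # participant-level random intercept
ha_density = 1.0 + 1.5 * pre_migraine + 0.3 * female + subject_effect + rng.normal(0, 0.5, n_subj * 3)

df = pd.DataFrame({"subject": subject, "month": month, "pre_migraine": pre_migraine,
                   "female": female, "ha_density": ha_density})
model = smf.mixedlm("ha_density ~ C(month) + pre_migraine + female", df, groups=df["subject"])
print(model.fit().summary())
```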


Subject(s)
Activities of Daily Living , Brain Injuries/complications , Post-Traumatic Headache/epidemiology , Post-Traumatic Headache/etiology , Adolescent , Adult , Aged , Aged, 80 and over , Comorbidity , Female , Humans , Longitudinal Studies , Male , Middle Aged , Migraine Disorders/epidemiology , Young Adult
15.
NeuroRehabilitation; 29(1): 9-21, 2011.
Article in English | MEDLINE | ID: mdl-21876291

ABSTRACT

OBJECTIVES: Estimate changes in the prevalence of Major Depressive Disorder (MDD) 1 to 5 years post spinal cord injury (SCI); identify demographic, injury, and discharge factors associated with MDD at 1 and 5 years post-injury; identify modifiers of changes in MDD. DESIGN: Retrospective. SETTING: Model Spinal Cord Injury System. PARTICIPANTS: 2,256 adult participants enrolled in the National Spinal Cord Injury Statistical Center database between 1999 and 2004. MAIN OUTCOME MEASURE: MDD as determined by the Patient Health Questionnaire-9 (PHQ-9). RESULTS: Prevalence of MDD was 11.9% at 1 year and 9.7% at 5 years post SCI. The odds of MDD decreased significantly from 1 to 5 years post-injury (odds ratio for year 1 vs. year 5 = 1.26, 95% confidence interval = 1.02-1.56). At 1 year post-injury, the odds of MDD were greater for persons 35-55 years old at injury, those who were unemployed, those with indwelling catheter or voiding bladder management at discharge, and those with higher ASIA motor index scores. At 5 years post-injury, the odds of MDD were greater for females, persons 35-55 years old at injury, those with a high school education or less, those with indwelling catheter, voiding, or no bladder management at discharge, and those with higher ASIA motor index scores. Sex was the only significant modifier. CONCLUSIONS: MDD occurs commonly 1 to 5 years post SCI. Sociodemographic, injury, and discharge factors are associated with the development of and changes in depression. Future research should expand upon current findings in order to identify, prevent, and reduce the prevalence of MDD after SCI.
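A minimal sketch of the standard PHQ-9 diagnostic algorithm for major depressive disorder; the SCI Model Systems scoring rules may differ in detail, so this is illustrative only.

```python
# Hedged sketch of the standard PHQ-9 MDD algorithm: at least five of the nine
# symptoms endorsed at "more than half the days" (item score >= 2), one of which
# must be depressed mood or anhedonia (items 1-2); item 9 (self-harm thoughts)
# counts if present at all (score >= 1).
from typing import Sequence

def phq9_mdd(items: Sequence[int]) -> bool:
    """items: nine scores, each 0-3, in standard PHQ-9 order."""
    if len(items) != 9 or any(not 0 <= x <= 3 for x in items):
        raise ValueError("expected nine item scores between 0 and 3")
    endorsed = sum(1 for i, x in enumerate(items) if x >= 2 or (i == 8 and x >= 1))
    cardinal = items[0] >= 2 or items[1] >= 2
    return endorsed >= 5 and cardinal

print(phq9_mdd([2, 2, 2, 2, 2, 0, 0, 0, 0]))  # True: five symptoms incl. a cardinal one
print(phq9_mdd([0, 1, 2, 2, 2, 2, 2, 0, 0]))  # False: no cardinal symptom endorsed
```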


Subject(s)
Depressive Disorder, Major/diagnosis , Depressive Disorder, Major/epidemiology , Spinal Cord Injuries/epidemiology , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Depressive Disorder, Major/rehabilitation , Ethnicity , Female , Humans , Longitudinal Studies , Male , Middle Aged , Predictive Value of Tests , Prevalence , Retrospective Studies , Risk Factors , Spinal Cord Injuries/rehabilitation , Surveys and Questionnaires , Time Factors , Young Adult