1.
Transplantation ; 101(4): 836-843, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27547866

ABSTRACT

BACKGROUND: Although the Organ Procurement and Transplantation Network (OPTN) database contains a rich set of data on United States transplant recipients, follow-up data may be incomplete. It was of interest to determine whether augmenting OPTN data with external death data altered patient survival estimates. METHODS: Solitary kidney, liver, heart, and lung transplants performed between January 1, 2011, and January 31, 2013, were queried from the OPTN database. Unadjusted Kaplan-Meier 3-year patient survival rates were computed using 4 nonmutually exclusive augmented datasets: OPTN only, OPTN + verified external deaths, OPTN + verified + unverified external deaths (OPTN + all), and an additional source extending recipient survival time if no death was found in OPTN + all (OPTN + all [Assumed Alive]). Pairwise comparisons were made using unadjusted Cox proportional hazards analyses with Bonferroni adjustments. RESULTS: Although differences in patient survival rates across data sources were small (≤1 percentage point), OPTN only data often yielded slightly higher patient survival rates than sources including external death data. No significant differences were found after Bonferroni adjustment, including comparisons of OPTN + verified (hazard ratio [HR], 1.05; 95% confidence interval [95% CI], 1.00-1.10; P = 0.0356), OPTN + all (HR, 1.06; 95% CI, 1.01-1.11; P = 0.0243), and OPTN + all (Assumed Alive) (HR, 1.00; 95% CI, 0.96-1.05; P = 0.8587) versus OPTN only, or OPTN + verified (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0511) and OPTN + all (HR, 1.05; 95% CI, 1.00-1.10; P = 0.0353) versus OPTN + all (Assumed Alive). CONCLUSIONS: Patient survival rates varied minimally across augmented data sources, although using external death data without extending the survival time of recipients not identified in these sources results in a biased estimate.
It remains important for transplant centers to maintain contact with transplant recipients and obtain necessary follow-up information, because this information can improve the transplantation process for future recipients.


Subject(s)
Organ Transplantation/mortality , Tissue and Organ Procurement , Cause of Death , Chi-Square Distribution , Data Accuracy , Data Mining , Databases, Factual , Female , Humans , Kaplan-Meier Estimate , Logistic Models , Male , Multivariate Analysis , Odds Ratio , Organ Transplantation/adverse effects , Proportional Hazards Models , Risk Factors , Survival Rate , Time Factors , Treatment Outcome , United States
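The unadjusted Kaplan-Meier rates and Bonferroni-adjusted pairwise comparisons described in the abstract above can be sketched in a few lines. This is a toy illustration with made-up follow-up times, not the OPTN cohort or the study's actual code:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of survival S(t).

    times  : follow-up time for each recipient (any unit)
    events : 1 if a death was observed at that time, 0 if censored
             (e.g. recipient lost to follow-up, or assumed alive)
    Returns (t, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]
        deaths = sum(at_t)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)
        i += len(at_t)
    return curve


def bonferroni_threshold(n_comparisons, alpha=0.05):
    """Per-comparison significance level under a Bonferroni adjustment."""
    return alpha / n_comparisons
```

With the 5 pairwise comparisons listed in the abstract, a raw P = 0.0243 is judged against 0.05/5 = 0.01, which is one way a hazard ratio with raw P < 0.05 can still be reported as nonsignificant.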
2.
Liver Transpl ; 22(4): 399-409, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26890858

ABSTRACT

In June of 2013, the Organ Procurement and Transplantation Network (OPTN) implemented regional sharing for Model for End-Stage Liver Disease (MELD)/Pediatric End-Stage Liver Disease (PELD) candidates with scores reaching 35 and above ("Share 35"). The goal of this distribution change was to increase access to lifesaving transplants for the sickest candidates with chronic liver disease and to reduce the waiting-list mortality for this medically urgent group of patients. To assess the impact of this change, we compared results before and after policy implementation at 2 years. Overall, there were more liver transplants performed under Share 35 and a greater percentage of MELD/PELD 35+ candidates underwent transplantation; waiting-list mortality rates in this group were also significantly lower in the post-policy period. Overall adjusted waiting-list mortality was decreased slightly, with no significant changes in mortality by age group or ethnicity. Posttransplant graft and patient survival was unchanged overall and was unchanged for the MELD/PELD 35+ recipients. In conclusion, these data demonstrate that the Share 35 policy achieved its goal of increasing access to transplants for these medically urgent patients without reducing access to liver transplants for pediatric and minority candidates. Although the variance in the median MELD at transplant as well as the variance in transport distance increased, there was a decrease in overall liver discard rates and no change in overall cold ischemia times following broader sharing of these organs. The OPTN will continue to monitor this policy, particularly for longer-term posttransplant survival outcomes.


Subject(s)
Liver Failure/surgery , Liver Transplantation/statistics & numerical data , Tissue and Organ Procurement/methods , Waiting Lists/mortality , Child , Cold Ischemia/statistics & numerical data , Female , Graft Survival , Health Impact Assessment/statistics & numerical data , Humans , Liver Failure/mortality , Male , Middle Aged , Tissue Donors/statistics & numerical data , Treatment Outcome
3.
Liver Transpl ; 19(9): 957-64, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23784730

ABSTRACT

The hospital at which liver transplantation (LT) is performed has a substantial impact on post-LT outcomes. Center-specific outcome data are closely monitored not only by the centers themselves but also by patients and government regulatory agencies. However, the true magnitude of this center effect, apart from the effects of the region and donor service area (DSA) as well as recipient and donor determinants of graft survival, has not been examined. We analyzed data submitted to the Organ Procurement and Transplantation Network for all adult (age ≥ 18 years) primary LT recipients (2005-2008). Using a mixed effects, proportional hazards regression analysis, we modeled graft failure within 1 year after LT on the basis of center (de-identified), region, DSA, and donor and recipient characteristics. At 115 unique centers, 14,654 recipients underwent transplantation. Rates of graft loss within a year varied from 5.9% for the lowest quartile of centers to 20.2% for the highest quartile. Gauged by a comparison of the 75th and 25th percentiles of the data, the magnitude of the center effect on graft survival (1.49-fold change) was similar to that of the recipient Model for End-Stage Liver Disease (MELD) score (1.47) and the donor risk index (DRI; 1.45). The center effect was similar across the DRI and MELD score quartiles and was not associated with a center's annual LT volume. After stratification by region and DSA, the magnitude of the center effect, though decreased, remained significant and substantial (1.30-fold interquartile difference). In conclusion, the LT center is a significant predictor of graft failure that is independent of region and DSA as well as donor and recipient characteristics.


Subject(s)
Graft Survival , Hospitals , Liver Transplantation/methods , Adult , Female , Geography , Health Status Disparities , Humans , Liver Transplantation/standards , Male , Middle Aged , Outcome Assessment, Health Care , Regression Analysis , Risk , Severity of Illness Index , Survival Rate , Time Factors , United States
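The "fold change" comparisons in the abstract above (e.g. the 1.49-fold interquartile center effect) amount to exponentiating a difference of estimated effects on the log-hazard scale. A minimal sketch with hypothetical per-center estimates (fitting the study's actual mixed-effects Cox model requires a survival package and is not shown):

```python
import math
import statistics


def interquartile_fold_change(center_log_hazards):
    """Hazard fold change between the 75th- and 25th-percentile centers.

    center_log_hazards: hypothetical per-center effects on the log-hazard
    scale, e.g. random intercepts from a mixed-effects proportional
    hazards model.
    """
    q25, _, q75 = statistics.quantiles(center_log_hazards, n=4)
    return math.exp(q75 - q25)
```

A symmetric spread of log-hazards around zero yields a fold change of exp(IQR); identical centers yield exactly 1.0 (no center effect).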
4.
Liver Transpl ; 15(6): 592-9, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19479802

ABSTRACT

We have investigated the impact of the donor risk index (DRI) on the outcome of hepatitis C virus (HCV)-infected patients undergoing liver transplantation (LTx). Retrospective analysis was performed on the Organ Procurement and Transplantation Network database (January 1, 2000 to June 2006). The DRI was calculated as described by Feng et al. (Am J Transplant 2006;6:783-790). Model for End-Stage Liver Disease (MELD) exceptions were excluded from the analysis. Relative risk (RR) estimates of patient and graft loss were derived from Cox regression models. The Wald test was used to test the effect of the MELD score at transplant on the HCV-DRI interaction. Of the 16,678 LTx recipients, 76.1% were Caucasian and 66.7% were male; the median age was 52 (range, 18-80 years), and the mean follow-up time was 1148 days (range, 0-2959 days). Forty-six percent (n = 7675) of LTx recipients were HCV(+). The median DRI was 1.3 (range, 0.77-4.27). Increasing DRI was associated with a statistically significant increase in the RR of graft failure and patient death for both HCV(+) and HCV(-) recipients. However, HCV(+) recipients demonstrated a significantly higher increase in the RR of patient and graft loss as a function of the DRI than HCV(-) subjects, even after adjustments for several recipient factors, including MELD. In conclusion, a synergistic interaction between the DRI and recipient HCV status exists, such that an allograft from a high-DRI donor more adversely affects the outcome of an HCV(+) recipient than that of an HCV(-) recipient.


Subject(s)
Graft Rejection/epidemiology , Hepatitis C/surgery , Liver Transplantation , Patient Selection , Tissue Donors , Adolescent , Adult , Aged , Aged, 80 and over , Female , Follow-Up Studies , Humans , Liver/physiopathology , Liver/virology , Male , Middle Aged , Proportional Hazards Models , Retrospective Studies , Risk Factors , Severity of Illness Index , Treatment Outcome , Young Adult
5.
Transplantation ; 84(7): 926-8, 2007 Oct 15.
Article in English | MEDLINE | ID: mdl-17984847

ABSTRACT

BACKGROUND: To investigate whether center volume impacts the rate of hepatic artery thrombosis (HAT) and patient survival after adult living donor liver transplantation (ALDLT). METHODS: Patients with HAT who were listed as Status 1 in the Organ Procurement and Transplantation Network database were included in the study. Recipients of ALDLT were compared to those who received a deceased donor liver transplant (DDLT). RESULTS: Recipients of ALDLT had a higher rate of HAT than recipients of DDLT. Centers that performed fewer than four adult ALDLTs had a higher rate of HAT than higher-volume centers. "Novice" centers had worse graft and patient survival than those with more experience in ALDLT. Recipients who had HAT experienced worse patient survival than those who did not. CONCLUSIONS: Higher-volume centers have a lower rate of HAT and better patient and graft survival in ALDLT. Clearer regulations and a focus on overcoming the learning curve may be needed to increase the utilization of ALDLT.


Subject(s)
Hepatic Artery/pathology , Liver Transplantation/methods , Thrombosis/immunology , Databases, Factual , Graft Survival , Humans , Living Donors , Retrospective Studies , Thrombosis/pathology , Time Factors , Tissue and Organ Harvesting , Tissue and Organ Procurement , Treatment Outcome
6.
Transplantation ; 82(12): 1653-7, 2006 Dec 27.
Article in English | MEDLINE | ID: mdl-17198254

ABSTRACT

BACKGROUND: The goal of this analysis was to determine if outcomes from the use of extended criteria donor (ECD) livers were dependent upon the Model for End-Stage Liver Disease (MELD) score of the recipient. METHODS: The Organ Procurement and Transplantation Network (OPTN) database as of March 4, 2006 was used for the analysis. Data from 12,056 adult liver transplant (LTx) recipients between June 1, 2002 and June 30, 2005 were analyzed. The donor risk index (DRI) was calculated as previously reported, and a DRI of ≥1.7 was classified as ECD. Relative risk (RR) estimates were derived from Cox regression models adjusted for DRI, recipient MELD, age, sex, ethnicity, diagnosis, and year of transplant. RESULTS: Data from 2,873 grafts in the ECD category (23.8%) and their recipients were analyzed. Recipients with low MELD scores (<15) received the highest proportion of ECD livers (33%). ECD livers were associated with a significant increase in the RR of graft failure, and the magnitude of this effect was similar within each of the three MELD categories. CONCLUSION: The use of ECD grafts expands the organ pool at the expense of an increased RR of graft failure. Our analysis showed no significant interaction between the DRI and the MELD score of the recipient. Because ECD livers carry no additional impact in recipients with high MELD scores, this group of patients may benefit from this pool of grafts.


Subject(s)
Donor Selection/methods , Graft Survival , Liver Failure/surgery , Liver Transplantation , Living Donors , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Models, Biological , Risk , Treatment Outcome
7.
Transplantation ; 80(2): 272-5, 2005 Jul 27.
Article in English | MEDLINE | ID: mdl-16041274

ABSTRACT

Although graft and patient survival data are available for pancreas and kidney transplants, they are rarely reported in terms of half-life. Our aim was to determine whether patient and allograft half-life is a more relevant measure of outcome. Using data from the Organ Procurement and Transplantation Network Registry on kidney and pancreas transplants from January 1988 to December 1996, patient and graft half-lives and 95% confidence intervals were calculated and demographic variables compared. No significant differences were found between demographic variables. Kidneys transplanted into diabetic recipients as part of a simultaneous kidney-pancreas (SPK) transplant fared better than kidneys transplanted alone into diabetic recipients (9.6 vs. 6.3 years). Pancreatic graft survival in an SPK pair was better than for pancreas after kidney transplant or pancreas transplant alone (11.2 vs. 2.5 years). Because kidney and pancreatic grafts have a longer half-life when transplanted with their mate grafts, we should consider the relative benefits of SPK over pancreas after kidney transplant or pancreas transplant alone to limit the loss of precious resources.


Subject(s)
Graft Survival , Kidney Transplantation/physiology , Pancreas Transplantation/physiology , Adult , Diabetes Mellitus/surgery , Ethnicity , Female , Humans , Kidney Diseases/surgery , Kidney Transplantation/mortality , Pancreas Transplantation/mortality , Retrospective Studies , Survival Analysis , Tissue Donors/statistics & numerical data , United States/epidemiology
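"Half-life" in the abstract above is the time at which the survival curve crosses 50%. Under a constant-hazard (exponential) assumption — a simplification, since registry curves are estimated nonparametrically — it can be backed out of a single survival probability. A hypothetical sketch:

```python
import math


def half_life(surv_prob, at_time):
    """Half-life implied by S(at_time) = surv_prob under S(t) = exp(-lam*t).

    Example: if 1-year graft survival is 93%, the implied constant hazard
    is lam = -ln(0.93) per year, and the half-life is ln(2) / lam.
    """
    lam = -math.log(surv_prob) / at_time
    return math.log(2) / lam
```

half_life(0.5, 1.0) is exactly 1 year, and higher survival probabilities give longer half-lives, matching the ordering of the SPK vs. kidney-alone figures above.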
8.
Transplantation ; 77(9): 1411-5, 2004 May 15.
Article in English | MEDLINE | ID: mdl-15167600

ABSTRACT

BACKGROUND: In 2000, the United Network for Organ Sharing/Organ Procurement and Transplantation Network Registry reported that 540 recovered kidneys were discarded because of biopsy results and 210 because of poor organ function. We compared the percentage of glomerulosclerosis (GS) and creatinine clearance (CrCl) of both discarded and transplanted cadaveric kidneys and examined their effect on graft survival and function. METHODS: The cohort consisted of all cadaveric kidneys (n = 3,444) with reported biopsy results between October 25, 1999 and December 31, 2001. Graft survival was calculated by univariate and multivariate models. RESULTS: Fifty-one percent of discarded kidneys had GS of less than 20%, 27% had a CrCl greater than 80 mL/min, and 15% (129 kidneys) had both. Univariate analyses of kidneys with GS of 20% or less revealed no difference in 1-year graft survival whether the CrCl was above or below 80 mL/min. When GS was greater than 20%, 1-year graft survival of kidneys with a CrCl greater than 80 mL/min was significantly better than that of kidneys with a CrCl of 80 mL/min or less. Multivariate results showed no significant difference in the relative risk of graft loss with GS greater than 20% versus 20% or less when the CrCl was either 50 or 80 mL/min. In both GS groups, serum creatinine at 1 year was significantly lower in kidneys with a CrCl greater than 80 mL/min. CONCLUSIONS: Calculated donor CrCl correlates well with 1-year graft survival and function, whereas the percentage of GS on donor kidney biopsies does not; percentage GS should not be used as the sole criterion for discarding recovered cadaveric kidneys.


Subject(s)
Creatinine/metabolism , Glomerulosclerosis, Focal Segmental/mortality , Graft Survival , Kidney Transplantation/mortality , Kidney/pathology , Aged , Aged, 80 and over , Biopsy , Cadaver , Cohort Studies , Female , Glomerulosclerosis, Focal Segmental/pathology , Humans , Kidney/metabolism , Kidney Transplantation/standards , Kidney Transplantation/statistics & numerical data , Male , Middle Aged , Multivariate Analysis , Outcome Assessment, Health Care , Predictive Value of Tests , Registries , Tissue and Organ Procurement
9.
Liver Transpl ; 10(1): 36-41, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14755775

ABSTRACT

The new allocation policy of the United Network for Organ Sharing (UNOS), based on the Model for End-Stage Liver Disease (MELD), gives candidates with stage T1 or stage T2 hepatocellular carcinoma (HCC) a priority MELD score beyond their degree of hepatic decompensation. The aim of this study was to determine the impact of the new allocation policy on HCC candidates before and after the institution of MELD. The UNOS database was reviewed for all HCC candidates listed between July 1999 and July 2002. The candidates were grouped into two time periods based on the policy implementation date of February 27, 2002: pre-MELD candidates were listed for deceased donor liver transplantation (DDLT) before that date, and post-MELD candidates after it. Candidates were compared by incidence of DDLT, time to DDLT, dropout rate from the waiting list because of clinical deterioration or death, and survival while waiting and after DDLT. Incidence rates for the pre-MELD and post-MELD periods were expressed in person-years. During the study, 2,074 HCC candidates were listed for DDLT in the UNOS database. The DDLT incidence rate was 0.439 transplants/person-year pre-MELD and 1.454 transplants/person-year post-MELD (P < 0.001). The time to DDLT was 2.28 years pre-MELD and 0.69 years post-MELD (P < 0.001). The 5-month dropout rate was 16.5% pre-MELD and 8.5% post-MELD (P < 0.001). The 5-month waiting-list survival was 90.3% pre-MELD and 95.7% post-MELD (P < 0.001). The 5-month survival after DDLT was similar for both time periods. The new allocation policy has led to an increased incidence rate of DDLT in HCC candidates, the 5-month dropout rate has decreased significantly, and 5-month survival while waiting has increased in the post-MELD period. Thus, the new MELD-based allocation policy has benefited HCC candidates.


Subject(s)
Carcinoma, Hepatocellular/surgery , Health Care Rationing/organization & administration , Liver Neoplasms/surgery , Liver Transplantation , Patient Selection , Tissue and Organ Procurement/organization & administration , Carcinoma, Hepatocellular/mortality , Health Care Rationing/statistics & numerical data , Humans , Liver Neoplasms/mortality , Resource Allocation , United States/epidemiology , Waiting Lists
10.
Transplantation ; 76(9): 1389-94, 2003 Nov 15.
Article in English | MEDLINE | ID: mdl-14627922

ABSTRACT

BACKGROUND: The Etablissement français des Greffes reports regional variability in access to organ transplantation in France. Some variability seems to be inevitable for reasons discussed in the French article. We provide comparative data on a similar phenomenon in the United States, including some historical perspectives and recent attempts to minimize geographic variability especially for patients in urgent need of liver transplants. METHODS: To assess regional variability in access to heart, liver, and kidney transplants, a competing risks method was used. Outcomes were examined for primary transplant candidates added to the waiting list during 3-year periods. Results were stratified by region of listing. RESULTS: Four months after listing, the transplant rate for all U.S. kidney transplant candidates was 10.9%. Regionally the 4-month transplant rate ranged from 4.2% to 18.5% for highly sensitized patients and from 5.4% to 19.6% for nonsensitized patients. For liver candidates, the overall national transplant rate 4 months after listing was 22%, but the overall regional rate varied from 11.8% to 36.5%. The overall transplant rate for heart candidates 4 months after listing was 43.9%, whereas regional 30-day transplant rates for the most urgent heart candidates (status 1A) ranged from 25.1% to 47.1%. Four-month transplant rates for less urgent heart candidates ranged from 24.9% to 40.7%. CONCLUSION: Similar to the French experience, pretransplantation waiting times in the 11 U.S. regions vary considerably. Computer-simulated modeling shows that redrawing organ distribution boundaries could reduce but not eliminate geographic variability. It may be too early to tell whether the recently implemented Model for End-Stage Liver Disease/Pediatric End-Stage Liver Disease liver allocation system will decrease regional variability in access to transplant as compared with the previous system.


Subject(s)
Tissue Donors/supply & distribution , Geography , Heart Transplantation/statistics & numerical data , Humans , Kidney Transplantation/statistics & numerical data , Liver Transplantation/statistics & numerical data , Time Factors , Tissue and Organ Procurement/methods , Tissue and Organ Procurement/statistics & numerical data , United States , Waiting Lists
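The "competing risks method" for the 4-month transplant rates in the abstract above treats death or removal from the list as events that preclude transplant, rather than censoring them. A minimal nonparametric sketch (hypothetical waiting-list data with no tied times; not the registry's actual implementation):

```python
def cumulative_incidence(times, causes, horizon):
    """Crude cumulative incidence of cause 1 (transplant) by `horizon`.

    causes: 1 = transplanted, 2 = competing event (death/removal),
            0 = censored (still waiting at last follow-up).
    Assumes no tied event times, for brevity.
    """
    n_at_risk = len(times)
    event_free = 1.0  # probability of still waiting, all causes combined
    incidence = 0.0
    for t, cause in sorted(zip(times, causes)):
        if t > horizon:
            break
        if cause == 1:
            incidence += event_free / n_at_risk
        if cause in (1, 2):
            event_free *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1
    return incidence
```

With no competing events or censoring this reduces to 1 minus the Kaplan-Meier curve; with competing deaths it yields the lower transplant rate that such analyses report by region.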
12.
Liver Transpl ; 8(8): 659-66, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12149756

ABSTRACT

For several years, the Organ Procurement and Transplantation Network/United Network for Organ Sharing (UNOS) Liver and Intestinal Transplantation Committee has been examining effects of changes and proposed changes to the liver allocation system. The Institute of Medicine recently recommended that the size of liver distribution units be increased to improve the organ distribution system. Methods to achieve this and the potential impact on patients and transplant centers of such a change are evaluated in this study. In hypothetical scenarios, we combined geographically contiguous organ procurement organizations (OPOs) in seven different configurations to increase the size of liver distribution units to cover populations greater than 9 million persons. Using the UNOS Liver Allocation Model (ULAM), we examined the effect of 17 different organ allocation sequences in these proposed realignments and compared them with those predicted by ULAM for the current liver distribution system by using the following primary outcome variables: number of primary liver transplantations performed, total number of deaths, and total number of life-years saved. Every proposed new liver distribution unit plan resulted in fewer primary transplantations. Many policies increased the total number of deaths and reduced total life-years saved compared with the current system. Most of the proposed plans reduced interregional variation compared with the current plan, but no one plan consistently reduced variation for all outcome variables, and all reductions in variations were relatively small. All new liver distribution unit plans led to significant shifts in the number of transplantations performed in individual OPOs compared with the current system. The ULAM predicts that changing liver distribution units to larger geographic areas has little positive impact on overall results of liver transplantation in the United States compared with the current plan. 
Enlarging liver distribution units likely will result in significant shifts in organs across current OPO boundaries, which will have a significant impact on the activity of many transplant centers.


Subject(s)
Computer Simulation , Liver Transplantation/statistics & numerical data , Tissue and Organ Procurement/organization & administration , Humans , Tissue and Organ Procurement/standards , Waiting Lists
13.
Clin Transpl ; : 21-8, 2002.
Article in English | MEDLINE | ID: mdl-12971434

ABSTRACT

The OPTN implemented a revised system (MELD/PELD) for the allocation of cadaveric livers on February 27, 2002. When compared with an earlier era, preliminary results indicate that transplant rates remain similar by gender, ethnicity, age group (adult and pediatric), and for most principal diagnoses. Both the actual number of pretransplant deaths and the pretransplant death rate have dropped under the new system. While some regional variation exists in the average MELD scores at listing, death, and transplant, it accounts for only a small percentage of the total variation observed. In a multivariate analysis, MELD scores above 20 had the strongest effect and were associated with a significantly increased mortality risk on the waiting list. More data are needed to analyze the impact of MELD on posttransplant outcomes.


Subject(s)
Liver Failure/surgery , Liver Transplantation/statistics & numerical data , Tissue and Organ Procurement/organization & administration , Adult , Chronic Disease , Humans , Liver Failure/epidemiology , Resource Allocation/organization & administration , Tissue and Organ Procurement/statistics & numerical data , Waiting Lists
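For reference, the MELD score discussed throughout these abstracts is computed from three laboratory values. A sketch of the original UNOS formula as of the era these papers cover (the floors, caps, and later revisions such as MELD-Na have changed over time, so treat this as illustrative rather than current policy):

```python
import math


def meld(creatinine_mg_dl, bilirubin_mg_dl, inr):
    """Original UNOS MELD score (higher = sicker; floor of 6 by construction).

    Labs below 1.0 are floored at 1.0; creatinine is capped at 4.0 mg/dL.
    """
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = 10.0 * (0.957 * math.log(cr)
                    + 0.378 * math.log(bili)
                    + 1.120 * math.log(inr)
                    + 0.643)
    return round(score)
```

meld(1.0, 1.0, 1.0) gives the floor score of 6; the "MELD 35+" and "MELD above 20" thresholds in the abstracts are points on this scale.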
14.
Clin Transpl ; : 79-92, 2002.
Article in English | MEDLINE | ID: mdl-12971437

ABSTRACT

1. On November 30, 2002, there were 86,452 registrations on the combined UNOS waiting list. Of these, 65% were awaiting kidney transplantation and 20% were awaiting liver transplantation. 2. The majority of patients on the UNOS waiting list on October 31, 2000 were blood type O (52%), White (53%), male (58%), and awaiting their first transplant (87%). 3. Despite a decreasing trend over the past several years in the percentage transplanted within one year of listing, that percentage increased in 2001 for all organs except kidney and pancreas. 4. Blood type and medical urgency have a significant impact on the percentage transplanted within one year of listing for most organ types. Patients awaiting heart, pancreas, and intestinal transplants have the highest probability of receiving a transplant within one year. 5. Death rates per patient at risk have declined since 1988 for most patients awaiting life-saving organs and have remained relatively low for those awaiting a kidney, pancreas, or kidney-pancreas transplant.


Subject(s)
Registries , Tissue and Organ Procurement/statistics & numerical data , Transplantation/statistics & numerical data , Waiting Lists , Adult , Age Distribution , Blood Group Antigens , Child , Ethnicity , Female , Humans , Male , Racial Groups , Transplantation/mortality , United States