Results 1 - 20 of 21
1.
Stat Med ; 43(6): 1238-1255, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38258282

ABSTRACT

In clinical studies, multi-state model (MSM) analysis is often used to describe the sequence of events that patients experience, enabling better understanding of disease progression. A complicating factor in many MSM studies is that the exact event times may not be known. Motivated by a real dataset of patients who received stem cell transplants, we considered the setting in which some event times were exactly observed and some were missing. In our setting, there was little information about the time intervals in which the missing event times occurred and missingness depended on the event type, given the analysis model covariates. These additional challenges limited the usefulness of some missing data methods (maximum likelihood, complete case analysis, and inverse probability weighting). We show that multiple imputation (MI) of event times can perform well in this setting. MI is a flexible method that can be used with any complete data analysis model. Through an extensive simulation study, we show that MI by predictive mean matching (PMM), in which sampling is from a set of observed times without reliance on a specific parametric distribution, has little bias when event times are missing at random, conditional on the observed data. Applying PMM separately for each sub-group of patients with a different pathway through the MSM tends to further reduce bias and improve precision. We recommend MI using PMM methods when performing MSM analysis with Markov models and partially observed event times.
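
As a rough illustration of the predictive mean matching step described above (a minimal sketch with invented variable names and a single covariate, not the authors' code), each missing event time is replaced by an observed time borrowed from a "donor" whose predicted value is close to that of the incomplete case; in practice this would be repeated across multiple imputations and, as recommended, within each pathway sub-group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: event times, some missing; one covariate from the analysis model.
n = 200
x = rng.normal(size=n)
time = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.4, size=n))   # true event times
missing = rng.random(n) < 0.3                                   # ~30% missing at random
obs = ~missing

# 1) Fit a simple linear model for log(time) on the observed records.
X_obs = np.column_stack([np.ones(obs.sum()), x[obs]])
beta = np.linalg.lstsq(X_obs, np.log(time[obs]), rcond=None)[0]

# 2) Predicted values for observed donors and for cases with missing times.
pred_obs = X_obs @ beta
pred_mis = np.column_stack([np.ones(missing.sum()), x[missing]]) @ beta

# 3) PMM: for each missing case, sample one donor from the k observed records
#    whose predictions are closest, and impute that donor's *observed* time.
k = 5
imputed = np.empty(missing.sum())
for i, p in enumerate(pred_mis):
    donors = np.argsort(np.abs(pred_obs - p))[:k]
    imputed[i] = time[obs][rng.choice(donors)]

completed = time.copy()
completed[missing] = imputed   # one completed dataset; repeat for multiple imputations
```

Because imputed values are sampled from observed times, PMM avoids committing to a parametric distribution for the event times, which is the property the abstract highlights.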


Subject(s)
Research Design , Humans , Data Interpretation, Statistical , Computer Simulation , Probability , Bias
2.
Front Epidemiol ; 3: 1237447, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37974561

ABSTRACT

Epidemiological studies often have missing data, which are commonly handled by multiple imputation (MI). In MI, imputation models often include variables beyond those required for the substantive analysis ("auxiliary variables"). Auxiliary variables that predict the partially observed variables can reduce the standard error (SE) of the MI estimator and, if they also predict the probability that data are missing, reduce bias due to data being missing not at random. However, guidance for choosing auxiliary variables is lacking. We examine the consequences of a poorly chosen auxiliary variable: if it shares a common cause with the partially observed variable and the probability that it is missing (i.e., it is a "collider"), its inclusion can induce bias in the MI estimator and may increase the SE. We quantify, both algebraically and by simulation, the magnitude of bias and SE when either the exposure or outcome is incomplete. When the substantive analysis outcome is partially observed, the bias can be substantial, relative to the magnitude of the exposure coefficient. In settings in which a complete records analysis is valid, the bias is smaller when the exposure is partially observed. However, bias can be larger if the outcome also causes missingness in the exposure. When using MI, it is important to examine, through a combination of data exploration and consideration of plausible causal diagrams and missingness mechanisms, whether potential auxiliary variables are colliders.
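
A minimal data-exploration sketch (hypothetical variable names, invented toy data) of the kind of screening the conclusion suggests: check whether a candidate auxiliary variable predicts the partially observed variable and the missingness indicator. Both associations make the variable potentially useful, but they are also exactly the situation in which a collider structure must be ruled out with a causal diagram before adding it to the imputation model; associations alone cannot settle that.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000

# Toy dataset: exposure x, outcome y (partially observed), candidate auxiliary a.
df = pd.DataFrame({"x": rng.normal(size=n)})
df["y"] = 0.5 * df["x"] + rng.normal(size=n)
df["a"] = 0.8 * df["y"] + rng.normal(size=n)                    # a is associated with y
p_miss = 1 / (1 + np.exp(-(-1 + 0.7 * df["a"])))                # missingness depends on a
df["r"] = (rng.random(n) < p_miss).astype(int)                  # r = 1: y is missing
df.loc[df["r"] == 1, "y"] = np.nan

# 1) Does the candidate auxiliary predict the partially observed variable?
assoc_y = smf.ols("y ~ x + a", data=df.dropna()).fit()

# 2) Does it predict the probability that the variable is missing?
assoc_r = smf.logit("r ~ x + a", data=df).fit(disp=False)

print(assoc_y.params["a"], assoc_r.params["a"])
```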

3.
J Clin Epidemiol ; 160: 100-109, 2023 08.
Article in English | MEDLINE | ID: mdl-37343895

ABSTRACT

OBJECTIVES: Epidemiological studies often have missing data, which are commonly handled by multiple imputation (MI). Standard (default) MI procedures use simple linear covariate functions in the imputation model. We examine the bias that may be caused by accepting this default option and evaluate methods to identify problematic imputation models, providing practical guidance for researchers. STUDY DESIGN AND SETTING: Using simulation and real data analysis, we investigated how imputation model mis-specification affected MI performance, comparing results with complete records analysis (CRA). We considered scenarios in which imputation model mis-specification occurred because (i) the analysis model was mis-specified or (ii) the relationship between exposure and confounder was mis-specified. RESULTS: Mis-specification of the relationship between outcome and exposure, or between exposure and confounder, can cause biased CRA and MI estimates (in addition to any bias in the full-data estimate due to analysis model mis-specification). MI by predictive mean matching can mitigate model mis-specification. Methods for examining model mis-specification were effective in identifying mis-specified relationships. CONCLUSION: When using MI methods that assume data are missing at random (MAR), compatibility between the analysis and imputation models is necessary, but not sufficient, to avoid bias. We propose a step-by-step procedure for identifying and correcting mis-specification of imputation models.
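
One simple check of the kind described, shown as a hedged sketch (hypothetical variables, not the authors' procedure): fit the default linear imputation model on the complete records and test whether an added non-linear term materially improves it, which would flag a mis-specified relationship before imputation is run.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000

# Toy data in which the true exposure-outcome relationship is quadratic.
df = pd.DataFrame({"x": rng.normal(size=n), "c": rng.normal(size=n)})
df["y"] = 0.3 * df["x"] + 0.4 * df["x"] ** 2 + 0.5 * df["c"] + rng.normal(size=n)
df.loc[rng.random(n) < 0.3, "y"] = np.nan          # outcome partially observed
df["x2"] = df["x"] ** 2

cc = df.dropna()                                    # complete records

# Default (linear) imputation model versus an enriched model, fitted to complete records.
m_lin = smf.ols("y ~ x + c", data=cc).fit()
m_quad = smf.ols("y ~ x + x2 + c", data=cc).fit()

# A clearly non-zero quadratic term (or a marked drop in AIC) flags that the default
# linear form is mis-specified, so a richer imputation model (or PMM) is needed.
print(round(m_quad.params["x2"], 2), m_quad.pvalues["x2"])
print(round(m_lin.aic, 1), round(m_quad.aic, 1))
```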


Subject(s)
Data Analysis , Research Design , Humans , Data Interpretation, Statistical , Computer Simulation , Bias
4.
Eye (Lond) ; 37(6): 1236-1241, 2023 04.
Article in English | MEDLINE | ID: mdl-35590105

ABSTRACT

PURPOSE: To compare Kaplan-Meier survival curves and funnel plots for the audit of surgeon-specific corneal transplantation outcomes. METHODS: We obtained data on all patients with Fuchs endothelial dystrophy (FED) receiving a first corneal transplant in one eye between January 2012 and December 2017. We produced 2-year Kaplan-Meier graft survival curves to compare a simulated individual surgeon's graft survival rate to national pooled data. We used funnel plots to compare all surgeon outcomes to the national graft survival rate with superimposed 95% and 99.8% confidence limits. We defined an outlier as a surgeon who performed ≥10 transplants and had graft survival below the 99.8% national lower limit. To assess the effect of the surgeon case mix, we also compared unadjusted and risk-adjusted graft survival rates. RESULTS: There were 3616 first corneal transplants for FED patients with complete data, performed or overseen by 196 surgeons. The 2-year national graft survival rate was 88%. The median change from the unadjusted to the risk-adjusted graft survival rate for individual surgeons was 0% (IQR: 0% to -2%). Of the 108 surgeons who had performed ≥10 transplants, we identified two outliers based on the unadjusted graft survival funnel plot, compared to four outliers based on the risk-adjusted graft survival funnel plot. CONCLUSION: Funnel plots provide a visually accessible method for comparing individual graft survival rates to the national rate. Risk-adjustment accounts for clinical factors, and this has advantages for audit and clinical governance.
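
A minimal sketch of how funnel-plot control limits can be constructed around a national rate, using invented surgeon volumes and a simple normal approximation (exact binomial limits are also commonly used; this is not necessarily the registry's method). The outlier rule mirrors the abstract's definition.

```python
import numpy as np

# Hypothetical audit data: number of transplants and 2-year graft survivors per surgeon.
n_transplants = np.array([12, 25, 40, 18, 60, 33])
n_surviving = np.array([9, 23, 36, 13, 55, 25])

p_surgeon = n_surviving / n_transplants
p_national = 0.88                                   # national 2-year graft survival rate

# Normal-approximation funnel limits around the national rate at each volume.
z95, z998 = 1.96, 3.09                              # two-sided 95% and 99.8% limits
se = np.sqrt(p_national * (1 - p_national) / n_transplants)
lower95 = p_national - z95 * se
lower998 = p_national - z998 * se

# Outlier rule from the abstract: >=10 transplants and survival below the 99.8% lower limit.
outlier = (n_transplants >= 10) & (p_surgeon < lower998)
print(np.round(p_surgeon, 2), np.round(lower998, 2), outlier)
```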


Subject(s)
Corneal Transplantation , Fuchs' Endothelial Dystrophy , Surgeons , Humans , Fuchs' Endothelial Dystrophy/surgery , Registries , Graft Survival
5.
BMJ Open Ophthalmol ; 7(1)2022 08.
Article in English | MEDLINE | ID: mdl-36161852

ABSTRACT

OBJECTIVE: To determine whether patients who receive corneas from the same donor have similar risks of endothelial failure and rejection. METHODS AND ANALYSIS: Patients with Fuchs endothelial dystrophy (FED) and pseudophakic bullous keratopathy (PBK) who received their first corneal transplant between 1999 and 2016 were analysed. Patients receiving corneas from donors who donated both corneas for the same indication were defined as 'paired'. Gray's test was used to compare the cumulative incidence of endothelial failure and rejection within 5 years post-transplant for 'paired' and 'unpaired' groups. Cox regression models were fitted to determine whether there was an association between recorded donor characteristics (endothelial cell density (ECD), age and sex) and endothelial graft failure and rejection. RESULTS: 10 838 patients were analysed, of whom 1536 (14%) were paired. The unpaired group comprised 1837 (17%) recipients of single corneal donors and 7465 (69%) recipients of corneas from donors who donated both corneas for another indication. ECD was lower for unpaired single cornea donors (p<0.01). There was no significant difference in endothelial graft failure or rejection between paired and unpaired groups for FED (p=0.37, p=0.99) or PBK (p=0.88, p=0.28), nor for donor ECD, age, sex and paired donation after adjusting for transplant factors (across all models p>0.16 for ECD, p>0.32 for donor age, p>0.14 for sex match and p>0.17 for the donor effect). CONCLUSION: The absence of a significant difference in graft outcome for corneal transplants for FED and PBK between paired and unpaired donors may reflect a homogeneous donor pool in the UK.
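
Gray's test compares cumulative incidence curves between groups in the presence of competing risks. As background, a minimal numpy sketch of the underlying (Aalen-Johansen type) cumulative incidence estimator for one competing cause, with invented toy data, is shown below; it is not the study's code and omits the test statistic itself.

```python
import numpy as np

def cumulative_incidence(time, event, cause):
    """Nonparametric cumulative incidence for one competing cause.
    `event` is 0 for censored, otherwise a cause code; `cause` is the cause of interest."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk = len(time)
    surv = 1.0                                      # all-cause survival just before current time
    cif = 0.0
    times_out, cif_out = [], []
    for t in np.unique(time):
        at_t = time == t
        d_any = np.sum(at_t & (event > 0))          # events of any cause at t
        d_cause = np.sum(at_t & (event == cause))   # events of the cause of interest at t
        if n_at_risk > 0 and d_cause > 0:
            cif += surv * d_cause / n_at_risk
            times_out.append(t)
            cif_out.append(cif)
        if n_at_risk > 0:
            surv *= 1 - d_any / n_at_risk           # update all-cause survival
        n_at_risk -= np.sum(at_t)                   # remove events and censorings at t
    return np.array(times_out), np.array(cif_out)

# Toy example: time in years; event codes 0=censored, 1=endothelial failure, 2=rejection.
t = np.array([0.5, 1.0, 1.2, 2.0, 2.5, 3.0, 4.0, 5.0])
e = np.array([1, 0, 2, 1, 0, 2, 1, 0])
print(cumulative_incidence(t, e, cause=1))
```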


Subject(s)
Corneal Edema , Corneal Transplantation , Fuchs' Endothelial Dystrophy , Cornea , Corneal Edema/surgery , Fuchs' Endothelial Dystrophy/surgery , Graft Survival , Humans , Tissue Donors
6.
Ophthalmol Ther ; 11(3): 1131-1146, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35303284

ABSTRACT

INTRODUCTION: Endothelial cell density (ECD) changes long after penetrating keratoplasty (PKP) of organ-cultured corneas have been little studied. We aim to calculate the point at which ECD decline stabilises following PKP with organ culture stored corneas. METHODS: This is an observational study of first-ever PKPs and first-ever re-grafts, performed over 17 years under a single surgeon. ECDs were acquired at 3 and 6 months, 1 year post-graft and annually thereafter by specular microscopy. Time-dependent ECD data were fitted to a log-biexponential model. RESULTS: We studied 465 first-ever grafts and 128 re-grafts. Mean recipient age was 59 years (range 0-96 years; SD 22). Median follow-up was 5.7 (range 0.2-17.1) years. Probability of endothelial decompensation (ED) at 5 years in first grafts and re-grafts was 4.4% (2.6-7.1%) and 14.8% (8.3-23.2%), respectively. In first grafts, ECD loss reached 0.6% per annum at 7.9 (6.2-9.6) years post-operatively. The half-lives of ECD loss during the immediate post-operative period for first grafts, re-grafts, dystrophies, ectasias, and previous ocular surgery are 20.1 (14.9-30.9), 12.8 (6.9-79.4), 19.5 (13.1-37.7), 26.2 (16.2-68), and 11.6 (6.7-41.3) months, respectively. The half-life during this rapid phase of ECD loss has an inverse correlation with graft survival at 10 years (r = -0.89, p = 0.02). CONCLUSIONS: Rate of endothelial decompensation is higher in re-grafts than in first grafts. ECD decline stabilises 7.9 years post-operatively in first grafts and thereafter falls below the expected physiological rate of loss. Further work is needed to verify whether organ-cultured grafts reach physiological levels of ECD loss faster than hypothermically stored grafts.
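
To make the biexponential description concrete, a hedged sketch of fitting a two-phase decline and reading off the rapid-phase half-life. The data points, starting values and parameterisation are invented for illustration; they are not the study's data or fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential decline: a rapid early phase and a slow late phase of endothelial cell loss.
def ecd_model(t, a, tau_fast, b, tau_slow):
    return a * np.exp(-t / tau_fast) + b * np.exp(-t / tau_slow)

# Invented follow-up data: months post-graft and specular-microscopy ECD (cells/mm2).
t_months = np.array([3, 6, 12, 24, 36, 60, 84, 120, 150])
ecd = np.array([2400, 2250, 2000, 1700, 1550, 1400, 1320, 1250, 1200])

# Fit log(ECD) so the error structure is multiplicative (a "log-biexponential" fit).
popt, _ = curve_fit(
    lambda t, a, tf, b, ts: np.log(ecd_model(t, a, tf, b, ts)),
    t_months, np.log(ecd), p0=[1200, 20, 1500, 500], bounds=(0, np.inf),
)
a, tau_fast, b, tau_slow = popt
half_life_fast = tau_fast * np.log(2)            # months; the reported rapid-phase half-life
late_annual_loss = 1 - np.exp(-12 / tau_slow)    # approximate late-phase loss per annum
print(round(half_life_fast, 1), round(100 * late_annual_loss, 2))
```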

7.
Stat Med ; 40(8): 1917-1929, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33469974

ABSTRACT

In patient follow-up studies, events of interest may take place between periodic clinical assessments, so the exact time of onset is not observed. Such events are known as "bounded" or "interval-censored." Methods for handling such events can be categorized as either (i) applying multiple imputation (MI) strategies or (ii) taking a full likelihood-based (LB) approach. We focused on MI strategies, rather than LB methods, because of their flexibility. We evaluated MI strategies for bounded event times in a competing risks analysis, examining the extent to which interval boundaries, features of the data distribution and the substantive analysis model are accounted for in the imputation model. Candidate imputation models were predictive mean matching (PMM); log-normal regression with postimputation back-transformation; normal regression with and without restrictions on the imputed values; and Delord and Genin's method based on sampling from the cumulative incidence function. We used a simulation study to compare the MI methods and one LB method when data were missing at random and missing not at random, also varying the proportion of missing data, and then applied the methods to a hematopoietic stem cell transplantation dataset. We found that cumulative incidence and median event time estimation were sensitive to model misspecification. In a competing risks analysis, we found that it is more important to account for features of the data distribution than to restrict imputed values based on interval boundaries or to ensure compatibility with the substantive analysis by sampling from the cumulative incidence function. We recommend MI by type 1 PMM.
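
For one of the candidate models above, log-normal regression with post-imputation back-transformation, a minimal single-draw sketch with invented data is shown below. It deliberately ignores parameter uncertainty, interval restrictions and the competing-risk structure, so it is a simplification of what a proper MI procedure would do, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300

# Toy data: one covariate; the exact event time is unknown for some patients.
x = rng.normal(size=n)
time = np.exp(0.8 + 0.4 * x + rng.normal(scale=0.5, size=n))
missing = rng.random(n) < 0.25
obs = ~missing

# Log-normal regression imputation: model log(time) on the covariate in the observed records...
X_obs = np.column_stack([np.ones(obs.sum()), x[obs]])
beta, res, *_ = np.linalg.lstsq(X_obs, np.log(time[obs]), rcond=None)
sigma = np.sqrt(res[0] / (obs.sum() - 2))          # residual SD on the log scale

# ...then draw imputations on the log scale and back-transform to the time scale.
X_mis = np.column_stack([np.ones(missing.sum()), x[missing]])
imputed = np.exp(X_mis @ beta + rng.normal(scale=sigma, size=missing.sum()))

completed = time.copy()
completed[missing] = imputed
```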


Subject(s)
Research Design , Computer Simulation , Data Interpretation, Statistical , Humans , Likelihood Functions , Risk Assessment
8.
Front Immunol ; 11: 685, 2020.
Article in English | MEDLINE | ID: mdl-32508805

ABSTRACT

The control of peripheral immune responses by FOXP3+ T regulatory (Treg) cells is essential for immune tolerance. However, at any given time, Treg frequencies in whole blood can vary more than fivefold between individuals. An understanding of factors that influence Treg numbers and migration within and between individuals would be a powerful tool for cellular therapies that utilize the immunomodulatory properties of Tregs to control pathology associated with inflammation. We sought to understand how season could influence Treg numbers and phenotype by monitoring the proportion of natural thymus-derived Tregs (nTregs) defined as (CD3+CD4+CD25+FOXP3+CD127-/low ) cells as a proportion of CD4+ T cells and compared these to all FOXP3+ Tregs (allTregs, CD3+CD25+FOXP3+CD127-/low ). We were able to determine changes within individuals during 1 year suggesting an influence of season on nTreg frequencies. We found that, between individuals at any given time, nTreg/CD4+ T cells ranged from 1.8% in February to 8.8% in the summer where median nTreg/CD4 in January and February was 2.4% (range 3.75-1.76) and in July and August was 4.5% (range 8.81-3.17) p = 0.025. Importantly we were able to monitor individual nTreg frequencies throughout the year in donors that started the year with high or low nTregs. Some nTreg variation could be attributed to vitamin D status where normal linear regression estimated that an absolute increase in nTreg/CD4+ by 0.11% could be expected with 10 nmol increase in serum 25 (OH) vitamin D3 (p = 0.005, 95% CI: 0.03-0.19). We assessed migration markers on Tregs for the skin and/or gut. Here cutaneous lymphocyte associated antigen (CLA+) expression on CD25+FOXP3+CD4+/CD4+ was compared with the same population expressing the gut associated integrin, ß7. Gut tropic CD25+FOXP3+ß7+Tregs/CD4+ had similar dynamics to nTreg/CD4+. Conversely, CD25+FOXP3+CLA+Tregs/CD4+ showed no association with vitamin D status. Important for cellular therapies requiring isolation of Tregs, the absolute number of ß7+CD4+CD25+FOXP3+Tregs was positively associated with 25(OH)vitamin D3 (R2 = 0.0208, r = 0.184, p = 0.021) whereas the absolute numbers of CLA+CD4+CD25+FOXP3+Tregs in the periphery were not influenced by vitamin D status. These baseline observations provide new opportunities to utilize seasonal variables that influence Treg numbers and their migratory potential in patients or donors.


Subject(s)
Calcifediol/blood , Cell Movement/immunology , Seasons , T-Lymphocytes, Regulatory/immunology , Vitamins/blood , Adult , Aged , Biomarkers/metabolism , Blood Donors , CD4 Lymphocyte Count , Follow-Up Studies , Humans , Hydrocortisone/blood , Male , Middle Aged , Phenotype , Young Adult
9.
Cornea ; 39(1): 18-22, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31335531

ABSTRACT

PURPOSE: To compare the survival of a first penetrating keratoplasty (PK) or endothelial keratoplasty (EK) for iridocorneal endothelial (ICE) syndrome with transplant survival in Fuchs endothelial dystrophy (FED) and pseudophakic bullous keratopathy (PBK). METHODS: We compared graft survival of PK and EK for ICE syndrome over 2 time periods. We then compared graft survival in ICE syndrome with graft survival in FED and PBK. Kaplan-Meier estimates of graft survival up to 5 years posttransplant were calculated with 95% confidence intervals (CI), and comparisons between the groups were performed using the log-rank test. RESULTS: We included 86 first transplants for ICE syndrome. There was no difference in graft survival between the 58 PKs and the 28 EKs for up to 5 years after surgery (P = 0.717). For the period from 2009 to 2017, the 5-year graft survival rates for ICE syndrome were 64.3% (CI, 21.8%-88.0%) for the 16 PKs and 66.8% (CI, 41.8%-83.0%) for the 26 EKs (P = 0.469). Between 2009 and 2017, the 5-year survival rate for 42 grafts with ICE syndrome was 62.7% (CI, 39.6%-79.0%), which was lower than 75.9% (CI, 74.2%-77.4%) in 7058 transplants for FED but higher than 55.1% (CI, 52.0%-58.0%) in 3320 transplants for PBK, although the numbers of ICE transplants are too small to tell whether this difference was due to chance. CONCLUSIONS: The results indicate no difference in graft survival between PK and EK for ICE syndrome. Graft survival in ICE syndrome is intermediate between that of FED and PBK.
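
A minimal sketch of the Kaplan-Meier and log-rank workflow described above, using the lifelines library and invented toy follow-up data (group sizes and times are illustrative only, not the registry data).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented illustrative data: follow-up in years and failure indicator for PK vs EK grafts.
df = pd.DataFrame({
    "years": [0.5, 1.2, 2.0, 3.1, 4.0, 5.0, 0.8, 1.5, 2.2, 3.5, 4.5, 5.0],
    "failed": [1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0],
    "group": ["PK"] * 6 + ["EK"] * 6,
})

# Kaplan-Meier survival curve per graft type, up to 5 years post-transplant.
for name, g in df.groupby("group"):
    kmf = KaplanMeierFitter()
    kmf.fit(g["years"], event_observed=g["failed"], label=name)
    print(name, kmf.survival_function_.iloc[-1].values)   # estimated survival at last time

# Log-rank comparison between the two graft types.
pk, ek = df[df.group == "PK"], df[df.group == "EK"]
result = logrank_test(pk["years"], ek["years"],
                      event_observed_A=pk["failed"], event_observed_B=ek["failed"])
print(result.p_value)
```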


Subject(s)
Descemet Stripping Endothelial Keratoplasty/methods , Endothelium, Corneal/transplantation , Fuchs' Endothelial Dystrophy/surgery , Graft Rejection/prevention & control , Graft Survival , Iridocorneal Endothelial Syndrome/surgery , Keratoplasty, Penetrating/methods , Female , Follow-Up Studies , Fuchs' Endothelial Dystrophy/diagnosis , Graft Rejection/epidemiology , Humans , Incidence , Iridocorneal Endothelial Syndrome/diagnosis , Male , Middle Aged , Registries , Retrospective Studies , Risk Factors , Time Factors , Treatment Outcome , United Kingdom/epidemiology , Visual Acuity
10.
Transfusion ; 59(11): 3468-3477, 2019 11.
Article in English | MEDLINE | ID: mdl-31618457

ABSTRACT

BACKGROUND: There is renewed interest in administering whole blood (WB) for the resuscitation of patients with bleeding trauma. The shelf life of WB was established decades ago based on the viability of red blood cells. However, plasma quality during WB storage is not established. STUDY DESIGN AND METHODS: White blood cell- and platelet-reduced WB (WB-PLT) was prepared using standard processes and compared to WB processed using a platelet-sparing WBC reduction (WB+PLT) filter. WB (± PLT) was held at 2 to 6°C for 35 days alongside control units of red blood cells (RBCs) in saline, adenine, glucose, and mannitol (SAGM) and units of liquid plasma. A series of assays explored the coagulation potential and RBC quality. RESULTS: While fibrinogen and α2-antiplasmin remained unaffected by storage, other factors varied between components or over time at 2 to 6°C. At 14 days, factor V, factor VII, α2-antiplasmin and free protein S antigen remained on average greater than 0.50 IU/mL or 50%, as appropriate, in WB ± PLT. Factor VIII was on average 0.49 IU/mL in WB+PLT and 0.56 IU/mL in WB-PLT. Free protein S activity decreased significantly in all arms but remained on average greater than 40% at Day 14. Contact activation was not demonstrated before Day 14. Thrombin generation in plasma remained relatively stable to Day 35 in all arms. CONCLUSIONS: Clotting factor activity remained at or above a mean of 0.5 IU/mL, or 50%, at Day 14 for factor V, factor VII, factor VIII, free protein S, fibrinogen, and α2-antiplasmin in all arms. Further data on platelet function in WB+PLT are needed to inform its shelf life.


Subject(s)
Blood Platelets/physiology , Blood Preservation/methods , Erythrocytes/physiology , Plasma , Adenosine Triphosphate/blood , Blood Coagulation , Blood Specimen Collection , Humans , Leukocyte Reduction Procedures , Protein S/analysis , Thrombelastography , alpha-2-Antiplasmin/analysis
11.
Transfusion ; 58(9): 2208-2216, 2018 09.
Article in English | MEDLINE | ID: mdl-30204951

ABSTRACT

BACKGROUND: We sought to compare the quality of washed red blood cells (RBCs) produced using the ACP215 device or manual methods with different combinations of wash and storage solutions. Our aim was to establish manual methods of washing that would permit a shelf life of more than 24 hours. STUDY DESIGN AND METHODS: Fourteen-day-old RBCs were pooled, split, and washed in one of five ways: 1) washed using the ACP215 and stored in SAGM, 2) manually washed and stored in saline, 3) manually washed in saline and stored in SAGM, 4) manually washed in saline-glucose and stored in SAGM, and 5) manually washed and stored in SAGM. Additional units were pooled and split, washed manually or using the ACP215, and irradiated on Day 14. Units were sampled up to 14 days after washing and storage at 4 ± 2°C. RESULTS: All washed RBCs met specification for volume (200-320 mL) and hemoglobin (Hb) content (>40 g/unit). Removal of plasma proteins was better using manual methods: residual immunoglobulin A in saline-glucose-washed cells was 0.033 (0.007-0.058) mg/dL for manual washing versus 0.064 (0.026-0.104) mg/dL for the ACP215 (median, range). Hb loss was lower in manually washed units (mean, ≤2.0 g/unit) than in ACP215-washed units (mean, 6.1 g/unit). Disregarding saline-washed and stored cells, hemolysis in all nonirradiated units was less than 0.8% 14 days after washing. As expected, the use of SAGM to store manually washed units improved adenosine triphosphate, glucose, lactate, and pH versus storage in saline. CONCLUSION: The data suggest that the shelf life of manually washed RBCs could be extended to 14 days if stored in SAGM instead of saline.


Subject(s)
Erythrocytes , Therapeutic Irrigation/methods , Adenine , Automation , Blood Glucose/analysis , Blood Preservation , Erythrocyte Transfusion , Erythrocyte Volume , Erythrocytes/chemistry , Erythrocytes/cytology , Glucose , Hemoglobins/analysis , Hemolysis , Humans , Hydrogen-Ion Concentration , Lactic Acid/blood , Mannitol , Saline Solution , Sodium Chloride , Therapeutic Irrigation/instrumentation , Transfusion Reaction/prevention & control
12.
Crit Care ; 22(1): 164, 2018 06 18.
Article in English | MEDLINE | ID: mdl-29914530

ABSTRACT

BACKGROUND: There is increasing interest in the timely administration of concentrated sources of fibrinogen to patients with major traumatic bleeding. Following evaluation of early cryoprecipitate in the CRYOSTAT 1 trial, we explored the use of fibrinogen concentrate, which may have advantages of more rapid administration in acute haemorrhage. The aims of this pragmatic study were to assess the feasibility of fibrinogen concentrate administration within 45 minutes of hospital admission and to quantify efficacy in maintaining fibrinogen levels ≥ 2 g/L during active haemorrhage. METHODS: We conducted a blinded, randomised, placebo-controlled trial at five UK major trauma centres with adult trauma patients with active bleeding who required activation of the major haemorrhage protocol. Participants were randomised to standard major haemorrhage therapy plus 6 g of fibrinogen concentrate or placebo. RESULTS: Twenty-seven of 39 participants (69%; 95% CI, 52-83%) across both arms received the study intervention within 45 minutes of admission. There was some evidence of a difference in the proportion of participants with fibrinogen levels ≥ 2 g/L between arms (p = 0.10). Fibrinogen levels in the fibrinogen concentrate (FgC) arm rose by a mean of 0.9 g/L (SD, 0.5) compared with a reduction of 0.2 g/L (SD, 0.5) in the placebo arm and were significantly higher in the FgC arm (p < 0.0001) at 2 hours. Fibrinogen levels were not different at day 7. Transfusion use and thromboembolic events were similar between arms. All-cause mortality at 28 days was 35.5% (95% CI, 23.8-50.8%) overall, with no difference between arms. CONCLUSIONS: In this trial, early delivery of fibrinogen concentrate within 45 minutes of admission was not feasible. Although evidence points to a key role for fibrinogen in the treatment of major bleeding, researchers need to recognise the challenges of timely delivery in the emergency setting. Future studies must explore barriers to rapid fibrinogen therapy, focusing on methods to reduce time to randomisation, using 'off-the-shelf' fibrinogen therapies (such as extended shelf-life cryoprecipitate held in the emergency department or fibrinogen concentrates with very rapid reconstitution times) and limiting the need for coagulation test-based transfusion triggers. TRIAL REGISTRATION: ISRCTN67540073 . Registered on 5 August 2015.


Subject(s)
Fibrinogen/therapeutic use , Hemorrhage/drug therapy , Secondary Prevention/standards , Adult , Double-Blind Method , Female , Fibrinogen/administration & dosage , Hemorrhage/etiology , Hemostatics/administration & dosage , Hemostatics/therapeutic use , Humans , Male , Middle Aged , Pilot Projects , Placebos , Pragmatic Clinical Trials as Topic , Secondary Prevention/methods , Treatment Outcome , United Kingdom , Wounds and Injuries/complications , Wounds and Injuries/drug therapy
14.
Haematologica ; 102(3): 476-483, 2017 03.
Article in English | MEDLINE | ID: mdl-27909219

ABSTRACT

The generation of cultured red blood cells from stem cell sources may fill an unmet clinical need for transfusion-dependent patients, particularly in countries that lack a sufficient and safe blood supply. Cultured red blood cells were generated from human CD34+ cells from adult peripheral blood or cord blood by ex vivo expansion, and a comprehensive in vivo survival comparison with standard red cell concentrates was undertaken. Significant amplification (>10⁵-fold) was achieved using CD34+ cells from both cord blood and peripheral blood, generating high yields of enucleated cultured red blood cells. Following transfusion, higher levels of cultured red cells could be detected in the murine circulation compared to standard adult red cells. The proportions of cultured blood cells from cord or peripheral blood sources remained high 24 hours post-transfusion (82±5% and 78±9%, respectively), while standard adult blood cells declined rapidly to only 49±9% by this time. In addition, the survival time of cultured blood cells in mice was longer than that of standard adult red cells. A paired comparison of cultured blood cells and standard adult red blood cells from the same donor confirmed the enhanced in vivo survival capacity of the cultured cells. The study herein represents the first demonstration that ex vivo generated cultured red blood cells survive longer than donor red cells using an in vivo model that more closely mimics clinical transfusion. Cultured red blood cells may offer advantages for transfusion-dependent patients by reducing the number of transfusions required.


Subject(s)
Blood Component Transfusion , Cell Survival , Reticulocytes/metabolism , Reticulocytes/transplantation , Animals , Antigens, CD34/metabolism , Cell Differentiation , Cells, Cultured , Cytophagocytosis , Erythrocytes/metabolism , Hematopoietic Stem Cells/cytology , Hematopoietic Stem Cells/metabolism , Humans , Immunophenotyping , Macrophages , Mice , Phenotype , Reticulocytes/cytology , Transplantation, Heterologous
15.
Transfusion ; 57(4): 881-889, 2017 04.
Article in English | MEDLINE | ID: mdl-27882572

ABSTRACT

BACKGROUND: To make plasma readily available to treat major hemorrhage, some centers internationally use either thawed plasma (TP) or "never-frozen" liquid plasma (LP). Despite the routine use of both, there are limited data comparing the two. The hemostatic properties of LP were evaluated and compared to those of TP in a paired study. STUDY DESIGN AND METHODS: Two ABO-matched plasma units were pooled and split to produce 1 unit for LP and 1 unit for TP. Samples of TP and LP, stored at 2 to 6°C, were tested for a range of coagulation factors, thrombin generation, and rotational thromboelastometry. An additional 119 units of LP were collected and analyzed for markers of contact activation (S-2302 cleavage) and cellular content. RESULTS: LP and TP were compared up to 7 days of storage, with results showing no difference in the rate of change over time for any variable measured. When compared to Day 5, LP on Day 7 showed no difference for any factor measured; however, on Day 11 Factor (F)II, FV, FVII, and protein S (activity) were lower. Analysis of 119 LP units showed that 26 of 119 (22%) exhibited cold-induced contact activation by Day 28. CONCLUSION: LP and TP were comparable in terms of hemostatic variables up to 7 days of storage. The decline in coagulation factor activity and the increased risk of activation during storage of LP need to be balanced against availability of supply and clinical need when considering the use of LP stored for more than 7 days.


Subject(s)
ABO Blood-Group System , Cryopreservation , Plasma/chemistry , Humans , Male , Thrombelastography/methods , Time Factors
16.
Am J Ophthalmol ; 170: 50-57, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27491697

ABSTRACT

PURPOSE: To investigate graft survival and rejection following sequential bilateral corneal transplantation. DESIGN: Retrospective cohort study. METHODS: The study included patients with Fuchs endothelial dystrophy (FED), pseudophakic bullous keratopathy (PBK), or keratoconus (KC) who had undergone a penetrating keratoplasty (PK), endothelial keratoplasty (EK), or deep anterior lamellar keratoplasty (DALK) between 1999 and 2012. The main cohort included patients who had received a first transplant in both eyes for the same indication, and a control cohort included patients who had undergone a unilateral first corneal transplant. Main outcome measures were graft rejection or failure at 5 years. RESULTS: A total of 11 822 patients were included, of whom 9335 had a unilateral and 2487 a bilateral corneal transplantation. For patients with FED (P < .005) and KC (P = .03), but not PBK (P = .19), a transplant in the second eye was associated with a 50% reduction in risk of graft failure within 5 years in the first eye (FED: hazard ratio [HR] 0.47, 95% confidence interval [CI]: 0.34-0.64; KC: HR 0.50, 95% CI: 0.24-1.02). For FED this was dependent on the type of transplant (EK: HR 0.30, 95% CI: 0.17-0.52; PK: HR 0.61, 95% CI: 0.42-0.88). We found no association between a transplant in the second eye and a rejection episode in the first eye (KC P = .19, FED P = .39, PBK P = .19). CONCLUSION: For FED and KC, a transplant in the second eye was associated with a reduced risk of graft failure in the first eye, independent of inter-transplant time. For FED this effect was pronounced following an EK in the first eye, where the risk of failure was reduced by 70%.
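
The hazard ratios above come from Cox proportional hazards models. A minimal lifelines sketch with invented, randomly generated data is shown below to illustrate the model form only; in the actual study the second-eye transplant would need more careful handling (for example as a time-dependent covariate), which this simplification omits.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 500

# Invented illustrative data: first-eye follow-up, failure indicator, and whether a
# second-eye transplant was performed (treated as time-fixed here for simplicity).
df = pd.DataFrame({
    "years": rng.exponential(scale=8, size=n).clip(max=5),
    "failed": rng.integers(0, 2, size=n),
    "second_eye": rng.integers(0, 2, size=n),
    "age": rng.normal(70, 10, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="failed")
print(cph.hazard_ratios_)   # an HR < 1 for `second_eye` would indicate lower risk of first-eye failure
```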


Subject(s)
Corneal Diseases/surgery , Graft Survival/physiology , Keratoplasty, Penetrating/methods , Adult , Aged , Cohort Studies , Corneal Diseases/physiopathology , Female , Graft Rejection/physiopathology , Humans , Male , Middle Aged , Retrospective Studies , Risk Factors , Visual Acuity
17.
Transfusion ; 55(8): 1964-71, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25906831

ABSTRACT

BACKGROUND: There is increasing evidence for restrictive red blood cell (RBC) transfusion but compliance with recommended transfusion triggers is variable. A clinical decision support system (CDSS) has been found to reduce unnecessary transfusion in some clinical settings when physicians are advised they are noncompliant with the current guidelines. The objective was to assess the impact of a CDSS for blood product ordering in patients with hematologic disease. STUDY DESIGN AND METHODS: All platelet (PLT) and RBC transfusions were identified in hematology patients in three periods: before (baseline), immediately after (CDSS1), and 7 months after implementation of CDSS for blood ordering (CDSS2). Compliance with the recommended transfusion triggers was monitored for all orders made by CDSS or non-CDSS methods during each period. RESULTS: Ninety-seven patients with a variety of hematologic diagnoses received 502 RBC and 572 PLT transfusions during the three periods with no significant difference in 1) the mean number of transfusions per patient, 2) the proportion of patients transfused, 3) posttransfusion hemoglobin (Hb), and 4) pre- and posttransfusion PLT count, although mean pretransfusion Hb decreased. The proportion of noncompliant RBC and PLT transfusion requests improved from baseline to CDSS2 (69.0% to 43.4%, p ≤ 0.005 for RBCs; 41.9% to 31.2%, p = 0.16 for PLTs) when all orders were compared, although this improvement was not significant at the 5% level for PLTs. CONCLUSIONS: The introduction of CDSS for blood product ordering supported by education and physician feedback in the hematology setting had an immediate impact on improving compliance with guidelines for restrictive transfusion practice.


Subject(s)
Decision Support Systems, Clinical , Erythrocyte Transfusion/statistics & numerical data , Guideline Adherence , Hematologic Diseases/therapy , Platelet Transfusion/statistics & numerical data , Unnecessary Procedures , Adult , Aged , Erythrocyte Transfusion/standards , Female , Hematologic Diseases/blood , Hemoglobins/analysis , Humans , Inappropriate Prescribing/statistics & numerical data , Male , Middle Aged , Platelet Count , Platelet Transfusion/standards , Practice Guidelines as Topic , Practice Patterns, Physicians'/statistics & numerical data , Prescriptions/statistics & numerical data
18.
Nephron ; 129 Suppl 1: 247-56, 2015.
Article in English | MEDLINE | ID: mdl-25695815

ABSTRACT

BACKGROUND: Renal transplantation is recognised as being the optimal treatment modality for many patients with established renal failure. This analysis aimed to explore inter-centre variation in access to renal transplantation in the UK. METHODS: Transplant activity and waiting list data were obtained from NHS Blood and Transplant, and demographic and laboratory data were obtained from the UK Renal Registry. All incident renal replacement therapy (RRT) patients starting treatment between 1st January 2008 and 31st December 2010 from 71 renal centres were considered for inclusion. The cohort was followed until 31st December 2012 (or until transplantation or death, whichever was earliest). RESULTS: Age, ethnicity and primary renal diagnosis were associated with both accessing the kidney transplant waiting list and receiving a kidney transplant. A patient starting dialysis in a non-transplanting renal centre was less likely to be registered for transplantation (OR 0.74, 95% CI 0.68-0.81) or to receive a transplant from a donor after cardiac death or a living kidney donor (OR 0.75, 95% CI 0.67-0.84) compared with patients cared for in transplanting renal centres. Once registered for kidney transplantation, patients in both transplanting and non-transplanting renal centres had an equal chance of receiving a transplant from a donor after brainstem death (OR 0.93, 95% CI 0.78-1.10). CONCLUSION: There was wide variation in access to kidney transplantation between UK renal centres which cannot be explained by differences in case mix.
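
The odds ratios above are the kind produced by adjusted logistic regression. A hedged sketch with invented data and hypothetical covariate names is shown below; the actual analysis adjusted for more factors (ethnicity, primary renal diagnosis) and its exact model specification is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(21)
n = 3000

# Invented illustrative data: age, centre type, and whether the patient was listed for transplant.
df = pd.DataFrame({
    "age": rng.normal(55, 15, size=n),
    "non_transplanting_centre": rng.integers(0, 2, size=n),
})
logit_p = 1.5 - 0.04 * (df["age"] - 55) - 0.3 * df["non_transplanting_centre"]
df["listed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Adjusted odds ratios (with confidence intervals) for being registered on the waiting list.
fit = smf.logit("listed ~ age + non_transplanting_centre", data=df).fit(disp=False)
print(np.exp(fit.params).round(2))
print(np.exp(fit.conf_int()).round(2))
```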


Subject(s)
Health Services Accessibility , Kidney Transplantation , Registries , Adolescent , Adult , Female , Humans , Male , Middle Aged , United Kingdom , Young Adult
19.
Transfusion ; 51(6): 1284-90, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21133927

ABSTRACT

BACKGROUND: Most hematopoietic progenitor cell (HPC) products are infused or processed shortly after collection, but in some cases this may be delayed for up to 48 hours. A number of variables such as temperature and cell concentration are of critical importance for the integrity of HPCs during this time. STUDY DESIGN AND METHODS: We evaluated critical variables using cord blood HPC units that were divided equally and stored at 4 °C versus room temperature (RT) for up to 96 hours. Total nucleated cell (TNC) and mononuclear cell (MNC) counts, viable CD34+ cell counts, and CD45+ cell viability, as well as colony-forming units-granulocyte-macrophage (CFU-GM) present over time at each temperature, were determined. RESULTS: Overall, the data indicate that, with the exception of viable CD34+ cells, there was a significant decrease in each variable measured over 72 to 96 hours and, with the exception of viable CD34+ cells and CFU-GM, the reductions were significantly greater in RT units than in 4 °C units. There was an increase in viable CD34+ count for units where the TNC count was greater than 8.5 × 10⁹/L, compared with units where the TNC count was less than 8.5 × 10⁹/L, and this differed by storage temperature. CONCLUSIONS: Cord blood HPC collections maintained at 4 °C retained higher TNC counts, MNC counts, and CD45+ cell viability over a 72- to 96-hour storage period.


Subject(s)
Blood Preservation/methods , Granulocyte-Macrophage Progenitor Cells/cytology , Hematopoietic Stem Cells/cytology , Antigens, CD34/metabolism , Fetal Blood/cytology , Temperature
20.
World J Gastroenterol ; 16(40): 5070-6, 2010 Oct 28.
Article in English | MEDLINE | ID: mdl-20976844

ABSTRACT

AIM: To investigate the outcome of patients with hepatitis C virus (HCV) infection undergoing liver retransplantation. METHODS: Using the UK National Registry, patients undergoing liver transplantation for HCV-related liver disease were identified. Data on patient and graft characteristics, as well as transplant and graft survival, were collected to determine the outcome of HCV patients undergoing retransplantation and to identify factors associated with transplant survival. RESULTS: Between March 1994 and December 2007, 944 adult patients were transplanted for HCV-related liver disease. At the end of follow-up, 617 of these patients were alive. In total, 194 (21%) patients had first graft failure and of these, 80 underwent liver retransplantation, including 34 patients in whom the first graft failed due to recurrent disease. For those transplanted for HCV-related disease, the 5-year graft survival in those retransplanted for recurrent HCV was 45% [95% confidence interval (CI): 24%-64%] compared with 80% (95% CI: 62%-90%) for those retransplanted for other indications (P = 0.01, log-rank test); the 5-year transplant survival after retransplantation was 43% (95% CI: 23%-62%) and 46% (95% CI: 31%-60%), respectively (P = 0.8, log-rank test). In univariate analysis of all patients retransplanted, no factor analyzed was significantly associated with transplant survival. CONCLUSION: Outcomes for retransplantation in patients with HCV infection approach agreed criteria for minimum transplant benefit. These data support selective liver retransplantation in patients with HCV infection.


Subject(s)
Graft Rejection/surgery , Hepatitis C, Chronic/surgery , Liver Transplantation , Adult , Female , Follow-Up Studies , Graft Survival , Humans , Male , Middle Aged , Reoperation , Resource Allocation , Retrospective Studies , Treatment Outcome , United Kingdom