Results 1 - 20 of 99
1.
medRxiv ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978666

ABSTRACT

IMPORTANCE: Improving the efficiency of interim assessments in phase III trials should reduce trial costs, hasten the approval of efficacious therapies, and mitigate patient exposure to disadvantageous randomizations. OBJECTIVE: We hypothesized that in silico Bayesian early stopping rules improve the efficiency of phase III trials compared with the original frequentist analysis without compromising overall interpretation. DESIGN: Cross-sectional analysis. SETTING: 230 randomized phase III oncology trials enrolling 184,752 participants. PARTICIPANTS: Individual patient-level data were manually reconstructed from primary endpoint Kaplan-Meier curves. INTERVENTIONS: Trial accruals were simulated 100 times per trial and leveraged published patient outcomes such that only the accrual dynamics, and not the patient outcomes, were randomly varied. MAIN OUTCOMES AND MEASURES: Early stopping was triggered per simulation if interim analysis demonstrated ≥ 85% probability of minimum clinically important difference/3 for efficacy or futility. Trial-level early closure was defined by stopping frequencies ≥ 0.75. RESULTS: A total of 12,451 simulations (54%) met early stopping criteria. Trial-level early stopping frequency was highly predictive of the published outcome (OR, 7.24; posterior probability of association, >99.99%; AUC, 0.91; P < 0.0001). Trial-level early closure was recommended for 82 trials (36%), including 62 trials (76%) that had performed frequentist interim analysis. Bayesian early stopping rules were 96% sensitive (95% CI, 91% to 98%) for detecting trials with a primary endpoint difference, and there was a high level of agreement in overall trial interpretation (Bayesian Cohen's κ, 0.95; 95% CrI, 0.92 to 0.99). However, Bayesian interim analysis was associated with >99.99% posterior probability of reducing patient enrollment requirements (P < 0.0001), with an estimated cumulative enrollment reduction of 20,543 patients (11%; 89 patients averaged equally over all studied trials) and an estimated cumulative cost savings of 851 million USD (3.7 million USD averaged equally over all studied trials). CONCLUSIONS AND RELEVANCE: Bayesian interim analyses may improve randomized trial efficiency by reducing enrollment requirements without compromising trial interpretation. Increased utilization of Bayesian interim analysis has the potential to reduce costs of late-phase trials, reduce patient exposures to ineffective therapies, and accelerate approvals of effective therapies. KEY POINTS: Question: What are the effects of Bayesian early stopping rules on the efficiency of phase III randomized oncology trials? Findings: Individual patient-level outcomes were reconstructed for 184,752 patients from 230 trials. Compared with the original interim analysis strategy, in silico Bayesian interim analysis reduced patient enrollment requirements and preserved the original trial interpretation. Meaning: Bayesian interim analysis may improve the efficiency of conducting randomized trials, leading to reduced costs, reduced exposure of patients to disadvantageous treatments, and accelerated approval of efficacious therapies.
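As a rough illustration of the stopping rule described above, the following Python sketch formalizes one simulated interim look and the trial-level closure criterion. The function names, the flat-prior normal approximation to the posterior, and all example numbers are my own assumptions for illustration, not details taken from the study.

```python
import numpy as np
from scipy.stats import norm

def interim_stop(loghr_hat, se, mcid_loghr, prob=0.85):
    """One in-silico interim look, loosely mirroring the published rule.

    With a flat prior, the posterior of the true log-HR is approximately
    Normal(loghr_hat, se**2); a negative log-HR means benefit.
    """
    # Posterior probability the true effect is better than MCID/3
    p_benefit = norm.cdf(mcid_loghr / 3, loc=loghr_hat, scale=se)
    if p_benefit >= prob:
        return "stop_efficacy"
    if 1 - p_benefit >= prob:
        return "stop_futility"
    return "continue"

def trial_level_closure(decisions, freq=0.75):
    """Recommend trial-level early closure if >=75% of simulations stopped."""
    return np.mean([d != "continue" for d in decisions]) >= freq

# Example: 100 simulated accruals with slightly jittered interim estimates
rng = np.random.default_rng(0)
mcid = np.log(0.80)  # hypothetical MCID expressed as a hazard ratio
looks = [interim_stop(rng.normal(np.log(0.72), 0.02), se=0.10, mcid_loghr=mcid)
         for _ in range(100)]
print(trial_level_closure(looks))  # True: this hypothetical trial closes early
```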

2.
Oncologist ; 29(7): 547-550, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38824414

ABSTRACT

Missing visual elements (MVE) in Kaplan-Meier (KM) curves can misrepresent data, preclude curve reconstruction, and hamper transparency. This study evaluated KM plots of phase III oncology trials. MVE were defined as an incomplete y-axis range or missing number at risk table in a KM curve. Surrogate endpoint KM curves were additionally evaluated for complete interpretability, defined by (1) reporting the number of censored patients and (2) correspondence of the disease assessment interval with the number at risk interval. Among 641 trials enrolling 518 235 patients, 116 trials (18%) had MVE in KM curves. Industry sponsorship, larger trials, and more recently published trials were correlated with lower odds of MVE. Only 3% of trials (15 of 574) published surrogate endpoint KM plots with complete interpretability. Improvements in the quality of KM curves of phase III oncology trials, particularly for surrogate endpoints, are needed for greater interpretability, reproducibility, and transparency in oncology research.
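For readers who want to see what a KM plot without these missing visual elements looks like, here is a minimal sketch using Python's lifelines library on simulated data. It includes the elements the study flags: a complete y-axis range, censoring marks, and a number-at-risk table. The data and labels are invented.

```python
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.plotting import add_at_risk_counts

rng = np.random.default_rng(0)
months = rng.exponential(24, size=200)   # simulated time to event
observed = rng.uniform(size=200) < 0.7   # ~30% of patients censored

kmf = KaplanMeierFitter(label="experimental arm")
kmf.fit(months, event_observed=observed)

ax = kmf.plot_survival_function(show_censors=True)  # censoring tick marks
ax.set_ylim(0, 1)                                   # complete y-axis range
ax.set_xlabel("Months since randomization")
ax.set_ylabel("Survival probability")
add_at_risk_counts(kmf, ax=ax)                      # number-at-risk table
plt.tight_layout()
plt.show()
```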


Subject(s)
Clinical Trials, Phase III as Topic , Kaplan-Meier Estimate , Humans , Clinical Trials, Phase III as Topic/standards , Neoplasms/therapy , Medical Oncology/standards , Medical Oncology/methods
4.
JAMA Netw Open ; 7(3): e243379, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38546648

ABSTRACT

Importance: Subgroup analyses are often performed in oncology to investigate differential treatment effects and may even constitute the basis for regulatory approvals. Current understanding of the features, results, and quality of subgroup analyses is limited. Objective: To evaluate forest plot interpretability and credibility of differential treatment effect claims among oncology trials. Design, Setting, and Participants: This cross-sectional study included randomized phase 3 clinical oncology trials published prior to 2021. Trials were screened from ClinicalTrials.gov. Main Outcomes and Measures: Missing visual elements in forest plots were defined as a missing point estimate or use of a linear x-axis scale for hazard and odds ratios. Multiplicity of testing control was recorded. Differential treatment effect claims were rated using the Instrument for Assessing the Credibility of Effect Modification Analyses. Linear and logistic regressions evaluated associations with outcomes. Results: Among 785 trials, 379 studies (48%) enrolling 331 653 patients reported a subgroup analysis. The forest plots of 43% of trials (156 of 363) were missing visual elements impeding interpretability. While 4148 subgroup effects were evaluated, only 1 trial (0.3%) controlled for multiple testing. On average, trials that did not meet the primary end point conducted 2 more subgroup effect tests compared with trials meeting the primary end point (95% CI, 0.59-3.43 tests; P = .006). A total of 101 differential treatment effects were claimed across 15% of trials (55 of 379). Interaction testing was missing in 53% of trials (29 of 55) claiming differential treatment effects. Trials not meeting the primary end point were associated with greater odds of no interaction testing (odds ratio, 4.47; 95% CI, 1.42-15.55, P = .01). The credibility of differential treatment effect claims was rated as low or very low in 93% of cases (94 of 101). Conclusions and Relevance: In this cross-sectional study of phase 3 oncology trials, nearly half of trials presented a subgroup analysis in their primary publication. However, forest plots of these subgroup analyses largely lacked essential features for interpretation, and most differential treatment effect claims were not supported. Oncology subgroup analyses should be interpreted with caution, and improvements to the quality of subgroup analyses are needed.
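Given that only 1 of 379 trials controlled for multiple testing, it is worth showing how little code such control requires. This is a minimal sketch applying a Holm correction with statsmodels; the interaction-test p-values are invented for illustration.

```python
from statsmodels.stats.multitest import multipletests

# Invented interaction-test p-values, one per subgroup in a forest plot
p_raw = [0.04, 0.21, 0.012, 0.48, 0.003, 0.09, 0.31, 0.02]

reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm")
for p, q, r in zip(p_raw, p_adj, reject):
    print(f"raw p = {p:.3f} -> Holm-adjusted p = {q:.3f}, significant: {r}")
```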


Subject(s)
Medical Oncology , Neoplasms , Humans , Cross-Sectional Studies , Neoplasms/therapy , Odds Ratio , Randomized Controlled Trials as Topic , Clinical Trials, Phase III as Topic
5.
J Natl Cancer Inst ; 116(6): 990-994, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38331394

ABSTRACT

Differential censoring, which refers to censoring imbalance between treatment arms, may bias the interpretation of survival outcomes in clinical trials. In 146 phase III oncology trials with statistically significant time-to-event surrogate primary endpoints, we evaluated the association between differential censoring in the surrogate primary endpoints, control arm adequacy, and the subsequent statistical significance of overall survival results. Twenty-four (16%) trials exhibited differential censoring that favored the control arm, whereas 15 (10%) exhibited differential censoring that favored the experimental arm. Positive overall survival was more common in control arm differential censoring trials (63%) than in trials without differential censoring (37%) or with experimental arm differential censoring (47%; odds ratio = 2.64, 95% confidence interval = 1.10 to 7.20; P = .04). Control arm differential censoring trials more frequently used suboptimal control arms at 46% compared with 20% without differential censoring and 13% with experimental arm differential censoring (odds ratio = 3.60, 95% confidence interval = 1.29 to 10.0; P = .007). The presence of control arm differential censoring in trials with surrogate primary endpoints, especially in those with overall survival conversion, may indicate an inadequate control arm and should be examined and explained.
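To make differential censoring concrete: from reconstructed patient-level data, one can compare the censoring proportions between arms directly. A minimal sketch with invented event indicators, using scipy's Fisher exact test (the specific comparison below is my own illustration, not the paper's method):

```python
import numpy as np
from scipy.stats import fisher_exact

# Invented reconstructed data: 1 = endpoint event observed, 0 = censored
control = np.array([1] * 160 + [0] * 40)       # 20% censored
experimental = np.array([1] * 180 + [0] * 20)  # 10% censored

cens_ctrl = int((control == 0).sum())
cens_exp = int((experimental == 0).sum())
table = [[cens_ctrl, len(control) - cens_ctrl],
         [cens_exp, len(experimental) - cens_exp]]

odds_ratio, p = fisher_exact(table)
print(f"censoring: control {cens_ctrl / len(control):.0%} vs "
      f"experimental {cens_exp / len(experimental):.0%} "
      f"(OR = {odds_ratio:.2f}, p = {p:.4f})")
```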


Subject(s)
Neoplasms , Humans , Neoplasms/mortality , Neoplasms/therapy , Clinical Trials, Phase III as Topic , Research Design/standards , Medical Oncology/standards
6.
Arthroscopy ; 40(2): 303-304, 2024 02.
Article in English | MEDLINE | ID: mdl-38296436

ABSTRACT

Chronic retracted rotator cuff tears are difficult entities to treat. L-shaped tears are a particular subset of such rotator cuff tears that pose challenges for surgeons attempting to reduce the supraspinatus tendon back to the greater tuberosity. Lack of full coverage of the tuberosity, the need for medialization of the tendon, undue tension, and incomplete reconstitution of the rotator cable are some of the reasons L-shaped retracted tears of the supraspinatus can be challenging. Anterior cable reconstruction (ACR) has recently gained popularity, as has patch augmentation. The long head of the biceps tendon is often readily available for use in ACR, but when it is not, patch augmentation is an option for partially repairable rotator cuff tears. The two approaches produce similar postoperative improvements in range of motion and in Constant and American Shoulder and Elbow Surgeons scores, but how either compares with partial repair alone remains unknown.


Subject(s)
Rotator Cuff Injuries , Humans , Rotator Cuff Injuries/surgery , Rotator Cuff/surgery , Autografts , Elbow , Tendons/surgery , Allografts , Range of Motion, Articular
7.
Prog Retin Eye Res ; 99: 101246, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262557

ABSTRACT

Due to the increasing prevalence of high myopia around the world, structural and functional damage to the optic nerve in high myopia has recently attracted much attention. Evidence has shown that high myopia is related to the development of glaucomatous or glaucoma-like optic neuropathy, and that the two share many common features. These similarities often pose a diagnostic challenge that affects the subsequent management of glaucoma suspects with high myopia. In this review, we summarize similarities and differences in optic neuropathy arising from non-pathologic high myopia and glaucoma by considering their respective structural and functional characteristics on fundus photography, optical coherence tomography scanning, and visual field tests. These features may also help to distinguish the underlying mechanisms of the optic neuropathies and to determine management strategies for patients with high myopia and glaucoma.


Subject(s)
Glaucoma , Myopia , Optic Disk , Optic Nerve Diseases , Humans , Optic Disk/pathology , Intraocular Pressure , Glaucoma/diagnosis , Optic Nerve Diseases/pathology , Myopia/complications , Myopia/diagnosis , Tomography, Optical Coherence/methods
8.
Asia Pac J Ophthalmol (Phila) ; 12(6): 512-536, 2023.
Article in English | MEDLINE | ID: mdl-38117598

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 was one of the most devastating public health issues in recent decades. The ophthalmology community is as concerned about the COVID-19 pandemic as the global public health community, as COVID-19 was recognized early in the course of the outbreak to affect multiple organs in the human body, including the eyes. Ophthalmic manifestations of COVID-19 are highly variable and range from mild ocular surface abnormalities to potentially sight- and life-threatening orbital and neuro-ophthalmic diseases. Furthermore, ophthalmic manifestations may be the presenting or the only findings in COVID-19 infections. Meanwhile, global vaccination campaigns to attain herd immunity in different populations are the major strategy to mitigate the pandemic. As novel COVID-19 vaccines emerged, so did reports of adverse ophthalmic reactions potentially related to them. As the world enters a post-pandemic state in which COVID-19 continues to exist and evolve as an endemic disease globally, the ophthalmology community ought to keep abreast of the latest knowledge on ophthalmic associations with COVID-19 and its vaccinations. This review summarizes the latest literature on the ophthalmic manifestations of COVID-19 and the adverse ophthalmic reactions related to its vaccinations.


Subject(s)
COVID-19 , Eye Diseases , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics , SARS-CoV-2 , Eye Diseases/epidemiology , Eye Diseases/etiology , Vaccination/adverse effects
10.
Asia Pac J Ophthalmol (Phila) ; 12(5): 444-450, 2023.
Article in English | MEDLINE | ID: mdl-37851561

ABSTRACT

PURPOSE: To report the outcomes of a 120-degree goniotomy (GT) with or without secondary intraocular lens (IOL) implantation in glaucoma following cataract surgery (GFCS). DESIGN: Prospective, observational study. METHODS: Pediatric patients with GFCS who underwent standalone 120-degree GT or 120-degree GT combined with secondary IOL implantation (GT+IOL) from March 2022 to August 2022 at the Zhongshan Ophthalmic Center were recruited. Primary outcomes were intraocular pressure (IOP) and the number of ocular hypotensive medications. A secondary outcome was the surgical success rate. Success was defined as a postoperative IOP within the range of 5-21 mm Hg; complete and qualified success were defined as meeting this criterion without and with ocular hypotensive medications, respectively. RESULTS: Thirty-two eyes of 22 patients were included. The mean age at the time of GT was 68.5 ± 29.3 months. The mean follow-up duration was 12.2 ± 2.3 months (9-15 mo). Mean IOP decreased from 30.9 ± 4.8 mm Hg on 2 (interquartile range = 1) medications at baseline to 15.8 ± 3.6 mm Hg on 0 (interquartile range = 1.5) medications at the latest visit in all eyes. The overall complete and qualified success rates were 68.8% and 90.6%, respectively. There were no significant differences in IOP, number of medications, or complete and qualified success rates between the standalone GT and GT+IOL groups at the latest follow-up at 9 months postoperatively. CONCLUSIONS: 120-degree GT was a safe and effective surgical treatment for GFCS in children and could be combined with secondary IOL implantation in aphakic eyes to reduce the need for additional surgery.


Subject(s)
Cataract , Glaucoma , Trabeculectomy , Humans , Child , Child, Preschool , Lens Implantation, Intraocular , Pilot Projects , Prospective Studies , Intraocular Pressure , Cataract/complications , Treatment Outcome , Glaucoma/surgery , Glaucoma/complications , Retrospective Studies
11.
Eur J Cancer ; 194: 113357, 2023 11.
Article in English | MEDLINE | ID: mdl-37827064

ABSTRACT

BACKGROUND: The 'Table 1 Fallacy' refers to the unsound use of significance testing for comparing the distributions of baseline variables between randomised groups to draw erroneous conclusions about balance or imbalance. We performed a cross-sectional study of the Table 1 Fallacy in phase III oncology trials. METHODS: From ClinicalTrials.gov, 1877 randomised trials were screened. Multivariable logistic regressions evaluated predictors of the Table 1 Fallacy. RESULTS: A total of 765 randomised controlled trials involving 553,405 patients were analysed. The Table 1 Fallacy was observed in 25% of trials (188 of 765), with 3% of comparisons deemed significant (59 of 2353), approximating the typical 5% type I error probability. Application of trial-level multiplicity corrections reduced the rate of significant findings to 0.3% (six of 2345 tests). Factors associated with lower odds of the Table 1 Fallacy included industry sponsorship (adjusted odds ratio [aOR] 0.29, 95% confidence interval [CI] 0.18-0.47; multiplicity-corrected P < 0.0001), larger trial size (≥795 versus <280 patients; aOR 0.32, 95% CI 0.19-0.53; multiplicity-corrected P = 0.0008), and publication in a European versus American journal (aOR 0.06, 95% CI 0.03-0.13; multiplicity-corrected P < 0.0001). CONCLUSIONS: This study highlights the persistence of the Table 1 Fallacy in contemporary oncology randomised controlled trials, with one of every four trials testing for baseline differences after randomisation. Significance testing is a suboptimal method for identifying unsound randomisation procedures and may encourage misleading inferences. Journal-level enforcement is a possible strategy to help mitigate this fallacy.
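The "roughly 5% significant by chance" point is easy to reproduce: under proper randomization, both arms draw each baseline covariate from the same distribution, so about 5% of tests reject at alpha = 0.05 regardless of how well the randomization worked. A quick simulation sketch (all numbers invented):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_trials, n_covariates, n_per_arm = 1000, 10, 200

false_positives = 0
for _ in range(n_trials):
    for _ in range(n_covariates):
        a = rng.normal(size=n_per_arm)  # arm A baseline covariate
        b = rng.normal(size=n_per_arm)  # arm B: same distribution by construction
        if ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

rate = false_positives / (n_trials * n_covariates)
print(f"'significant' baseline imbalances: {rate:.1%}")  # ~5%, by design
```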


Subject(s)
Neoplasms , Humans , Prevalence , Cross-Sectional Studies , Neoplasms/epidemiology , Neoplasms/therapy , Randomized Controlled Trials as Topic
12.
Semin Radiat Oncol ; 33(4): 429-437, 2023 10.
Article in English | MEDLINE | ID: mdl-37684072

ABSTRACT

Optimal management of cancer patients relies heavily on late-phase oncology randomized controlled trials. A comprehensive understanding of the key considerations in designing and interpreting late-phase trials is crucial for improving subsequent trial design, execution, and clinical decision-making. In this review, we explore important aspects of late-phase oncology trial design. We begin by examining the selection of primary endpoints, including the advantages and disadvantages of using surrogate endpoints. We address the challenges involved in assessing tumor progression and discuss strategies to mitigate bias. We define informative censoring bias and its impact on trial results, including illustrative examples of scenarios that may lead to informative censoring. We highlight the traditional roles of the log-rank test and hazard ratio in survival analyses, along with their limitations in the presence of nonproportional hazards, and introduce alternative survival estimands such as restricted mean survival time and the MaxCombo test. We emphasize the distinctions between the design and interpretation of superiority and noninferiority trials, and compare Bayesian and frequentist statistical approaches. Finally, we discuss appropriate utilization of phase II and phase III trial results in shaping clinical management recommendations and evaluate the inherent risks and benefits associated with relying on phase II data for treatment decisions.
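As a concrete example of one alternative estimand mentioned above, this hypothetical sketch computes a restricted mean survival time (RMST) difference with lifelines on simulated arms whose hazards are nonproportional. The distributions, horizon, and sample sizes are all invented for illustration.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(1)
# Simulated arms: constant-hazard control vs increasing-hazard experimental,
# so a single hazard ratio would be misleading
t_ctrl = rng.exponential(12, size=300)
t_exp = rng.weibull(2.0, size=300) * 16
e_ctrl = rng.uniform(size=300) < 0.8   # ~20% censored
e_exp = rng.uniform(size=300) < 0.8

km_ctrl = KaplanMeierFitter().fit(t_ctrl, e_ctrl)
km_exp = KaplanMeierFitter().fit(t_exp, e_exp)

tau = 24.0  # pre-specified horizon in months
rmst_ctrl = restricted_mean_survival_time(km_ctrl, t=tau)
rmst_exp = restricted_mean_survival_time(km_exp, t=tau)
print(f"RMST difference at {tau} months: {rmst_exp - rmst_ctrl:.2f} months")
```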


Subject(s)
Neoplasms , Humans , Bayes Theorem , Clinical Decision-Making , Medical Oncology , Neoplasms/radiotherapy , Randomized Controlled Trials as Topic
13.
Surg Oncol Clin N Am ; 32(3): 461-473, 2023 07.
Article in English | MEDLINE | ID: mdl-37182987

ABSTRACT

The current preferred standard of care for patients with locally advanced rectal cancer is total neoadjuvant therapy, in which all chemotherapy and radiotherapy are delivered before surgery. Within this approach, developed in response to persistently high distant failure rates despite excellent local control with preoperative chemoradiotherapy, questions remain regarding the optimal radiotherapy regimen (short course vs long course) and the sequencing of chemotherapy (induction vs consolidation).


Subject(s)
Neoplasm Recurrence, Local , Rectal Neoplasms , Humans , Treatment Outcome , Neoplasm Recurrence, Local/drug therapy , Rectal Neoplasms/radiotherapy , Rectal Neoplasms/surgery , Chemoradiotherapy , Neoadjuvant Therapy
14.
JAMA Netw Open ; 6(5): e2313819, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37195664

ABSTRACT

Importance: Primary end point (PEP) changes to an active clinical trial raise questions regarding trial quality and the risk of outcome reporting bias. It is unknown how the frequency and transparency of the reported changes depend on reporting method and whether the PEP changes are associated with trial positivity (ie, the trial met the prespecified statistical threshold for PEP positivity). Objectives: To assess the frequency of reported PEP changes in oncology randomized clinical trials (RCTs) and whether these changes are associated with trial positivity. Design, Setting, and Participants: This cross-sectional study used publicly available data for completed oncology phase 3 RCTs registered in ClinicalTrials.gov from inception through February 2020. Main Outcomes and Measures: The main outcome was change between the initial PEP and the final reported PEP, assessed using 3 methods: (1) history of tracked changes on ClinicalTrials.gov, (2) self-reported changes noted in the article, and (3) changes reported within the protocol, including all available protocol documents. Logistic regression analyses were performed to evaluate whether PEP changes were associated with US Food and Drug Administration approval or trial positivity. Results: Of 755 included trials, 145 (19.2%) had PEP changes found by at least 1 of the 3 detection methods. Of the 145 trials with PEP changes, 102 (70.3%) did not have PEP changes disclosed within the manuscript. There was significant variability in rates of PEP detection by each method (χ2 = 72.1; P < .001). Across all methods, PEP changes were detected at higher rates when multiple versions of the protocol (47 of 148 [31.8%]) were available compared with 1 version (22 of 134 [16.4%]) or no protocol (76 of 473 [16.1%]) (χ2 = 18.7; P < .001). Multivariable analysis demonstrated that PEP changes were associated with trial positivity (odds ratio, 1.86; 95% CI, 1.25-2.82; P = .003). Conclusions and Relevance: This cross-sectional study revealed substantial rates of PEP changes among active RCTs; PEP changes were markedly underreported in published articles and mostly occurred after reported study completion dates. Significant discrepancies in the rate of detected PEP changes call into question the role of increased protocol transparency and completeness in identifying key changes occurring in active trials.


Subject(s)
Medical Oncology , Neoplasms , Humans , Incidence , Randomized Controlled Trials as Topic , Bias , Neoplasms/epidemiology
15.
Support Care Cancer ; 31(6): 322, 2023 May 06.
Article in English | MEDLINE | ID: mdl-37148382

ABSTRACT

PURPOSE: Proactive nutrition screening and intervention are associated with improved outcomes for patients with pancreatic adenocarcinoma (PDAC). To better optimize nutrition in our PDAC population, we implemented systematic malnutrition screening in the Johns Hopkins pancreas multidisciplinary clinic (PMDC) and assessed the effectiveness of our nutrition referral system. METHODS: This was a single-institution prospective study of patients seen in the PMDC, screened for malnutrition using the Malnutrition Screening Tool (MST; score range = 0 to 5, with a score > 2 indicating risk of malnutrition), and offered referrals to the oncology dietitian. Patients who requested a referral but did not attend a nutrition appointment were contacted by phone to assess barriers to seeing the dietitian. Univariate (UVA) and multivariable (MVA) analyses were carried out to identify predictors of referral status and appointment completion status. RESULTS: A total of 97 patients were included in the study, of which 72 (74.2%) requested a referral and 25 (25.8%) declined. Of the 72 patients who requested a referral, 31 (43.1%) attended an appointment with the oncology dietitian. Data on information session attendance were available for 35 patients, of which 8 (22.9%) attended a pre-clinic information session in which the importance of optimal nutrition was highlighted. On MVA, information session attendance was significantly associated with requesting a referral (OR: 11.1, 95% CI 1.12-1000, p=0.037) and successfully meeting with the oncology dietitian (OR: 5.88, 95% CI 1.00-33.3, p=0.049). CONCLUSION: PMDC teams should institute educational initiatives on the importance of optimal nutrition in order to increase patient engagement with nutrition services.


Subject(s)
Adenocarcinoma , Malnutrition , Pancreatic Neoplasms , Humans , Nutrition Assessment , Prospective Studies , Pancreatic Neoplasms/therapy , Nutritional Status , Malnutrition/diagnosis , Malnutrition/etiology , Malnutrition/therapy , Referral and Consultation , Pancreatic Neoplasms
16.
JAMA Netw Open ; 6(4): e236498, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37010873

ABSTRACT

This cohort study assesses the relative stability of median and mean survival time estimates reported in cancer clinical trials.


Subject(s)
Neoplasms , Humans , Survival Rate , Neoplasms/drug therapy , Survival Analysis
18.
Cancers (Basel) ; 15(4), 2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36831594

ABSTRACT

We aimed to evaluate the impact of time from stereotactic body radiation therapy (SBRT) to surgery on treatment outcomes and post-operative complications in patients with borderline resectable or locally advanced pancreatic cancer (BRPC/LAPC). We conducted a single-institutional retrospective analysis of patients with BRPC/LAPC treated from 2016 to 2021 with neoadjuvant chemotherapy followed by SBRT and surgical resection. Covariates were stratified by time from SBRT to surgery. A Cox regression model was used to identify variables associated with survival outcomes. In 171 patients with BRPC/LAPC, the median time from SBRT to surgery was 6.4 (range: 2.7-25.3) weeks. Hence, patients were stratified by the timing of surgery: ≥6 and <6 weeks after SBRT. In univariable Cox regression, surgery ≥6 weeks after SBRT was associated with improved local control (LC; HR 0.55, 95% CI 0.30-0.98; p = 0.042), whereas pathologic node positivity, elevated baseline CA19-9, and male sex were associated with inferior LC. In multivariable analysis, surgery ≥6 weeks (HR 0.46, 95% CI 0.25-0.85; p = 0.013), node positivity (HR 2.09, 95% CI 1.13-3.88; p = 0.019), and elevated baseline CA19-9 (HR 2.73, 95% CI 1.44-5.18; p = 0.002) remained independently associated with LC. Clavien-Dindo grade ≥3B complications occurred in 4 of 63 (6.3%) vs 5 of 99 (5.5%) patients undergoing surgery <6 weeks and ≥6 weeks after SBRT, respectively (p = 0.7). In summary, surgery ≥6 weeks after SBRT was associated with improved local control, and post-operative complication rates were low irrespective of surgical timing. Further investigation of the influence of surgical timing following radiotherapy is warranted.
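For readers unfamiliar with the modeling, this is roughly how such a multivariable Cox model is fit with lifelines. The data below are simulated with the reported directions of effect baked in, and all column names and magnitudes are my own assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 171  # cohort size matches the study; everything else is simulated

surgery_ge_6wk = rng.integers(0, 2, n)
node_positive = rng.integers(0, 2, n)
ca19_9_elevated = rng.integers(0, 2, n)

# Exponential event times with covariate effects in the reported directions
hazard = np.exp(-0.7 * surgery_ge_6wk + 0.7 * node_positive
                + 1.0 * ca19_9_elevated)
df = pd.DataFrame({
    "months_to_local_failure": rng.exponential(24.0 / hazard),
    "local_failure": (rng.uniform(size=n) < 0.6).astype(int),
    "surgery_ge_6wk": surgery_ge_6wk,
    "node_positive": node_positive,
    "ca19_9_elevated": ca19_9_elevated,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_local_failure", event_col="local_failure")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```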

19.
Arthroscopy ; 39(3): 590-591, 2023 03.
Article in English | MEDLINE | ID: mdl-36740283

ABSTRACT

The anterior cruciate ligament (ACL) is the most studied ligament in the knee and one of the most studied topics in orthopaedics, yet there is little consensus on the best options for surgical technique or graft choice. While there is little question that physical rehabilitation is one of the most important variables in the episode of care before and after ACL reconstruction (ACLR), recent research surveying orthopaedic surgeons demonstrates no consensus on how to rehabilitate ACLR patients or how to return them to sport safely and quickly. Seventy-two percent of surgeons prescribe "pre-hab" prior to ACLR, and 83% of surgeons use postoperative bracing, with most (55%) bracing for 3 to 6 weeks postoperatively. Patient-reported outcome measures (35%) and assessments of psychological readiness (23%) are not commonly used to progress patients through the stages of rehabilitation. When asked what they believe is the single most important factor in unrestricted return to sport, 52% of surgeons cited functional testing scores, 38% cited time since surgery, and 5% cited muscle strength. As for average time to return to full activity, 50% of surgeons waited until 9+ months, and 42% allowed return within 6 to 8 months. Reductions in practice variability have been shown, in orthopaedic surgery and other fields, to reduce the costs of care delivery and improve patient outcomes. With so much variability in ACLR rehabilitation protocols, the orthopaedic community would be wise to strive for consensus on evidence-based rehabilitation recommendations and to fill knowledge gaps with focused, high-quality research.


Subject(s)
Anterior Cruciate Ligament Injuries , Anterior Cruciate Ligament Reconstruction , Humans , Anterior Cruciate Ligament/surgery , Knee Joint/surgery , Anterior Cruciate Ligament Reconstruction/methods , Return to Sport/psychology , Reference Standards
20.
Transplant Direct ; 9(2): e1431, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36700065

ABSTRACT

Living liver donor obesity has been considered a relative contraindication to living donation, given its association with hepatic steatosis and the potential for poor donor and recipient outcomes. We investigated the association between donor body mass index (BMI) and donor and recipient posttransplant outcomes. Methods: We studied 66 living donors and their recipients who underwent living donor liver transplant at our center between 2013 and 2020. BMI was divided into 3 categories (<25, 25-29.9, and ≥30 kg/m2). Magnetic resonance imaging-derived proton density fat fraction was used to quantify steatosis. Donor outcomes included length of stay (LOS), emergency department visits within 90 d, hospital readmissions within 90 d, and complication severity. Recipient outcomes included LOS and in-hospital mortality. The Student t test was used to compare normally distributed variables, and Kruskal-Wallis tests were used for nonparametric data. Results: There was no difference in donor or recipient characteristics based on donor BMI. There was no significant difference in mean magnetic resonance imaging fat percentage among the 3 groups. Additionally, there was no difference in donor LOS (P = 0.058), emergency department visits (P = 0.64), or hospital readmissions (P = 0.66) across BMI categories. Donor complications occurred in 30 patients. There was no difference in postdonation complications across BMI categories (P = 0.19); however, there was a difference in wound complications, with the highest rate seen in the highest BMI group (0% versus 16% versus 37%; P = 0.041). Finally, there was no difference in recipient LOS (P = 0.83) or recipient in-hospital mortality (P = 0.29) across BMI categories. Conclusions: Selecting donors with BMI ≥30 kg/m2 can result in successful living donor liver transplantation; however, these donors are at risk for perioperative wound complications. Donor counseling and perioperative strategies to mitigate wound-related issues should be used when considering obese living donors.
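As an illustration of the nonparametric comparison the authors describe, here is a small sketch applying a Kruskal-Wallis test across the three BMI categories with scipy. The outcome, group sizes, and values are invented.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)
# Invented donor length-of-stay (days) for each BMI category
los_bmi_lt25 = rng.poisson(7, size=25)    # BMI < 25
los_bmi_25_30 = rng.poisson(7, size=25)   # BMI 25-29.9
los_bmi_ge30 = rng.poisson(8, size=16)    # BMI >= 30

stat, p = kruskal(los_bmi_lt25, los_bmi_25_30, los_bmi_ge30)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```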
