Results 1 - 20 of 55
1.
Hepatology; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38687634

ABSTRACT

Ensemble machine learning methods, like the superlearner, combine multiple models into a single one to enhance predictive accuracy. Here we explore the potential of the superlearner as a benchmarking tool for clinical risk prediction, illustrating the approach in identifying significant liver fibrosis among patients with non-alcoholic fatty liver disease (NAFLD). We used 23 demographic/clinical variables to train superlearner(s) on data from the NASH-CRN observational study (n=648) and validated models with data from the FLINT trial (n=270) and NHANES participants with NAFLD (n=1244). Comparing the superlearner's performance to existing models (FIB-4, NFS, Forns, APRI, BARD, and SAFE), it exhibited strong discriminative ability in the FLINT and NHANES validation sets, with AUCs of 0.79 (95% CI: 0.73-0.84) and 0.74 (95% CI: 0.68-0.79) respectively. Notably, the SAFE score performed similarly to the superlearner, both of which outperformed FIB-4, APRI, Forns, and BARD scores in the validation datasets. Surprisingly, the superlearner derived from 12 base models matched the performance of one with 90 base models. Overall, the superlearner, being the "best-in-class" ML predictor, excelled in detecting fibrotic NASH, and this approach can be used to benchmark the performance of conventional clinical risk prediction models.
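The weighted-combination step at the heart of a superlearner can be sketched in a few lines. This is an illustration of the general technique only; the labels and base-model probabilities below are synthetic placeholders, not the study's models or data.

```python
import numpy as np

# Toy out-of-fold predicted probabilities from three hypothetical base models
# (columns) for 200 hypothetical patients, plus true binary labels.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
preds = np.column_stack([
    np.clip(y + rng.normal(0, s, 200), 0.01, 0.99) for s in (0.3, 0.45, 0.6)
])

def log_loss(y, p):
    """Mean negative log-likelihood for binary outcomes."""
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Superlearner step: choose convex weights over the base models that minimize
# the (here: in-sample, for brevity) loss. A coarse grid search over the simplex.
best_w, best_loss = None, np.inf
grid = np.linspace(0, 1, 21)
for w1 in grid:
    for w2 in grid:
        if w1 + w2 > 1:
            continue
        w = np.array([w1, w2, 1 - w1 - w2])
        loss = log_loss(y, preds @ w)
        if loss < best_loss:
            best_w, best_loss = w, loss

# Because the grid includes the simplex corners, the weighted ensemble can do
# no worse than the best single base model on this criterion.
single_losses = [log_loss(y, preds[:, j]) for j in range(3)]
```

In the actual method the weights are chosen by cross-validation rather than in-sample, which is what makes the ensemble an honest benchmark.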

2.
Gastroenterology; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38552670

ABSTRACT

BACKGROUND & AIMS: Colorectal cancer (CRC) screening is highly effective but underused. Blood-based biomarkers (liquid biopsy) could improve screening participation. METHODS: Using our established Markov model, screening every 3 years with a blood-based test that meets minimum Centers for Medicare & Medicaid Services' thresholds (CMSmin) (CRC sensitivity 74%, specificity 90%) was compared with established alternatives. Test attributes were varied in sensitivity analyses. RESULTS: CMSmin reduced CRC incidence by 40% and CRC mortality by 52% vs no screening. These reductions were less profound than the 68%-79% and 73%-81%, respectively, achieved with multi-target stool DNA (Cologuard; Exact Sciences) every 3 years, annual fecal immunochemical testing (FIT), or colonoscopy every 10 years. Assuming the same cost as multi-target stool DNA, CMSmin cost $28,500/quality-adjusted life-year gained vs no screening, but FIT, colonoscopy, and multi-target stool DNA were less costly and more effective. CMSmin would match FIT's clinical outcomes if it achieved 1.4- to 1.8-fold FIT's participation rate. Advanced precancerous lesion (APL) sensitivity was a key determinant of a test's effectiveness. A paradigm-changing blood-based test (sensitivity >90% for CRC and 80% for APL; 90% specificity; cost ≤$120-$140) would be cost-effective vs FIT at comparable participation. CONCLUSIONS: CMSmin could contribute to CRC control by achieving screening in those who will not use established methods. Substituting blood-based testing for established effective CRC screening methods will require higher CRC and APL sensitivities that deliver programmatic benefits matching those of FIT. High APL sensitivity, which can result in CRC prevention, should be a top priority for screening test developers. APL detection should not be penalized by a definition of test specificity that focuses on CRC only.
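The cost-effectiveness comparisons above reduce to incremental cost-effectiveness ratios (ICERs) judged against a willingness-to-pay threshold. A minimal sketch of the arithmetic, with made-up per-person costs and QALYs chosen only for illustration (they are not the study's model inputs):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per quality-adjusted life-year gained (ICER)."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical per-person lifetime outcomes vs no screening (illustrative only).
no_screen = {"cost": 1000.0, "qaly": 20.00}
strategy  = {"cost": 1570.0, "qaly": 20.02}

cost_per_qalyg = icer(strategy["cost"], strategy["qaly"],
                      no_screen["cost"], no_screen["qaly"])
# A strategy is "dominated" when a comparator is both cheaper and more
# effective; dominated strategies are excluded from ICER rankings.
```

A strategy is then called cost-effective when its ICER falls below the chosen threshold (commonly $100,000 per QALY gained in US analyses).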

3.
Cancer; 130(6): 901-912, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38180788

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) incidence at ages <50 years is increasing worldwide. Screening initiation was lowered to 45 years in the United States. The cost-effectiveness of initiating CRC screening at 45 years in Israel was assessed with the aim of informing national policy and addressing internationally relevant questions. METHODS: A validated CRC screening model was calibrated to Israeli data and examined annual fecal immunochemical testing (FIT) or colonoscopy every 10 years from 45 to 74 years (FIT45-74 or Colo45-74) versus from 50 to 74 years (FIT50-74 or Colo50-74). The addition of a fourth colonoscopy at 75 years was explored, subanalyses were performed by sex/ethnicity, and resource demands were estimated. RESULTS: FIT50-74 and Colo50-74 reduced CRC incidence by 57% and 70% and mortality by 70% and 77%, respectively, versus no screening, with greater absolute impact in Jews/Other versus Arabs but comparable relative impact. FIT45-74 further reduced CRC incidence and mortality by an absolute 3% and 2%, respectively. With Colo45-74 versus Colo50-74, CRC cases and deaths increased slightly as three colonoscopies per lifetime shifted to 5 years earlier but mean quality-adjusted life-years gained (QALYGs) per person increased. FIT45-74 and Colo45-74 cost 23,800-53,900 new Israeli shekels (NIS)/QALYG and 110,600-162,700 NIS/QALYG, with the lowest and highest values among Jewish/Other men and Arab women, respectively. A fourth lifetime colonoscopy cost 48,700 NIS/QALYG. Lowering FIT initiation to 45 years with modest participation required 19,300 additional colonoscopies in the first 3 years. CONCLUSIONS: Beginning CRC screening at 45 years in Israel is projected to yield modest clinical benefits at acceptable costs per QALYG. Despite different estimates by sex/ethnicity, a uniform national policy is favored. These findings can inform Israeli guidelines and serve as a case study internationally.


Subjects
Colorectal Neoplasms, Early Detection of Cancer, Male, Humans, Female, United States, Middle Aged, Israel/epidemiology, Cost-Benefit Analysis, Colonoscopy, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/epidemiology, Occult Blood, Mass Screening
4.
Mayo Clin Proc; 98(9): 1335-1344, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37661141

ABSTRACT

OBJECTIVES: To analyze the impact of access to routine health care, as estimated by health insurance coverage, on hepatitis C virus (HCV) infection prevalence in US adults born after 1965 (post-baby boomer birth cohort [post-BBBC]) and to use the data to formulate strategies to optimize population screening for HCV. PATIENTS AND METHODS: Adult examinees in the National Health and Nutrition Examination Survey with available anti-HCV data were divided into era 1 (1999-2008) and era 2 (2009-2016). The prevalence of HCV infection, as defined by detectable serum HCV RNA, was determined in post-BBBC adults. In low prevalence groups, prescreening modalities were considered to increase the pretest probability. RESULTS: Of 16,966 eligible post-BBBC examinees, 0.5% had HCV infection. In both eras, more than 50% had no insurance. In era 2, HCV prevalence was 0.26% and 0.83% in those with and without insurance, respectively (P<.01). As a prescreening test, low alanine aminotransferase level (<23 U/L in women and 32 U/L in men) would identify 54% of post-BBBC adults with an extremely low (0.02%) HCV prevalence. Based on these data, a tiered approach that tests all uninsured directly for HCV and prescreens the insured with alanine aminotransferase would reduce the number to test by 56.5 million while missing less than 1% infections. CONCLUSION: For HCV elimination, passive "universal" screening in routine health care settings is insufficient, although the efficiency of screening may be improved with alanine aminotransferase prescreening. Importantly, for individuals with limited access to health care, proactive outreach programs for HCV screening are still needed.


Subjects
Hepatitis C, Adult, Male, Humans, Female, Alanine Transaminase, Nutrition Surveys, Hepatitis C/diagnosis, Hepatitis C/epidemiology, Antibodies, Health Facilities
5.
medRxiv; 2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37577485

ABSTRACT

Background and Aims: Ensemble machine learning (ML) methods can combine many individual models into a single 'super' model using an optimal weighted combination. Here we demonstrate how an underutilized ensemble model, the superlearner, can be used as a benchmark for model performance in clinical risk prediction. We illustrate this by implementing a superlearner to predict liver fibrosis in patients with non-alcoholic fatty liver disease (NAFLD). Methods: We trained a superlearner based on 23 demographic and clinical variables, with the goal of predicting stage 2 or higher liver fibrosis. The superlearner was trained on data from the Nonalcoholic Steatohepatitis Clinical Research Network observational study (NASH-CRN, n=648) and validated using data from participants in a randomized trial for NASH (the 'FLINT' trial, n=270) and from examinees with NAFLD who participated in the National Health and Nutrition Examination Survey (NHANES, n=1244). We compared the performance of the superlearner with that of existing models, including FIB-4, NFS, Forns, APRI, BARD, and SAFE. Results: In the FLINT and NHANES validation sets, the superlearner (derived from 12 base models) discriminated well between patients with and without significant fibrosis, with AUCs of 0.79 (95% CI: 0.73-0.84) and 0.74 (95% CI: 0.68-0.79), respectively. Among the existing scores considered, the SAFE score performed similarly to the superlearner, and both outperformed the FIB-4, APRI, Forns, and BARD scores in the validation datasets. A superlearner derived from 12 base models performed as well as one derived from 90 base models. Conclusions: The superlearner, which can be thought of as the 'best-in-class' ML prediction, outperformed most existing models commonly used in practice in detecting fibrotic NASH. The superlearner can be used to benchmark the performance of conventional clinical risk prediction models.

6.
Hepatology; 78(2): 540-546, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-36943091

ABSTRACT

BACKGROUND AND AIMS: Adolescents constitute a unique waitlist cohort that is distinct from younger children. Model for End-stage Liver Disease (MELD) 3.0, which was developed in an adult population of liver transplant candidates, is planned to replace MELD-Sodium in the current liver allocation system for both adults and adolescents aged 12-17. We evaluated the predictive performance of MELD-Sodium, MELD 3.0, and Pediatric End-stage Liver Disease for 90-day waitlist mortality risk among adolescent liver transplant registrants. APPROACH AND RESULTS: New waitlist registrations for primary liver transplants among individuals aged 12-17 and 18-25 for comparison were identified using Organ Procurement and Transplantation Network (OPTN) data from November 17, 2004, to December 31, 2021. The predictive performance of the current and proposed MELD and Pediatric End-stage Liver Disease scores was assessed using Harrell's concordance (c) statistic. There were 1238 eligible listings for adolescents aged 12-17 and 1740 young adults aged 18-25. In the adolescent group, 90-day survival was 97.8%, compared with 95.9% in those aged 18-25 (log-rank p = 0.005), with no significant differences when stratified by sex or indication. Among adolescents, increasing MELD 3.0 was associated with an increased hazard of mortality (HR = 1.27, 95% CI: 1.18-1.37), and the c-statistic for 90-day waitlist survival using MELD 3.0 was 0.893, compared with 0.871 using MELD-Sodium and 0.852 using Pediatric End-stage Liver Disease. CONCLUSIONS: The discriminative ability of MELD 3.0 to rank adolescents according to the risk of death within 90 days was robust. Although MELD 3.0 was initially developed and validated in adults, it may also improve the prediction of waitlist mortality in adolescents and better represent their urgency for liver transplants.


Subjects
End Stage Liver Disease, Liver Transplantation, Tissue and Organ Procurement, Young Adult, Humans, Adolescent, Child, Adult, End Stage Liver Disease/surgery, Severity of Illness Index, Waiting Lists, Sodium
7.
Gastroenterology; 164(6): 1029-1030, 2023 May.
Article in English | MEDLINE | ID: mdl-31589871
8.
Hepatology; 77(1): 256-267, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-35477908

ABSTRACT

BACKGROUND: NAFLD is common in primary care. Liver fibrosis stage 2 or higher (≥F2) increases future risk of morbidity and mortality. We developed and validated a score to aid in the initial assessment of liver fibrosis for NAFLD in primary care. METHODS: Data from patients with biopsy-proven NAFLD were extracted from the NASH Clinical Research Network observational study (n = 676). Using logistic regression and machine-learning methods, we constructed prediction models to distinguish ≥F2 from F0/1. The models were tested in participants in a trial ("FLINT," n = 280) and local patients with NAFLD with magnetic resonance elastography data (n = 130). The final model was applied to examinees in the National Health and Nutrition Examination Survey (NHANES) III (n = 11,953) to correlate with long-term mortality. RESULTS: A multivariable logistic regression model was selected as the Steatosis-Associated Fibrosis Estimator (SAFE) score, which consists of age, body mass index, diabetes, platelets, aspartate and alanine aminotransferases, and globulins (total serum protein minus albumin). The model yielded areas under receiver operating characteristic curves ≥0.80 in distinguishing F0/1 from ≥F2 in the testing data sets, consistently higher than those of the Fibrosis-4 and NAFLD Fibrosis Scores. The negative predictive values in ruling out ≥F2 at a SAFE score of 0 were 88% and 92% in the two testing sets. In the NHANES III set, survival up to 25 years of subjects with SAFE < 0 was comparable to that of those without steatosis (p = 0.34), whereas increasing SAFE scores correlated with shorter survival, with an adjusted HR of 1.53 (p < 0.01) for subjects with SAFE > 100. CONCLUSION: The SAFE score, which uses widely available variables to estimate liver fibrosis in patients diagnosed with NAFLD, may be used in primary care to recognize low-risk NAFLD.
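The SAFE score is benchmarked against simpler fibrosis indices whose formulas are widely published; two of them, FIB-4 and APRI, are easy to reproduce for context. The sketch below is not the SAFE model itself (its coefficients are in the paper); the patient values are hypothetical.

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)), as commonly published."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def apri(ast_u_l, platelets_10e9_l, ast_uln=40.0):
    """APRI = 100 x (AST / upper limit of normal) / platelets (x10^9/L)."""
    return 100.0 * (ast_u_l / ast_uln) / platelets_10e9_l

# Hypothetical patient: 55 y, AST 48 U/L, ALT 64 U/L, platelets 180 x10^9/L.
f = fib4(55, 48, 64, 180)   # ~1.83
a = apri(48, 180)           # ~0.67
```

Published cutoffs (e.g., FIB-4 < 1.30 to rule out advanced fibrosis in patients under 65) should be taken from the primary literature rather than this sketch.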


Subjects
Non-alcoholic Fatty Liver Disease, Humans, Non-alcoholic Fatty Liver Disease/complications, Non-alcoholic Fatty Liver Disease/diagnosis, Nutrition Surveys, Liver Cirrhosis/pathology, Fibrosis, Biopsy, Primary Health Care, Liver/pathology
10.
Hepatology; 77(3): 851-861, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36052665

ABSTRACT

BACKGROUND AND AIMS: Since the implementation of the model for end-stage liver disease (MELD) score to determine waitlist priority for liver transplant (LT) in 2002, the score has been capped at 40. Recently, the MELD 3.0 score was proposed to improve upon MELD-Na. Here, we examine waitlist mortality and LT outcomes in patients with MELD 3.0 ≥ 40 to assess the potential impact of uncapping the score. APPROACH AND RESULTS: Adult waitlist registrations for LT from January 2016 to December 2021 were identified in the registry data from the Organ Procurement and Transplant Network. All MELD 3.0 scores were calculated at registration and thereafter. Waitlist mortality for up to 30 days was calculated as well as post-LT survival. There were 54,060 new waitlist registrations during the study period, of whom 2820 (5.2%) had MELD 3.0 ≥ 40 at listing. The 30-day waitlist mortality was high in these patients, yet it increased further in proportion with MELD 3.0 up to a score of 55 with 30-day mortality of 58.3% for MELD 3.0 of 40-44 and 82.4% for ≥50. The multivariable hazard ratio was 1.13 for each point of MELD 3.0, adjusting for several variables including acute-on-chronic liver failure. The number of LT recipients with MELD 40 at transplant increased from 155 in 2002 to 752 in 2021. Posttransplant survival was comparable across MELD strata including MELD of 35-39. CONCLUSION: MELD 3.0 scores beyond 40 are associated with increasing waitlist mortality without adversely affecting posttransplant outcome. Uncapping the MELD score in waitlist candidates may lead to greater survival benefit from LT.


Subjects
End Stage Liver Disease, Liver Transplantation, Tissue and Organ Procurement, Adult, Humans, Severity of Illness Index, Liver Transplantation/adverse effects, Proportional Hazards Models, Waiting Lists
11.
Clin Gastroenterol Hepatol; 21(2): 507-519, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35940514

ABSTRACT

BACKGROUND & AIMS: Overweight and obese persons have not only elevated rates of colorectal cancer (CRC), but also higher competing mortality and healthcare spending. We examined the cost-effectiveness of intensified CRC screening in overweight and obese persons. METHODS: We adapted our validated decision analytic model of CRC screening to compare screening starting at 45 or 40 years of age instead of at 50 years of age, or shortening screening intervals, in women and men with body mass index (BMI) ranging from normal to grade III obesity. Strategies included colonoscopy every 10 years (Colo10) or every 5 years (Colo5), or annual fecal immunochemical test. RESULTS: Without screening, sex-specific total CRC deaths were similar for persons with overweight or obesity I-III, reflecting the counterbalancing of higher CRC risk by lower life expectancy as BMI rises. For all BMI and sex groups, Colo10 starting at 45 years of age or FIT starting at 40 years of age were cost-effective at a threshold of $100,000 per quality-adjusted life year gained. Colo10 starting at 40 years of age was cost-effective only for men with obesity II-III, at $93,300 and $80,400 per quality-adjusted life year gained, respectively. Shifting Colo10 to earlier starting ages was always preferred over Colo5 starting at later ages. Results were robust in sensitivity analysis, including varying all-cause mortality, complication, and BMI-specific CRC risks. CONCLUSIONS: CRC screening starting at 45 years of age with colonoscopy, or at 40 years of age with FIT, appears cost-effective for women and men across the range of BMI. In men with obesity II-III, who have the highest CRC but also all-cause mortality risks, colonoscopy starting at 40 years of age appears cost-effective. It remains to be decided whether BMI should be used as a single predictor or incorporated into a multivariable tool to tailor CRC screening.


Subjects
Colorectal Neoplasms, Overweight, Male, Humans, Female, Middle Aged, Adult, Cost-Benefit Analysis, Overweight/complications, Early Detection of Cancer/methods, Colonoscopy, Obesity/complications, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/prevention & control, Occult Blood, Mass Screening/methods
12.
Clin Gastroenterol Hepatol; 20(12): 2895-2904.e4, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35580769

ABSTRACT

BACKGROUND AND AIMS: All major U.S. guidelines now endorse average-risk colorectal cancer (CRC) screening at 45-49 years of age. Concerns exist that endoscopic capacity may be strained, that low-risk persons may self-select for screening, and that calculations of the adenoma detection rate may be diluted. We analyzed age-specific screening colonoscopy volumes and lesion detection rates before vs after the endorsement of CRC screening at 45-49 years of age. METHODS: We compared colonoscopy volumes and lesion detection rates in our healthcare system during period 1 (October 2017 to December 2018), before the first change in guidelines, vs period 2 (January 2019 to August 2021), the era of new guidelines. RESULTS: The proportion of first-time screening colonoscopies performed in 45- to 49-year-olds increased from 3.5% to 11.6% (relative risk, 3.36; 95% CI, 2.45-4.61). The period 2 detection rates for adenoma, advanced adenoma, sessile serrated lesion, advanced sessile serrated lesion, adenomas per colonoscopy, and lesions per colonoscopy were very similar for 45- to 49-year-olds (34.3%, 6.3%, 8.6%, 2.9%, 0.58, and 0.69, respectively) and 50- to 54-year-olds (38.2%, 5.8%, 9.4%, 3.0%, 0.63, and 0.76, respectively) at first-time screening, and for 60- to 64-year-olds at rescreening (33.4%, 6.1%, 7.2%, 2.3%, 0.61, and 0.70, respectively). All detection rates, adenomas per colonoscopy, and lesions per colonoscopy increased from period 1 to period 2 (eg, overall adenoma detection rate 35.1% vs 42.6%; P < .0001), without any decreases among 45- to 49-year-olds. CONCLUSIONS: In our healthcare system, a lower CRC screening initiation age has modestly affected colonoscopy volume by age without compromising screening yield. Lesion detection rates, including for advanced adenomas, in average-risk 45- to 49-year-olds approximate those in 50- to 54-year-olds at first-time screening and 60- to 64-year-olds at rescreening. National monitoring is needed to assess fully the impact of lowering the CRC screening initiation age.


Subjects
Adenoma, Colonic Polyps, Colorectal Neoplasms, Humans, Middle Aged, Aged, Early Detection of Cancer, Retrospective Studies, Colonoscopy, Adenoma/diagnosis, Adenoma/pathology, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/pathology, Colonic Polyps/diagnosis
13.
Clin Gastroenterol Hepatol; 20(1): 230-232, 2022 Jan.
Article in English | MEDLINE | ID: mdl-33285291

ABSTRACT

Persistent serum alanine aminotransferase (ALT) elevation in nucleotide/nucleoside analogue (NA)-treated patients with chronic hepatitis B (CHB) has been associated with unfavorable long-term outcomes.1 It has been consistently shown that a higher proportion of patients receiving tenofovir alafenamide (TAF) achieve normal ALT in comparison with recipients of tenofovir disoproxil fumarate,2-5 the mechanism for which remains unknown.2


Subjects
Diabetes Mellitus, Hepatitis B, Chronic, Alanine/therapeutic use, Alanine Transaminase, Antiviral Agents/therapeutic use, Diabetes Mellitus/drug therapy, Hepatitis B, Chronic/drug therapy, Humans, Tenofovir/analogs & derivatives
14.
Clin Gastroenterol Hepatol; 20(5): 1142-1150.e4, 2022 May.
Article in English | MEDLINE | ID: mdl-34358718

ABSTRACT

BACKGROUND & AIMS: Policy changes in the United States have lengthened overall waiting times for patients with hepatocellular carcinoma (HCC). We investigated temporal trends in utilization of locoregional therapy (LRT) and associated waitlist outcomes among liver transplant (LT) candidates in the United States. METHODS: Data for primary adult LT candidates listed from 2003 to 2018 who received HCC exception were extracted from the Organ Procurement and Transplantation Network database. Explant histology was examined, and multivariable competing risk analysis was used to evaluate the association between LRT type and waitlist dropout. RESULTS: There were 31,609 eligible patients with at least 1 approved HCC exception, and 34,610 treatments among 24,145 LT candidates. The proportion with at least 1 LRT recorded increased from 42.3% in 2003 to 92.4% in 2018. Chemoembolization remains the most frequent type, followed by thermal ablation, with a notable increase in radioembolization from 3% in 2013 to 19% in 2018. An increased incidence of LRT was observed among patients with tumor burden beyond Milan criteria, higher α-fetoprotein level, and more compensated liver disease. Receipt of any type of LRT was associated with a lower risk of waitlist dropout; there was no significant difference by number of LRTs. In inverse probability of treatment weighting-adjusted analysis, radioembolization or ablation as the first LRT was associated with a reduced risk of waitlist dropout compared with chemoembolization. CONCLUSIONS: In a large nationwide cohort of LT candidates with HCC, LRT, and in particular radioembolization, increasingly was used to bridge to LT. Patients with greater tumor burden and those with more compensated liver disease received more treatments while awaiting LT. Bridging LRT was associated with a lower risk of waitlist dropout.


Subjects
Carcinoma, Hepatocellular, Liver Neoplasms, Liver Transplantation, Adult, Carcinoma, Hepatocellular/epidemiology, Carcinoma, Hepatocellular/pathology, Carcinoma, Hepatocellular/therapy, Humans, Liver Neoplasms/epidemiology, Liver Neoplasms/pathology, Liver Neoplasms/therapy, Retrospective Studies, Treatment Outcome, Tumor Burden, United States/epidemiology, Waiting Lists
15.
Am J Gastroenterol; 116(12): 2430-2445, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34693917

ABSTRACT

INTRODUCTION: The rates of serious cardiac, neurologic, and pulmonary events attributable to colonoscopy are poorly characterized, and background event rates are usually not accounted for. METHODS: We performed a multistate population-based study using changepoint analysis to determine the rates and timing of serious gastrointestinal and nongastrointestinal adverse events associated with screening/surveillance colonoscopy, including analyses by age (45 to <55, 55 to <65, 65 to <75, and ≥75 years). Among 4.5 million persons in the Ambulatory Surgery and Services Databases of California, Florida, and New York who underwent screening/surveillance colonoscopy in 2005-2015, we ascertained serious postcolonoscopy events in excess of background rates in the Emergency Department (SEDD) and Inpatient (SID) Databases. RESULTS: Most serious nongastrointestinal postcolonoscopy events were expected based on the background rate and not associated with colonoscopy itself. However, associated nongastrointestinal events predominated over gastrointestinal events at ages ≥65 years, including more myocardial infarctions plus ischemic strokes than perforations at ages ≥75 years (361 [95% confidence interval (CI), 312-419] plus 1,279 [95% CI, 1,182-1,384] vs 912 [95% CI, 831-1,002] per million). At all ages, the observed-to-expected ratios for days 0-7, 0-30, and 0-60 after colonoscopy were substantially >1 for gastrointestinal bleeding and perforation, but only minimally >1 for most nongastrointestinal complications. Risk periods ranged from 1 to 125 days, depending on complication type and age. No excess postcolonoscopy in-hospital deaths were observed. DISCUSSION: Although crude counts substantially overestimate the nongastrointestinal events associated with colonoscopy, nongastrointestinal complications exceed bleeding and perforation risk in older persons. The inability to ascertain modifications to antiplatelet therapy was a study limitation. Our results can inform benefit-to-risk determinations for preventive colonoscopy.


Subjects
Colonoscopy/adverse effects, Gastrointestinal Hemorrhage/epidemiology, Inpatients/statistics & numerical data, Intestinal Perforation/epidemiology, Mass Screening/methods, Population Surveillance, Age Distribution, Age Factors, Aged, Aged, 80 and over, Female, Gastrointestinal Hemorrhage/etiology, Humans, Incidence, Intestinal Perforation/etiology, Male, Middle Aged, Retrospective Studies, Risk Factors
16.
Gastroenterology; 161(6): 1887-1895.e4, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34481845

ABSTRACT

BACKGROUND & AIMS: The Model for End-Stage Liver Disease (MELD) has been established as a reliable indicator of short-term survival in patients with end-stage liver disease. The current version (MELDNa), consisting of the international normalized ratio and serum bilirubin, creatinine, and sodium, has been used to determine organ allocation priorities for liver transplantation in the United States. The objective was to optimize MELD further by taking into account additional variables and updating coefficients with contemporary data. METHODS: All candidates registered on the liver transplant wait list in the US national registry from January 2016 through December 2018 were included. Uni- and multivariable Cox models were developed to predict survival up to 90 days after wait list registration. Model fit was tested using the concordance statistic (C-statistic) and reclassification, and the Liver Simulated Allocation Model was used to estimate the impact of replacing MELDNa with the new model. RESULTS: The final multivariable model was characterized by (1) additional variables of female sex and serum albumin, (2) interactions between bilirubin and sodium and between albumin and creatinine, and (3) an upper bound for creatinine at 3.0 mg/dL. The final model (MELD 3.0) had better discrimination than MELDNa (C-statistic, 0.869 vs 0.862; P < .01). Importantly, MELD 3.0 correctly reclassified a net of 8.8% of decedents to a higher MELD tier, affording them a meaningfully higher chance of transplantation, particularly in women. In the Liver Simulated Allocation Model analysis, MELD 3.0 resulted in fewer wait list deaths compared to MELDNa (7788 vs 7850; P = .02). CONCLUSION: MELD 3.0 affords more accurate mortality prediction in general than MELDNa and addresses determinants of wait list outcomes, including the sex disparity.
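The abstract names the MELD 3.0 inputs (female sex, bilirubin, sodium, INR, creatinine capped at 3.0 mg/dL, albumin, and two interactions) but not the coefficients. The sketch below uses the coefficients as commonly cited from the published MELD 3.0 model; treat it as illustrative and verify against the paper and current OPTN policy before any real use.

```python
import math

def meld_3_0(female, bilirubin, sodium, inr, creatinine, albumin):
    """MELD 3.0 sketch (coefficients as commonly cited; verify before use).

    Units: bilirubin and creatinine mg/dL, sodium mEq/L, albumin g/dL.
    """
    # Bounds described for the model: Na 125-137, creatinine 1.0-3.0,
    # albumin 1.5-3.5, bilirubin and INR floored at 1.0.
    bili = max(bilirubin, 1.0)
    na = min(max(sodium, 125.0), 137.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine, 1.0), 3.0)
    alb = min(max(albumin, 1.5), 3.5)
    score = (
        1.33 * (1 if female else 0)
        + 4.56 * math.log(bili)
        + 0.82 * (137 - na)
        - 0.24 * (137 - na) * math.log(bili)   # bilirubin-sodium interaction
        + 9.09 * math.log(inr)
        + 11.14 * math.log(cr)
        + 1.85 * (3.5 - alb)
        - 1.83 * (3.5 - alb) * math.log(cr)    # albumin-creatinine interaction
        + 6.0
    )
    return round(score)
```

With all labs at their healthy bounds the score floors at 6, and the 1.33-point female-sex term is what drives the reclassification of women described in the abstract.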


Subjects
Decision Support Techniques, End Stage Liver Disease/diagnosis, Liver Transplantation, Waiting Lists, Bilirubin/blood, Biomarkers/blood, Clinical Decision-Making, Creatinine/blood, End Stage Liver Disease/blood, End Stage Liver Disease/mortality, End Stage Liver Disease/surgery, Female, Healthcare Disparities, Humans, International Normalized Ratio, Liver Transplantation/adverse effects, Liver Transplantation/mortality, Male, Middle Aged, Predictive Value of Tests, Prognosis, Registries, Risk Assessment, Risk Factors, Severity of Illness Index, Sex Factors, Sodium/blood, Time Factors, United States, Waiting Lists/mortality
18.
Clin Gastroenterol Hepatol; 19(9): 1873-1882, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33895358

ABSTRACT

BACKGROUND: The adenoma detection rate at screening (ADR) predicts interval colorectal cancer. Monitoring other lesion detection rates and colonoscopy indications has been proposed. We developed a comprehensive, automated colonoscopy audit program based on standardized clinical documentation, explored detection rates across indications, and developed the Adenoma Detection Rate - Extended to all Screening / Surveillance (ADR-ESS) score. METHODS: In a prospective cohort study, we calculated overall and advanced adenoma and sessile serrated lesion (SSL) detection rates among 15,253 colonoscopies by 35 endoscopists from 4 endoscopy units across all colonoscopy indications. We explored correlations between detection rates, and the precision and stability of ADR-ESS versus ADR. RESULTS: The overall "screening, first" ADR was 36.3% (95% confidence interval [CI], 34.5%-38.1%). The adenoma detection rate was lower for "screening, not first" (relative rate [RR], 0.80; 95% CI, 0.74-0.87) and "family history" (RR, 0.84; 95% CI, 0.74-0.96), and higher for "surveillance" (RR, 1.22; 95% CI, 1.15-1.31) and "follow-up, FIT" (RR, 1.21; 95% CI, 1.07-1.37). For "screening, first," the detection rates for advanced adenoma, SSL, and advanced SSL were 6.7% (95% CI, 5.7%-7.7%), 7.2% (95% CI, 6.2%-8.2%), and 2.6% (95% CI, 2.0%-3.2%), respectively. Adenoma and SSL detection were correlated (r = 0.44; P = .008). ADR-ESS had substantially narrower confidence intervals and less period-to-period variability than ADR, and was not improved by weighting for indication volume and correction for detection by indication. CONCLUSIONS: Comprehensive, automated colonoscopy audit based on standardized clinical documentation is feasible. Adenoma detection is a fair but imperfect proxy for SSL detection. ADR-ESS increases the precision of adenoma detection assessments and emphasizes quality across colonoscopy indications.
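One reason ADR-ESS has narrower confidence intervals than ADR is simply that pooling exams across indications increases the denominator. That effect can be seen with a standard Wilson score interval for a detection rate; the counts below are illustrative, not the study's data.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Same underlying detection rate (~36.3%), different denominators:
lo1, hi1 = wilson_ci(109, 300)    # hypothetical screening-only exams
lo2, hi2 = wilson_ci(436, 1200)   # hypothetical pooled screening + surveillance
# The pooled interval is roughly half as wide, so period-to-period estimates
# of an endoscopist's rate fluctuate less.
```

The study's additional finding is that weighting by indication volume did not improve the score beyond this precision gain.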


Subjects
Adenoma, Colorectal Neoplasms, Adenoma/diagnosis, Colonoscopy, Colorectal Neoplasms/diagnosis, Early Detection of Cancer, Humans, Mass Screening, Prospective Studies
19.
Inflamm Bowel Dis ; 27(10): 1602-1609, 2021 10 18.
Article in English | MEDLINE | ID: mdl-33300561

ABSTRACT

BACKGROUND: Patients with primary sclerosing cholangitis (PSC) are at increased risk of developing acute cholangitis. The majority of patients with PSC have comorbid inflammatory bowel disease, and many take immunosuppressive medications. The epidemiological risks for the development of acute cholangitis in patients with PSC, including the impact of immunosuppressive therapy, are unknown. METHODS: We conducted a 2-center, retrospective cohort study using data from 228 patients at Stanford University Medical Center and Santa Clara Valley Medical Center (CA), a county health care system. Patient demographics, medications, PSC disease severity, and inflammatory bowel disease status were extracted. Using stepwise variable selection, we included demographic and covariate predictors in the multiple logistic regression model assessing risk factors for cholangitis. Time-to-event analysis was performed to evaluate specific immunosuppressive medications and development of cholangitis. RESULTS: Thirty-one percent of patients had at least 1 episode of acute cholangitis (n = 72). Anti-tumor necrosis factor (TNF) therapy was associated with increased odds of acute cholangitis (odds ratio, 7.29; 95% confidence interval, 2.63-12.43), whereas immunomodulator use was protective against acute cholangitis (odds ratio, 0.23; 95% confidence interval, 0.05-0.76). Anti-TNF therapy was associated with shorter time to cholangitis, with a median time of 28.4 months; in contrast, only 11.1% of patients who were prescribed immunomodulators developed cholangitis over the same period (P < 0.001). CONCLUSIONS: Our observations suggest that classes of immunosuppressive medications differentially modify the odds of acute cholangitis: biologic (anti-TNF) therapy was associated with significantly higher odds of developing acute cholangitis, whereas immunomodulator therapy appeared protective. These findings may help guide physicians in selecting appropriate immunosuppressive therapy.
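For intuition about the odds ratios quoted above, a crude unadjusted OR from a 2×2 exposure-outcome table with a Woolf (log-scale) confidence interval can be sketched as follows. The counts here are invented for illustration; the study's estimates came from a multivariable logistic regression with stepwise variable selection, not this crude calculation:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio for a 2x2 table with a Woolf 95% CI.
    a = exposed with event, b = exposed without,
    c = unexposed with event, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: 20 of 35 anti-TNF users vs 30 of 150 non-users
# developed cholangitis.
or_, lo, hi = odds_ratio_ci(20, 15, 30, 120)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

An adjusted OR from the regression can differ substantially from this crude estimate when confounders (e.g., PSC severity) are unevenly distributed across exposure groups.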


Subjects
Sclerosing Cholangitis, Inflammatory Bowel Diseases, Sclerosing Cholangitis/epidemiology, Humans, Odds Ratio, Retrospective Studies, Tumor Necrosis Factor Inhibitors
20.
Liver Transpl ; 27(5): 684-698, 2021 05.
Article in English | MEDLINE | ID: mdl-33306254

ABSTRACT

The incidence of hepatocellular carcinoma (HCC) is growing in the United States, especially among the elderly. Older patients are increasingly undergoing liver transplantation for HCC, but the impact of advancing age on long-term posttransplant outcomes is not clear. To study this, we used data on 4980 patients from the US Multicenter HCC Transplant Consortium. We divided the patients into 4 groups by age at transplantation: 18 to 64 years (n = 4001), 65 to 69 years (n = 683), 70 to 74 years (n = 252), and ≥75 years (n = 44). There were no differences in HCC tumor stage, type of bridging locoregional therapy, or explant residual tumor between the groups. On multivariable analysis, older age remained an independent and significant predictor of worse overall survival after adjusting for demographic, etiologic, and cancer-related factors. A dose-response effect of age on survival was observed, with every 5-year increase in age beyond 50 years associated with an absolute 8.3% increase in the mortality rate. Competing risk analysis revealed that older patients experienced higher rates of non-HCC-related mortality (P = 0.004), but not HCC-related death (P = 0.24). To delineate the precise causes of death, we further analyzed a single-center cohort of patients transplanted for HCC (n = 302). Patients older than 65 years had a higher incidence of de novo cancer after transplantation (18.1% versus 7.6%; P = 0.006) and higher overall cancer-related mortality (14.3% versus 6.6%; P = 0.03). Even carefully selected elderly patients with HCC have significantly worse posttransplant survival, driven mostly by non-HCC-related causes. Minimizing immunosuppression and closer surveillance for de novo cancers could improve outcomes in elderly transplant recipients with HCC.
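Cause-specific mortality of the kind reported here (non-HCC-related versus HCC-related death) is usually summarized with a cumulative incidence function. A minimal sketch on invented event data, ignoring censoring for clarity (a real analysis, like the study's, would use a proper competing-risks estimator such as Aalen-Johansen):

```python
def cumulative_incidence(times, causes, t, cause):
    """Fraction of subjects with an event of the given cause by time t.
    causes: 0 = event-free at last follow-up, 1/2/... = competing event types.
    Valid only with complete follow-up (no censoring before t)."""
    return sum(1 for ti, ci in zip(times, causes) if ti <= t and ci == cause) / len(times)

# Invented follow-up data (years; cause: 0 = alive, 1 = HCC death, 2 = non-HCC death).
times  = [1.0, 2.5, 3.0, 4.0, 5.0, 5.0, 6.0, 7.5, 8.0, 9.0]
causes = [2,   1,   0,   2,   2,   0,   1,   0,   2,   0]
print(cumulative_incidence(times, causes, 5.0, 2))  # non-HCC deaths by year 5
```

Because the two causes compete, their cumulative incidences at any time sum to at most 1, which is why rates for each cause must be estimated jointly rather than with separate Kaplan-Meier curves.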


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Liver Transplantation, Aged, Hepatocellular Carcinoma/epidemiology, Hepatocellular Carcinoma/surgery, Humans, Liver Neoplasms/epidemiology, Liver Neoplasms/surgery, Liver Transplantation/adverse effects, Middle Aged, Retrospective Studies, Risk Assessment, Survival Rate, United States/epidemiology