Results 1 - 20 of 49
1.
Stat Med ; 43(8): 1615-1626, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38345148

ABSTRACT

Incorporating historical data into a current data analysis can improve estimation of parameters shared across both datasets and increase the power to detect associations of interest while reducing the time and cost of new data collection. Several methods for prior distribution elicitation have been introduced to allow for the data-driven borrowing of historical information within a Bayesian analysis of the current data. We propose scaled Gaussian kernel density estimation (SGKDE) prior distributions as potentially more flexible alternatives. SGKDE priors directly use posterior samples collected from a historical data analysis to approximate probability density functions whose variances depend on the degree of similarity between the historical and current datasets; these scaled densities then serve as prior distributions in the current data analysis. We compare the performance of SGKDE priors with that of some existing approaches using a simulation study. Data from a recently completed phase III clinical trial of a maternal vaccine for respiratory syncytial virus are used to further explore the properties of SGKDE priors when designing a new clinical trial while incorporating historical data. Overall, both studies suggest that the new approach results in improved parameter estimation and power in the current data analysis compared to the considered existing methods.
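As a rough sketch of the scaled-KDE idea, the snippet below builds a prior from simulated historical posterior draws with scipy's gaussian_kde and widens it when the datasets are judged dissimilar; the scaling rule and the "similarity" weight are illustrative assumptions, not the article's exact construction.

```python
# Illustrative sketch only: approximate a prior for a parameter from historical
# posterior draws with a Gaussian KDE, then widen it when the historical and
# current data disagree. The scaling rule here is a placeholder assumption,
# not the exact SGKDE construction from the article.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Pretend these are MCMC draws of a treatment effect from the historical analysis.
historical_draws = rng.normal(loc=0.30, scale=0.10, size=5000)

def sgkde_prior(draws, similarity):
    """Return a KDE-based prior density.

    similarity in (0, 1]: 1 = historical and current data look alike
    (borrow fully); smaller values inflate the kernel bandwidth so the
    prior is flatter and borrows less.
    """
    kde = gaussian_kde(draws)
    kde.set_bandwidth(bw_method=kde.factor / np.sqrt(similarity))  # widen if dissimilar
    return kde

prior = sgkde_prior(historical_draws, similarity=0.5)

# Use the prior, e.g., to evaluate an unnormalized posterior on a grid.
theta_grid = np.linspace(-0.2, 0.8, 401)
print(prior(theta_grid)[:3])  # prior density at the first few grid points
```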


Subject(s)
Models, Statistical , Research Design , Humans , Bayes Theorem , Clinical Trials as Topic , Computer Simulation , Sample Size
2.
Clin Trials ; 21(2): 242-256, 2024 04.
Article in English | MEDLINE | ID: mdl-37927102

ABSTRACT

BACKGROUND: Issues with specification of margins, adherence, and analytic population can potentially bias results toward the alternative in randomized noninferiority pragmatic trials. To investigate this potential for bias, we conducted a targeted search of the medical literature to examine how noninferiority pragmatic trials address these issues. METHODS: An Ovid MEDLINE database search was performed identifying publications in New England Journal of Medicine, Journal of the American Medical Association, Lancet, or British Medical Journal published between 2015 and 2021 that included the words "pragmatic" or "comparative effectiveness" and "noninferiority" or "non-inferiority." Our search identified 14 potential trials, of which 12 met our inclusion criteria (11 individually randomized, 1 cluster-randomized). RESULTS: Eleven trials had results that met the criteria established for noninferiority. Noninferiority margins were prespecified for all trials; all but two trials provided justification of the margin. Most trials did some monitoring of treatment adherence. All trials conducted intent-to-treat or modified intent-to-treat analyses along with per-protocol analyses, and these analyses reached similar conclusions. Only two trials included all randomized participants in the primary analysis; one of these used multiple imputation for missing data. The percentage excluded from primary analyses ranged from ∼2% to 30%. Reasons for exclusion included randomization in error, nonadherence, not receiving assigned treatment, death, withdrawal, loss to follow-up, and incomplete data. CONCLUSION: Specification of margins, adherence, and analytic population require careful consideration to prevent bias toward the alternative in noninferiority pragmatic trials. Although separate guidance has been developed for noninferiority trials and for pragmatic trials, the two are not compatible when conducting a noninferiority pragmatic trial. Hence, these trials should probably not be done in their current format without developing new guidelines.
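For context, the basic noninferiority decision rule at issue in these trials can be sketched in a few lines; the failure counts and the 5-percentage-point margin below are hypothetical.

```python
# Hypothetical illustration of the noninferiority decision rule discussed
# above: declare noninferiority if the upper confidence bound of the
# (new - standard) difference in failure risk stays below a prespecified margin.
import math

def noninferior(failures_new, n_new, failures_std, n_std, margin, z=1.96):
    p_new, p_std = failures_new / n_new, failures_std / n_std
    diff = p_new - p_std                       # positive = new arm does worse
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    upper = diff + z * se                      # upper bound of two-sided 95% CI
    return diff, upper, upper < margin

# Hypothetical data: 10% vs 9% failure, margin of 5 percentage points.
print(noninferior(50, 500, 45, 500, margin=0.05))
```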


Subject(s)
Research Design , United States , Humans , Bias , Intention to Treat Analysis
3.
Gastrointest Endosc ; 99(5): 822-825.e1, 2024 May.
Article in English | MEDLINE | ID: mdl-38103747

ABSTRACT

BACKGROUND AND AIMS: Plasma levels of renalase decrease in acute experimental pancreatitis. We aimed to determine if decreases in plasma renalase levels after ERCP predict the occurrence of post-ERCP pancreatitis (PEP). METHODS: In this prospective cohort study conducted at a tertiary hospital, plasma renalase was determined before ERCP (baseline) and at 30 and 60 minutes after ERCP. Native renalase levels, acidified renalase, and native-to-acidified renalase proportions were analyzed over time using a longitudinal regression model. RESULTS: Among 273 patients, 31 developed PEP. Only 1 PEP patient had a baseline native renalase >6.0 µg/mL, whereas 38 of 242 patients without PEP did; thus, a baseline native renalase ≤6.0 µg/mL predicted PEP with a sensitivity of 97% (30/31) and a specificity of 16% (38/242). Longitudinal models did not show differences over time between groups. CONCLUSIONS: Baseline native renalase levels are very sensitive for predicting PEP. Further studies are needed to determine the potential clinical role of renalase in predicting and preventing PEP.
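The quoted operating characteristics follow directly from the counts in the abstract, taking a baseline native renalase of 6.0 µg/mL or less as the positive test result.

```python
# Reproduce the sensitivity and specificity quoted above from the raw counts,
# treating baseline native renalase <= 6.0 ug/mL as a positive test for PEP.
pep_total, pep_above_cutoff = 31, 1          # PEP patients; only 1 had renalase > 6.0
no_pep_total, no_pep_above_cutoff = 242, 38  # non-PEP patients; 38 had renalase > 6.0

sensitivity = (pep_total - pep_above_cutoff) / pep_total      # 30/31 ~= 0.97
specificity = no_pep_above_cutoff / no_pep_total              # 38/242 ~= 0.16
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```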

4.
J Addict Med ; 16(3): 333-339, 2022.
Article in English | MEDLINE | ID: mdl-34483278

ABSTRACT

OBJECTIVES: Treatment of hepatitis C virus (HCV) infection with direct-acting antiviral therapy is encouraged regardless of substance use status. Patients with substance use disorder (SUD) are at risk of HCV reinfection after cure. Follow-up viral load testing (FUVL) with HCV RNA is recommended. We investigated factors associated with adoption of FUVL in real-world clinical settings. METHODS: Medical records of all patients with SUD who achieved HCV cure with direct-acting antivirals at a multidisciplinary addiction treatment program between 2014 and 2019 were reviewed as part of a quality improvement initiative. Demographic and clinical characteristics including SUD treatment, urine toxicology results, and medical service use were collected. Factors associated with FUVL were analyzed and the rate of HCV reinfection was determined. RESULTS: Among 149 patients, 58.4% received FUVL. Receipt of FUVL was associated with engagement in ongoing primary medical care after cure (AOR 4.39, 95% CI [1.67, 11.49]). The HCV reinfection rate among those who received FUVL was 1.95 per 100 person-years of follow-up (95% CI [0.64, 5.98]). There was no significant difference in the percentage of negative urine toxicology results before and after cure. CONCLUSIONS: Over half of a cohort of patients with substance use disorder cured of HCV received FUVL. The relationship between FUVL and engagement in primary medical and substance use treatment highlights the importance of integrated systems in providing longitudinal care for patients cured of HCV. Standardized interventions that facilitate FUVL and management of infectious complications of SUD in addiction treatment settings are needed.
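The reinfection rate reported above is an incidence rate per 100 person-years; a generic sketch of that calculation with an exact (Garwood) Poisson interval is shown below, using a hypothetical event count and person-time rather than the study's actual data.

```python
# Illustrative sketch of a reinfection rate per 100 person-years with an
# exact (Garwood) Poisson confidence interval. The event count and person-time
# below are hypothetical; the abstract reports only the resulting rate and CI.
from scipy.stats import chi2

def incidence_rate_ci(events, person_years, alpha=0.05):
    rate = events / person_years
    lower = chi2.ppf(alpha / 2, 2 * events) / (2 * person_years) if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return 100 * rate, 100 * lower, 100 * upper   # per 100 person-years

# Hypothetical example: 3 reinfections over 150 person-years of follow-up.
print(incidence_rate_ci(events=3, person_years=150))
```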


Subject(s)
Hepatitis C, Chronic , Hepatitis C , Substance-Related Disorders , Antiviral Agents/adverse effects , Follow-Up Studies , Hepacivirus , Hepatitis C/drug therapy , Hepatitis C/epidemiology , Hepatitis C, Chronic/drug therapy , Humans , Primary Health Care , Reinfection , Substance-Related Disorders/drug therapy , Substance-Related Disorders/therapy , Viral Load
5.
Hepatol Commun ; 6(2): 270-280, 2022 02.
Article in English | MEDLINE | ID: mdl-34520633

ABSTRACT

Liver test abnormalities are frequently observed in patients with coronavirus disease 2019 (COVID-19) and are associated with worse prognosis. However, information is limited about pathological changes in the liver in this infection, so the mechanism of liver injury is unclear. Here we describe liver histopathology and clinical correlates of 27 patients who died of COVID-19 in Manaus, Brazil. There was a high prevalence of liver injury (elevated alanine aminotransferase and aspartate aminotransferase in 44% and 48% of patients, respectively) in these patients. Histological analysis showed sinusoidal congestion and ischemic necrosis in more than 85% of the cases, but these appeared to be secondary to systemic rather than intrahepatic thrombotic events, as only 14% and 22% of samples were positive for CD61 (marker of platelet activation) and C4d (activated complement factor), respectively. Furthermore, the extent of these vascular findings did not correlate with the extent of transaminase elevations. Steatosis was present in 63% of patients, and portal inflammation was present in 52%. In most cases, hepatocytes expressed angiotensin-converting enzyme 2 (ACE2), which is responsible for binding and entry of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), even though this ectoenzyme was minimally expressed on hepatocytes in normal controls. However, SARS-CoV-2 staining was not observed. Most hepatocytes also expressed inositol 1,4,5-trisphosphate receptor 3 (ITPR3), a calcium channel that becomes expressed in acute liver injury. Conclusion: The hepatocellular injury that commonly occurs in patients with severe COVID-19 is not due to the vascular events that contribute to pulmonary or cardiac damage. However, new expression of ACE2 and ITPR3 with concomitant inflammation and steatosis suggests that liver injury may result from inflammation, metabolic abnormalities, and perhaps direct viral injury.


Subject(s)
COVID-19/complications , Liver Diseases/pathology , Liver Diseases/virology , Liver/pathology , Liver/virology , Adult , Aged , Aged, 80 and over , Brazil , COVID-19/mortality , COVID-19/pathology , COVID-19/physiopathology , Female , Humans , Liver/physiopathology , Liver Diseases/diagnosis , Liver Diseases/physiopathology , Liver Function Tests , Male , Middle Aged
6.
J Trauma Stress ; 34(5): 905-916, 2021 10.
Article in English | MEDLINE | ID: mdl-34644417

ABSTRACT

The link between socioeconomic status and posttraumatic stress disorder (PTSD) symptoms is well established. Given that Black women are disproportionately burdened by both poverty and PTSD symptoms, research focusing on these constructs among this population is needed. The current study assessed the association between material hardship (i.e., difficulty meeting basic needs) and PTSD symptoms among 227 low-income Black women in the United States. We explored several potential explanations for the association between poverty and PTSD symptoms (e.g., individuals living in poverty may experience higher levels of trauma exposure; individuals living in poverty may have less access to relevant protective resources, like social support; poverty itself may represent a traumatic stressor). Using robust negative binomial regression, a positive association between material hardship and PTSD symptoms emerged, B = 0.10, p = .009, SMD = 0.08. When trauma exposure was added to the model, it was positively associated with PTSD symptoms, B = 0.18, p < .001, SMD = 0.16, and material hardship remained positively associated with PTSD symptoms, B = 0.10, p = .019, SMD = 0.08. When social support indicators were added to the model, they were not associated with PTSD symptoms; however, material hardship remained significantly associated, B = 0.10, p = .021, SMD = 0.08. In the model with material hardship and trauma exposure, a significant interaction between material hardship and trauma exposure on PTSD symptoms emerged, B = -0.04, p = .027. These results demonstrate the importance of including material hardship in trauma research, assessment, and treatment.
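A minimal sketch of a robust negative binomial regression of this kind is shown below, run on simulated data with hypothetical variable names rather than the study's dataset.

```python
# Sketch of a robust negative binomial regression like the one described above,
# on simulated data (variable names and values are hypothetical): symptom counts
# regressed on material hardship and trauma exposure with robust standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 227
hardship = rng.poisson(2, n)            # count of unmet basic needs (hypothetical)
trauma = rng.poisson(3, n)              # count of trauma exposure types (hypothetical)
mu = np.exp(1.5 + 0.10 * hardship + 0.18 * trauma)
symptoms = rng.negative_binomial(n=5, p=5 / (5 + mu))  # overdispersed outcome counts

X = sm.add_constant(np.column_stack([hardship, trauma]))
model = sm.GLM(symptoms, X, family=sm.families.NegativeBinomial(alpha=0.2))
result = model.fit(cov_type="HC0")      # heteroskedasticity-robust "sandwich" SEs
print(result.summary(xname=["const", "hardship", "trauma"]))
```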


Subject(s)
Stress Disorders, Post-Traumatic , Female , Humans , Poverty , Social Class , Social Support , Stress Disorders, Post-Traumatic/epidemiology , United States/epidemiology
7.
Am J Clin Pathol ; 156(5): 802-809, 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-33940622

ABSTRACT

OBJECTIVES: In compensated cirrhosis, thick fibrous septa and small nodules on liver biopsy specimens correlate with the presence of clinically significant portal hypertension (CSPH). In turn, CSPH is the strongest predictor of cirrhosis decompensation. The aim of the study was to correlate liver biopsy specimen characteristics with the development of decompensation in patients with compensated cirrhosis. METHODS: Patients with compensated cirrhosis and a concurrent liver biopsy specimen were reviewed. Semiquantitative grading of septal thickness and nodule size was performed. Primary end point was development of clinical decompensation. In total, 168 patients (median age, 49 years; 76% men) were included in the study; the most common etiology was viral. RESULTS: In a median follow-up of 50 months, 43 (26%) patients developed clinical decompensation (60% ascites, 16% encephalopathy, 12% variceal hemorrhage, 7% jaundice, and 5% mixed). On univariate analysis, septal width was significantly associated with decompensation, but nodule size was not. On multivariate analysis including model for end-stage liver disease score, serum albumin, and septal width, albumin and septal width were independent predictors of decompensation. CONCLUSIONS: Histologic cirrhosis in compensated patients can be subclassified by severity based on septal thickness, with thick septa denoting worse prognosis.


Subject(s)
Liver Cirrhosis/complications , Liver Cirrhosis/pathology , Adult , Aged , Ascites/etiology , Esophageal and Gastric Varices , Female , Gastrointestinal Hemorrhage/etiology , Hepatic Encephalopathy/etiology , Humans , Hypertension, Portal/etiology , Jaundice/etiology , Male , Middle Aged , Prognosis
8.
Clin Gastroenterol Hepatol ; 19(10): 2182-2191.e7, 2021 10.
Article in English | MEDLINE | ID: mdl-34004326

ABSTRACT

BACKGROUND & AIMS: Coronavirus-19 disease (COVID-19) is associated with hepatocellular liver injury of uncertain significance. We aimed to determine whether development of significant liver injury during hospitalization is related to concomitant medications or processes common in COVID-19 (eg, ischemia, hyperinflammatory, or hypercoagulable states), and whether it can result in liver failure and death. METHODS: A total of 834 consecutive patients hospitalized with COVID-19 were included. Clinical, medication, and laboratory data were obtained at admission and throughout hospitalization using an identified database. Significant liver injury was defined as an aspartate aminotransferase (AST) level 5 or more times the upper limit of normal; ischemia was defined as vasopressor use for a minimum of 2 consecutive days; hyperinflammatory state was defined as a high-sensitivity C-reactive protein value of 100 mg/L or more; and hypercoagulability was defined as D-dimer 5 mg/L or more at any time during hospitalization. RESULTS: A total of 105 (12.6%) patients developed significant liver injury. Compared with patients without significant liver injury, ischemia (odds ratio [OR], 4.3; range, 2.5-7.4; P < .0001) and tocilizumab use (OR, 3.6; range, 1.9-7.0; P = .0001) were independent predictors of significant liver injury. Although AST correlated closely with alanine aminotransferase (R = 0.89) throughout hospitalization, AST did not correlate with the international normalized ratio (R = 0.10) or with bilirubin level (R = 0.09). Death during hospitalization occurred in 136 (16.3%) patients. Multivariate logistic regression showed that significant liver injury was not associated with death (OR, 1.4; range, 0.8-2.6; P = .2), while ischemic (OR, 2.4; range, 1.4-4.0; P = .001), hypercoagulable (OR, 1.7; range, 1.1-2.6; P = .02), and hyperinflammatory (OR, 1.9; range, 1.2-3.1; P = .02) disease states were significant predictors of death. CONCLUSIONS: Liver test abnormalities known to be associated with COVID-19 are secondary to other insults, mostly ischemia or drug-induced liver injury, and do not lead to liver insufficiency or death.
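A brief sketch, on simulated data, of how adjusted odds ratios of this kind are obtained from a multivariable logistic regression; the predictor names and effect sizes are taken from the abstract only to generate plausible fake data.

```python
# Sketch (on simulated data) of how adjusted odds ratios are obtained:
# fit a multivariable logistic regression, then exponentiate the coefficients
# and their confidence limits. Variable names and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 834
ischemia = rng.binomial(1, 0.15, n)
tocilizumab = rng.binomial(1, 0.10, n)
logit = -2.5 + np.log(4.3) * ischemia + np.log(3.6) * tocilizumab
liver_injury = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([ischemia, tocilizumab]))
fit = sm.Logit(liver_injury, X).fit(disp=False)

odds_ratios = np.exp(fit.params)             # adjusted ORs
or_ci = np.exp(fit.conf_int())               # 95% CIs on the OR scale
print(np.column_stack([odds_ratios, or_ci]))
```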


Subject(s)
COVID-19 , Hepatic Insufficiency , Hospitalization , Humans , Retrospective Studies , SARS-CoV-2
9.
J Trauma Stress ; 34(3): 628-640, 2021 06.
Article in English | MEDLINE | ID: mdl-33650202

ABSTRACT

Cross-sectional research suggests that posttraumatic stress symptoms (PTSS) among war zone veterans are associated with functional impairment and poor quality of life. Less is known about the long-term functional repercussions of PTSS. This study of Iraq War veterans examined the associations between increases in PTSS and long-term functional outcomes, including the potential contributions of neurocognitive decrements. Service members and veterans (N = 594) completed self-report measures of functioning and PTSS severity before Iraq War deployment and again after their return (M = 9.3 years postdeployment). Some participants (n = 278) also completed neurocognitive testing at both times. Multiple regression analyses with the full sample (adjusted for TBI, demographic characteristics, military variables, and predeployment PTSS and functioning) revealed that increased PTSS severity over time was significantly associated with unemployment, aOR = 1.04, 95% CI [1.03, 1.06]; poorer work performance; and poorer physical, emotional, and cognitive health-related functioning at long-term follow-up, f²s = 0.37-1.79. Among participants who completed neurocognitive testing, a decline in select neurocognitive measures was associated with poorer functioning; however, neurocognitive decrements did not account for associations between increased PTSS and unemployment, aOR = 1.04, 95% CI [1.02, 1.07], with the size and direction upheld after adding neurocognitive variables, or poorer functional outcomes, with small increases after adding neurocognitive measures to the models, f²s = 0.03-0.10. War zone veterans experiencing long-term increased PTSS and/or neurocognitive decrements may be at elevated risk for higher-level functional impairment over time, suggesting that early PTSS management may enhance long-term functioning.


Subject(s)
Stress Disorders, Post-Traumatic , Veterans , Cross-Sectional Studies , Humans , Iraq , Quality of Life , Stress Disorders, Post-Traumatic/epidemiology
10.
Hum Pathol ; 105: 67-73, 2020 11.
Article in English | MEDLINE | ID: mdl-32941964

ABSTRACT

Increasing evidence suggests that bile reflux (BR) plays a major role in mucosal injury, leading to adenocarcinoma of the proximal stomach and distal esophagus. However, gastric BR is difficult to diagnose and investigate. Reactive gastropathy (RG), in the absence of nonsteroidal anti-inflammatory drugs (NSAIDs) and other known causes, likely represents bile-mediated injury to the gastric mucosa. The goal of this study is to explore the association between antral RG and gastroesophageal junction (GEJ) mucosal inflammation and intestinal metaplasia (IM). The pathology database was searched for patients who had gastric biopsies with a diagnosis of antral RG and concurrent gastric cardia/GEJ/distal esophagus biopsies from 2013 to 2015. Age- and sex-matched patients with normal gastric antral biopsies served as controls. Biopsies from the GEJ region were evaluated for histological changes, including inflammation, antral and pancreatic metaplasia, RG, the type of gastric glands, proton pump inhibitor (PPI) changes, and IM. Detailed clinical history and medication use (including PPIs and NSAIDs) were recorded. IM in the GEJ region was more frequent in patients with antral RG than in controls (33.0% vs. 5.2%, 95% confidence interval [18.3-37.3%]). In addition, inflammation, other mucosal changes around the GEJ (RG and foveolar hyperplasia), antral IM, and PPI-associated mucosal changes were also more frequently seen in patients with antral RG. Our results show that antral RG is associated with mucosal injury and IM around GEJ, suggesting a role of BR. Further studies are needed to study duodenogastric-esophageal BR and its role in development of proximal gastric and distal esophageal adenocarcinoma.
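The group comparison above is a difference in proportions; a generic Wald-interval sketch is shown below with hypothetical group sizes, since the abstract does not report the denominators.

```python
# Sketch of a Wald confidence interval for the difference in intestinal
# metaplasia proportions between the antral RG and control groups. The group
# sizes below are hypothetical; the abstract reports only the percentages and
# the resulting interval.
import math

def diff_in_proportions(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts giving roughly 33% vs 5% intestinal metaplasia.
print(diff_in_proportions(x1=33, n1=100, x2=5, n2=97))
```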


Subject(s)
Adenocarcinoma/pathology , Esophageal Neoplasms/pathology , Esophagogastric Junction/pathology , Gastric Mucosa/pathology , Pyloric Antrum/pathology , Stomach Neoplasms/pathology , Adenocarcinoma/etiology , Adult , Aged , Bile Reflux/complications , Biopsy , Databases, Factual , Esophageal Mucosa/pathology , Esophageal Neoplasms/etiology , Female , Humans , Male , Metaplasia , Middle Aged , Retrospective Studies , Stomach Neoplasms/etiology
11.
Hepatology ; 72(4): 1169-1176, 2020 10.
Article in English | MEDLINE | ID: mdl-32725890

ABSTRACT

BACKGROUND AND AIMS: The coronavirus-19 disease (COVID-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 virus, is associated with significant morbidity and mortality attributable to pneumonia, acute respiratory distress syndrome, and multiorgan failure. Liver injury has been reported as a nonpulmonary manifestation of COVID-19, but characterization of liver test abnormalities and their association with clinical outcomes is incomplete. APPROACH AND RESULTS: We conducted a retrospective cohort study of 1,827 patients with confirmed COVID-19 who were hospitalized within the Yale-New Haven Health System between March 14, 2020 and April 23, 2020. Clinical characteristics, liver tests (aspartate aminotransferase [AST], alanine aminotransferase [ALT], alkaline phosphatase [ALP], total bilirubin [TBIL], and albumin) at three time points (preinfection baseline, admission, and peak hospitalization), and hospitalization outcomes (severe COVID-19, intensive care unit [ICU] admission, mechanical ventilation, and death) were analyzed. Abnormal liver tests were commonly observed in hospitalized patients with COVID-19, both at admission (AST 66.9%, ALT 41.6%, ALP 13.5%, and TBIL 4.3%) and peak hospitalization (AST 83.4%, ALT 61.6%, ALP 22.7%, and TBIL 16.1%). Most patients with abnormal liver tests at admission had minimal elevations 1-2× the upper limit of normal (ULN; AST 63.7%, ALT 63.5%, ALP 80.0%, and TBIL 75.7%). A significant proportion of these patients had abnormal liver tests prehospitalization (AST 25.9%, ALT 38.0%, ALP 56.8%, and TBIL 44.4%). Multivariate analysis revealed an association between abnormal liver tests and severe COVID-19, including ICU admission, mechanical ventilation, and death; associations with age, male sex, body mass index, and diabetes mellitus were also observed. Medications used in COVID-19 treatment (lopinavir/ritonavir, hydroxychloroquine, remdesivir, and tocilizumab) were associated with peak hospitalization liver transaminase elevations >5× ULN. CONCLUSIONS: Abnormal liver tests occur in most hospitalized patients with COVID-19 and may be associated with poorer clinical outcomes.


Subject(s)
COVID-19/physiopathology , Liver/physiopathology , SARS-CoV-2 , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Hospitalization , Humans , Infant , Liver Function Tests , Male , Middle Aged , Retrospective Studies , Young Adult
12.
J Addict Med ; 14(6): e337-e343, 2020 12.
Article in English | MEDLINE | ID: mdl-32530887

ABSTRACT

OBJECTIVES: Cirrhosis is often a consequence of substance use disorders (SUD) and can lead to significant morbidity, mortality, and hospitalizations. We aimed to determine presence and impact of SUD in recently hospitalized patients with cirrhosis, which has not been previously described. METHODS: This is a retrospective study of consecutive patients with cirrhosis seen at a post-discharge hepatology clinic. The presence of clinically-recognized SUD and documented establishment of addiction treatment, as noted in routine clinical care, was determined through medical record review. Number of hospitalizations, 30-day readmissions, and all-cause mortality at 1 year were also examined. RESULTS: Among 99 patients, 72% were male and the median age was 55 years. The most common etiologies of cirrhosis were alcohol-related liver disease and hepatitis C infection. Alcohol use disorder was documented in 71%. Nearly all patients with clinically-recognized SUD underwent social work evaluation during hospitalization and 65% were referred to addiction treatment. Establishment of addiction care at follow up was documented in 35%. Documented SUD was associated with greater odds of hospitalization over 1 year (adjusted odds ratio 5.77, 95% confidence interval [1.36, 24.49], P = 0.017), but not with 30-day readmissions or mortality. CONCLUSIONS: Clinically-recognized SUD was common in recently hospitalized patients with cirrhosis and associated with at least 1 other hospitalization within a year. Establishment of addiction treatment was documented in only a minority of patients. Further research is needed to determine whether patients with cirrhosis and SUD experience unique barriers to addiction treatment and if integration of SUD care in hepatology settings may be beneficial.


Subject(s)
Aftercare , Substance-Related Disorders , Hospitalization , Humans , Liver Cirrhosis/epidemiology , Liver Cirrhosis/therapy , Male , Middle Aged , Patient Discharge , Retrospective Studies , Substance-Related Disorders/epidemiology
13.
J Child Fam Stud ; 29(10): 2667-2677, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33776388

ABSTRACT

Black female primary caregivers who receive Temporary Assistance for Needy Families (TANF) are burdened not only by economic pressure but also by a disproportionate prevalence of psychological disorders. This is particularly pernicious given that poverty and maternal mental health impact child outcomes and may decrease the economic mobility of families. Consequently, it is imperative to understand the mechanisms that explain the association between economic pressure and child outcomes. The current study addressed this gap by testing an application of the Family Stress Model (FSM), which describes how economic pressure results in parental psychological distress, particularly depression, and in turn impacts parenting quality and child outcomes. Additionally, social support was assessed as a potential culturally-salient protective factor within the model. Four hundred sixteen Black female primary caregivers who receive TANF were administered a series of measures assessing mental health and family wellbeing. Structural equation modeling was utilized to test a single model that incorporated all hypotheses. Maternal depression and quality of parenting serially mediated the relationship between economic pressure and school performance. The relationship between economic pressure and adverse child outcomes, however, was mediated only by maternal depression. Social support did not significantly moderate the relationship between economic pressure and maternal depression; however, it did demonstrate a significant direct effect on maternal depression. The current study corroborates the application of FSM to another population. Further, it demonstrates the importance of interventions that target maternal mental health, parenting, social support, and family economic mobility as well as system-level policy interventions to address poverty.

14.
Stat Methods Med Res ; 28(6): 1852-1878, 2019 06.
Article in English | MEDLINE | ID: mdl-29869564

ABSTRACT

When designing studies involving a continuous endpoint, the hypothesized difference in means (θ_A) and the assumed variability of the endpoint (σ²) play an important role in sample size and power calculations. Traditional methods of sample size re-estimation often update one or both of these parameters using statistics observed from an internal pilot study. However, the uncertainty in these estimates is rarely addressed. We propose a hybrid classical and Bayesian method to formally integrate prior beliefs about the study parameters and the results observed from an internal pilot study into the sample size re-estimation of a two-stage study design. The proposed method is based on a measure of power called conditional expected power (CEP), which averages the traditional power curve using the prior distributions of θ and σ² as the averaging weight, conditional on the presence of a positive treatment effect. The proposed sample size re-estimation procedure finds the second stage per-group sample size necessary to achieve the desired level of conditional expected interim power, an updated CEP calculation that conditions on the observed first-stage results. The CEP re-estimation method retains the assumption that the parameters are not known with certainty at an interim point in the trial. Notional scenarios are evaluated to compare the behavior of the proposed method of sample size re-estimation to three traditional methods.
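A rough Monte Carlo sketch of the CEP idea for a continuous endpoint is given below; the priors on θ and σ² and the design values are illustrative assumptions, not the article's worked example.

```python
# Rough Monte Carlo sketch of conditional expected power (CEP) for a two-arm
# continuous endpoint: average the classical power curve over priors on the
# effect theta and variance sigma^2, keeping only draws with theta > 0.
# The priors and design values below are illustrative assumptions.
import numpy as np
from scipy.stats import norm, invgamma

def classical_power(theta, sigma2, n_per_group, alpha=0.05):
    # Two-sided z-approximation for a two-sample comparison of means.
    se = np.sqrt(2 * sigma2 / n_per_group)
    return norm.cdf(np.abs(theta) / se - norm.ppf(1 - alpha / 2))

def conditional_expected_power(n_per_group, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.4, 0.2, n_draws)                                  # prior on the mean difference
    sigma2 = invgamma.rvs(a=10, scale=9, size=n_draws, random_state=rng)   # prior on the variance
    keep = theta > 0                                                       # condition on a positive effect
    return classical_power(theta[keep], sigma2[keep], n_per_group).mean()

for n in (30, 50, 80):
    print(n, round(conditional_expected_power(n), 3))
```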


Subject(s)
Bayes Theorem , Equivalence Trials as Topic , Sample Size , Endpoint Determination , Humans , Models, Statistical
15.
PLoS Med ; 15(10): e1002667, 2018 10.
Article in English | MEDLINE | ID: mdl-30300351

ABSTRACT

BACKGROUND: Sustained retention in HIV care (RIC) and viral suppression (VS) are central to US national HIV prevention strategies, but have not been comprehensively assessed in criminal justice (CJ) populations with known health disparities. The purpose of this study is to identify predictors of RIC and VS following release from prison or jail. METHODS AND FINDINGS: This is a retrospective cohort study of all adult people living with HIV (PLWH) incarcerated in Connecticut, US, during the period January 1, 2007, to December 31, 2011, and observed through December 31, 2014 (n = 1,094). Most cohort participants were unmarried (83.7%) men (77.0%) who were black or Hispanic (78.1%) and acquired HIV from injection drug use (72.6%). Prison-based pharmacy and custody databases were linked with community HIV surveillance monitoring and case management databases. Post-release RIC declined steadily over 3 years of follow-up (67.2% retained for year 1, 51.3% retained for years 1-2, and 42.5% retained for years 1-3). Compared with individuals who were not re-incarcerated, individuals who were re-incarcerated were more likely to meet RIC criteria (48% versus 34%; p < 0.001) but less likely to have VS (72% versus 81%; p = 0.048). Using multivariable logistic regression models (individual-level analysis for 1,001 individuals after excluding 93 deaths), both sustained RIC and VS at 3 years post-release were independently associated with older age (RIC: adjusted odds ratio [AOR] = 1.61, 95% CI = 1.22-2.12; VS: AOR = 1.37, 95% CI = 1.06-1.78), having health insurance (RIC: AOR = 2.15, 95% CI = 1.60-2.89; VS: AOR = 2.01, 95% CI = 1.53-2.64), and receiving an increased number of transitional case management visits. The same factors were significant when we assessed RIC and VS outcomes in each 6-month period using generalized estimating equations (for 1,094 individuals contributing 6,227 6-month periods prior to death or censoring). Additionally, receipt of antiretroviral therapy during incarceration (RIC: AOR = 1.33, 95% CI 1.07-1.65; VS: AOR = 1.91, 95% CI = 1.56-2.34), early linkage to care post-release (RIC: AOR = 2.64, 95% CI = 2.03-3.43; VS: AOR = 1.79; 95% CI = 1.45-2.21), and absolute time and proportion of follow-up time spent re-incarcerated were highly correlated with better treatment outcomes. Limited data were available on changes over time in injection drug use or other substance use disorders, psychiatric disorders, or housing status. CONCLUSIONS: In a large cohort of CJ-involved PLWH with a 3-year post-release evaluation, RIC diminished significantly over time, but was associated with HIV care during incarceration, health insurance, case management services, and early linkage to care post-release. While re-incarceration and conditional release provide opportunities to engage in care, reducing recidivism and supporting community-based RIC efforts are key to improving longitudinal treatment outcomes among CJ-involved PLWH.


Subject(s)
Anti-HIV Agents/therapeutic use , HIV Infections/drug therapy , Medication Adherence/statistics & numerical data , Patient Dropouts/statistics & numerical data , Prisoners/statistics & numerical data , Adult , Age Factors , Case Management/statistics & numerical data , Connecticut , Female , Humans , Insurance, Health/statistics & numerical data , Male , Middle Aged , Retrospective Studies , Sustained Virologic Response
16.
Lancet HIV ; 5(11): e617-e628, 2018 11.
Article in English | MEDLINE | ID: mdl-30197101

ABSTRACT

BACKGROUND: People transitioning from prisons or jails have high mortality, but data are scarce for people with HIV and no studies have integrated data from both criminal justice and community settings. We aimed to assess all-cause mortality in people with HIV released from an integrated system of prisons and jails in Connecticut, USA. METHODS: We linked pharmacy, custodial, death, case management, and HIV surveillance data from Connecticut Departments of Correction and Public Health to create a retrospective cohort of all adults with HIV released from jails and prisons in Connecticut between 2007 and 2014. We compared the mortality rate of adults with HIV released from incarceration with the general US and Connecticut populations, and modelled time-to-death from any cause after prison release with Cox proportional hazard models. FINDINGS: We identified 1350 people with HIV who were released after 24 h or more of incarceration between 2007 and 2014, of whom 184 (14%) died after index release; median age was 45 years (IQR 39-50) and median follow-up was 5·2 years (IQR 3·0-6·7) after index release. The crude mortality rate for people with HIV released from incarceration was 2868 deaths per 100 000 person-years, and the standardised mortality ratio showed that mortality was higher for this cohort than the general US population (6·97, 95% CI 5·96-7·97) and population of Connecticut (8·47, 7·25-9·69). Primary cause of death was reported for 170 individuals; the most common causes were HIV/AIDS (78 [46%]), drug overdose (26 [15%]), liver disease (17 [10%]), cardiovascular disease (16 [9%]), and accidental injury or suicide (13 [8%]). Black race (adjusted hazard ratio [HR] 0·52, 95% CI 0·34-0·80), having health insurance (0·09, 0·05-0·17), being re-incarcerated at least once for 365 days or longer (0·41, 0·22-0·76), and having a high percentage of re-incarcerations in which antiretroviral therapy was prescribed (0·08, 0·03-0·21) were protective against mortality. Positive predictors of time-to-death were age (≥50 years; adjusted HR 3·65, 95% CI 1·21-11·08), lower CD4 count (200-499 cells per µL, 2·54, 1·50-4·31; <200 cells per µL, 3·44, 1·90-6·20), a high number of comorbidities (1·86, 95% CI 1·23-2·82), virological failure (2·76, 1·94-3·92), and unmonitored viral load (2·13, 1·09-4·18). INTERPRETATION: To reduce mortality after release from incarceration in people with HIV, resources are needed to identify and treat HIV, in addition to medical comorbidities, psychiatric disorders, and substance use disorders, during and following incarceration. Policies that reduce incarceration and support integrated systems of care between prisons and communities could have a substantial effect on the survival of people with HIV. FUNDING: US National Institutes of Health.
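The standardised mortality ratios above are observed deaths divided by expected deaths; the sketch below approximately reproduces the US comparison, back-calculating the expected count from the reported SMR, which is an assumption made only for illustration.

```python
# Sketch of a standardized mortality ratio (SMR) with a normal-approximation
# confidence interval. Observed deaths come from the abstract; the expected
# count is back-calculated from the reported SMR of 6.97 and is only an
# illustrative approximation.
import math

def smr_with_ci(observed, expected, z=1.96):
    smr = observed / expected
    half_width = z * math.sqrt(observed)         # Poisson SE of the observed count
    return smr, (observed - half_width) / expected, (observed + half_width) / expected

observed_deaths = 184
expected_deaths = 184 / 6.97                     # ~26.4, implied by the reported SMR
print(smr_with_ci(observed_deaths, expected_deaths))  # ~ (6.97, 5.96, 7.98)
```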


Subject(s)
HIV Infections/mortality , Prisoners/statistics & numerical data , Prisons , Adult , Cause of Death , Connecticut , Female , Humans , Male , Middle Aged , Prisoners/psychology , Proportional Hazards Models , Retrospective Studies , Risk Factors
17.
Behav Ther ; 49(5): 653-667, 2018 09.
Article in English | MEDLINE | ID: mdl-30146134

ABSTRACT

The long-term mental health effects of war-zone deployment in the Iraq and Afghanistan wars on military personnel are a significant public health concern. Using data collected prospectively at three distinct assessments during 2003-2014 as part of the Neurocognition Deployment Health Study and VA Cooperative Studies Program Study #566, we explored how stress exposures prior, during, and after return from deployment influence the long-term mental health outcomes of posttraumatic stress disorder (PTSD), depression, anxiety disorders, and problem drinking. Longer-term mental health outcomes were assessed in 375 service members and military veterans an average of 7.5 years (standard deviation = 1.0 year) after the initial (i.e., "index") Iraq deployment following their predeployment assessment. Anxiety disorder was the most commonly observed long-term mental health outcome (36.0%), followed by depression (24.5%), PTSD (24.3%), and problem drinking (21.0%). Multivariable regression models showed that greater postdeployment stressors, as measured by the Post-Deployment Life Events scale, were associated with greater risk of depression, anxiety disorders, and problem drinking. Anxiety disorder was the only outcome affected by predeployment stress concerns. In addition, greater postdeployment social support was associated with lower risk of all outcomes except problem drinking. These findings highlight the importance of assessing postdeployment stress exposures, such as stressful or traumatic life events, given the potential impact of these stressors on long-term mental health outcomes. This study also highlights the importance of postdeployment social support as a modifiable protective factor that can be used to help mitigate risk of long-term adverse mental health outcomes following war-zone exposure.


Subject(s)
Iraq War, 2003-2011 , Mental Health/trends , Social Support , Stress Disorders, Post-Traumatic/psychology , Stress Disorders, Post-Traumatic/therapy , Veterans/psychology , Adult , Anxiety Disorders/psychology , Anxiety Disorders/therapy , Depressive Disorder/psychology , Depressive Disorder/therapy , Female , Follow-Up Studies , Humans , Longitudinal Studies , Male , Middle Aged , Military Personnel/psychology , Prospective Studies , Time Factors , Treatment Outcome , United States/epidemiology , Young Adult
18.
Lancet HIV ; 5(2): e96-e106, 2018 02.
Article in English | MEDLINE | ID: mdl-29191440

ABSTRACT

BACKGROUND: Incarceration provides an opportunity for engagement in HIV care but is associated with poor HIV treatment outcomes after release. We aimed to assess post-release linkage to HIV care (LTC) and the effect of transitional case management services. METHODS: To create a retrospective cohort of all adults with HIV released from jails and prisons in Connecticut, USA (2007-14), we linked administrative custody and pharmacy databases with mandatory HIV/AIDS surveillance monitoring and case management data. We examined time to LTC (defined as first viral load measurement after release) and viral suppression at LTC. We used generalised estimating equations to identify predictors of LTC within 14 days and 30 days of release. FINDINGS: Among 3302 incarceration periods for 1350 individuals between 2007 and 2014, 672 (21%) of 3181 periods had LTC within 14 days of release, 1042 (34%) of 3064 had LTC within 30 days of release, and 301 (29%) of 1042 had detectable viral loads at LTC. Factors positively associated with LTC within 14 days of release were intermediate (31-364 days) incarceration duration (adjusted odds ratio 1·52; 95% CI 1·19-1·95), transitional case management (1·65; 1·36-1·99), receipt of antiretroviral therapy during incarceration (1·39; 1·11-1·74), and two or more medical comorbidities (1·86; 1·48-2·36). Reincarceration (0·70; 0·56-0·88) and conditional release (0·62; 0·50-0·78) were negatively associated with LTC within 14 days. Hispanic ethnicity, bonded release, and psychiatric comorbidity were also associated with LTC within 30 days but reincarceration was not. INTERPRETATION: LTC after release is suboptimal but improves when inmates' medical, psychiatric, and case management needs are identified and addressed before release. People who are rapidly cycling through jail facilities are particularly vulnerable to missed linkage opportunities. The use of integrated programmes to align justice and health-care goals has great potential to improve long-term HIV treatment outcomes. FUNDING: US National Institutes of Health.


Subject(s)
Anti-HIV Agents/therapeutic use , Continuity of Patient Care , HIV Infections/drug therapy , Antiretroviral Therapy, Highly Active/methods , Female , HIV Infections/virology , Humans , Male , Outcome Assessment, Health Care , Population Surveillance , Prisoners/statistics & numerical data , Prisons , Retrospective Studies , Treatment Outcome , Viral Load
19.
Trials ; 18(1): 83, 2017 02 23.
Article in English | MEDLINE | ID: mdl-28231813

ABSTRACT

BACKGROUND: When designing studies that have a binary outcome as the primary endpoint, the hypothesized proportion of patients in each population experiencing the endpoint of interest (i.e., π₁, π₂) plays an important role in sample size and power calculations. Point estimates for π₁ and π₂ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. METHODS: This paper presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of π₁ and π₂ into the study's power calculation. Conditional expected power (CEP), which averages the traditional power curve using the prior distributions of π₁ and π₂ as the averaging weight conditional on the presence of a positive treatment effect (i.e., π₂ > π₁), is used, and the sample size is found that equates the pre-specified frequentist power (1-β) and the conditional expected power of the trial. RESULTS: Notional scenarios are evaluated to compare the probability of achieving a target value of power with a trial design based on traditional power and a design based on CEP. We show that if there is uncertainty in the study parameters and a distribution of plausible values for π₁ and π₂, the performance of the CEP design is more consistent and robust than traditional designs based on point estimates for the study parameters. Traditional sample size calculations based on point estimates for the hypothesized study parameters tend to underestimate the required sample size needed to account for the uncertainty in the parameters. The greatest marginal benefit of the proposed method is achieved when the uncertainty in the parameters is not large. CONCLUSIONS: Through this procedure, we are able to formally integrate prior information on the uncertainty and variability of the study parameters into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the sample size that is necessary to achieve a high level of CEP given the available prior information helps protect against misspecification of the hypothesized treatment effect and provides a substantiated estimate that forms the basis for discussion about the study's feasibility during the design phase.
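A rough Monte Carlo sketch of the binary-endpoint CEP calculation and the associated sample size search is given below; the Beta priors and the 80% target are illustrative assumptions, not the paper's settings.

```python
# Rough Monte Carlo sketch of conditional expected power (CEP) for a binary
# endpoint: average classical two-proportion power over Beta priors on pi_1
# and pi_2, conditional on pi_2 > pi_1, then search for the per-group sample
# size whose CEP reaches the target. Priors and target are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
pi1 = rng.beta(20, 60, 50_000)          # prior on the control event proportion
pi2 = rng.beta(30, 60, 50_000)          # prior on the treatment event proportion
keep = pi2 > pi1                        # condition on a positive treatment effect
pi1, pi2 = pi1[keep], pi2[keep]

def classical_power(p1, p2, n_per_group, alpha=0.05):
    se = np.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return norm.cdf(np.abs(p2 - p1) / se - norm.ppf(1 - alpha / 2))

def cep(n_per_group):
    return classical_power(pi1, pi2, n_per_group).mean()

target = 0.80
n = next(n for n in range(10, 5001, 10) if cep(n) >= target)
print(f"smallest per-group n (to the nearest 10) with CEP >= {target}: {n}")
```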


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Research Design , Sample Size , Humans
20.
J Am Geriatr Soc ; 64(6): 1274-80, 2016 06.
Article in English | MEDLINE | ID: mdl-27321606

ABSTRACT

OBJECTIVES: To measure the incidence of urinary incontinence (UI) over 10 years in older women who did not report UI at baseline in 1998, to estimate the prevalence of female UI according to severity and type, and to explore potential risk factors for development of UI. DESIGN: Secondary analysis of a prospective cohort. SETTING: Health and Retirement Study. PARTICIPANTS: Women participating in the Health and Retirement Study between 1998 and 2008 who did not have UI at baseline (1998). MEASUREMENTS: UI was defined as an answer of "yes" to the question, "During the last 12 months, have you lost any amount of urine beyond your control?" UI was characterized according to severity (according to the Sandvik Severity Index) and type (according to International Continence Society definitions) at each biennial follow-up between 1998 and 2008. RESULTS: In 1998, 5,552 women aged 51 to 74 reported no UI. The cumulative incidence of UI in older women was 37.2% (95% confidence interval (CI)=36.0-38.5%). The most common incontinence type at the first report of leakage was mixed UI (49.1%, 95% CI=46.5-51.7%), and women commonly reported their symptoms at first leakage as moderate to severe (46.4%, 95% CI=43.8-49.0%). CONCLUSION: Development of UI in older women was common and tended to result in mixed type and moderate to severe symptoms.
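As a sanity check, a simple unweighted binomial interval on the quoted counts approximately reproduces the reported cumulative incidence and its confidence interval; the study itself may have used survey weights, so this is only an approximation.

```python
# Sketch: an unweighted binomial Wald interval approximately reproduces the
# reported 10-year cumulative incidence and its CI. The number of incident
# cases is back-calculated from the 37.2% estimate, an approximation only.
import math

n_at_risk = 5552
incident_cases = round(0.372 * n_at_risk)        # back-calculated from the 37.2% estimate

p = incident_cases / n_at_risk
half_width = 1.96 * math.sqrt(p * (1 - p) / n_at_risk)
print(f"cumulative incidence = {100*p:.1f}% "
      f"(95% CI {100*(p - half_width):.1f}-{100*(p + half_width):.1f}%)")
```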


Subject(s)
Urinary Incontinence/epidemiology , Aged , Female , Humans , Incidence , Longitudinal Studies , Prevalence , Prospective Studies , Surveys and Questionnaires , United States/epidemiology