Results 1 - 20 of 36
1.
Ann Epidemiol ; 94: 81-90, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38710239

ABSTRACT

PURPOSE: Identifying predictors of opioid overdose following release from prison is critical for opioid overdose prevention. METHODS: We leveraged an individually linked, state-wide database from 2015 to 2020 to predict the risk of opioid overdose within 90 days of release from Massachusetts state prisons. We developed two decision tree modeling schemes: a single model fit to all individuals, with a weight applied to those who experienced an opioid overdose, and models stratified by race/ethnicity. We compared the performance of each model using several performance measures and identified factors that were most predictive of opioid overdose within racial/ethnic groups and across models. RESULTS: Of 44,246 prison releases in Massachusetts between 2015 and 2020, 2237 (5.1%) resulted in opioid overdose in the 90 days following release. The performance of the two predictive models varied. The single-weight model had high sensitivity (79%) and low specificity (56%) for predicting opioid overdose and was more sensitive for White non-Hispanic individuals (sensitivity = 84%) than for racial/ethnic minority individuals. CONCLUSIONS: Stratified models had better-balanced performance metrics for both White non-Hispanic and racial/ethnic minority groups and identified different predictors of overdose between racial/ethnic groups. Across racial/ethnic groups and models, involuntary commitment (involuntary treatment for alcohol/substance use disorder) was an important predictor of opioid overdose.
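A minimal sketch of the two modeling schemes described above, using scikit-learn on synthetic data; the weight value, tree depth, features, and group coding are illustrative assumptions, not the study's specification.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))                       # stand-ins for linked covariates
group = rng.integers(0, 2, n)                     # 0/1 race-ethnicity group (illustrative)
y = rng.binomial(1, 0.05, n)                      # ~5% overdose rate, as in the study

def sens_spec(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Scheme 1: one model for everyone, upweighting the rare overdose class
single = DecisionTreeClassifier(class_weight={0: 1, 1: 19}, max_depth=4, random_state=0)
single.fit(X, y)
print("single-weight model:", sens_spec(y, single.predict(X)))

# Scheme 2: separate models stratified by race/ethnicity
for g in (0, 1):
    m = DecisionTreeClassifier(class_weight="balanced", max_depth=4, random_state=0)
    m.fit(X[group == g], y[group == g])
    print(f"stratified model, group {g}:", sens_spec(y[group == g], m.predict(X[group == g])))
```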


Subject(s)
Decision Trees , Opiate Overdose , Humans , Male , Opiate Overdose/epidemiology , Adult , Female , Massachusetts/epidemiology , Opioid-Related Disorders/epidemiology , Opioid-Related Disorders/ethnology , Prisoners/statistics & numerical data , Prisons/statistics & numerical data , Middle Aged , Analgesics, Opioid/poisoning , Analgesics, Opioid/adverse effects , Ethnicity/statistics & numerical data , Young Adult
2.
Health Justice ; 12(1): 11, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38472497

ABSTRACT

BACKGROUND: Currently, there are more than two million people in prisons or jails, with nearly two-thirds meeting the criteria for a substance use disorder. Following these patterns, overdose is the leading cause of death following release from prison and the third leading cause of death during periods of incarceration in jails. Traditional quantitative methods analyzing the factors associated with overdose following incarceration may fail to capture structural and environmental factors present in specific communities. People with lived experiences in the criminal legal system and with substance use disorder hold unique perspectives and must be involved in the research process. OBJECTIVE: To identify perceived factors that impact overdose following release from incarceration among people with direct criminal legal involvement and experience with substance use. METHODS: Within a community-engaged approach to research, we used concept mapping to center the perspectives of people with personal experience with the carceral system. The following prompt guided our study: "What do you think are some of the main things that make people who have been in jail or prison more and less likely to overdose?" Individuals participated in three rounds of focus groups, which included brainstorming, sorting and rating, and community interpretation. We used the Concept Systems Inc. platform groupwisdom for our analyses and constructed cluster maps. RESULTS: Eight individuals (ages 33 to 53) from four states participated. The brainstorming process resulted in 83 unique factors that impact overdose. The concept mapping process resulted in five clusters: (1) Community-Based Prevention, (2) Drug Use and Incarceration, (3) Resources for Treatment for Substance Use, (4) Carceral Factors, and (5) Stigma and Structural Barriers. CONCLUSIONS: Our study provides critical insight into community-identified factors associated with overdose following incarceration. These factors should be accounted for during resource planning and decision-making.
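The quantitative core of concept mapping is turning participants' card sorts into a similarity matrix and clustering it. A hedged sketch of that step follows; the pile counts and data are synthetic stand-ins, and the study itself used the groupwisdom platform rather than this code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_statements, n_participants = 83, 8              # 83 factors, 8 participants, as above
co = np.zeros((n_statements, n_statements))
for _ in range(n_participants):
    piles = rng.integers(0, 6, n_statements)      # each participant sorts into ~6 piles
    co += (piles[:, None] == piles[None, :])      # +1 when two statements share a pile

dist = n_participants - co                        # more co-sorting = more similar
condensed = dist[np.triu_indices(n_statements, k=1)]
Z = linkage(condensed, method="average")
clusters = fcluster(Z, t=5, criterion="maxclust") # five clusters, as in the study
print(np.bincount(clusters)[1:])                  # statements per cluster
```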

3.
J Cancer Policy ; 39: 100460, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38061493

ABSTRACT

In India the cancer burden for 2021 was 26.7 million disability-adjusted life years (DALYs), and this is expected to increase to 29.8 million in 2025 (Kulothungan et al., 2022). According to the World Health Organisation (WHO), cancer is a leading cause of death worldwide, accounting for one in six deaths. As per the WHO, palliative care is a strategy that assists both adults and children, along with their families, in dealing with life-threatening illnesses. Currently, only 14% of those in need of pain and palliative (P&P) care receive it globally (WHO, 2020). Financial toxicity (FT) is the term used to describe the negative effects that an excessive financial burden resulting from cancer has on patients, their families, and society (Desai and Gyawali, 2020). Addressing this gap will require significant adjustments to both demand- and supply-side policies to ensure accessible and equitable cancer care in India (Caduff et al., 2019). Measuring FT along with health-related quality of life (HRQoL) represents a clinically relevant and patient-centred approach (de Souza et al., 2017). AIM AND OBJECTIVE: To estimate FT and its association with quality of life (QoL). MATERIALS AND METHODS: This was an observational descriptive study conducted among cancer patients recommended for P&P care. Scores were estimated from September 2022 to February 2023 using validated tools: the Functional Assessment of Chronic Illness Therapy - Comprehensive Score for Financial Toxicity (FACIT-COST) and the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire for Cancer (QLQ-C30). RESULTS: Among 150 patients (70 males and 80 females, mean age 54.96 ± 13.5 years), 92.6% suffered from FT. Eleven patients (7.3%) had FT grade 0, 41 (27.3%) had FT grade 1, 98 (65.3%) had FT grade 2, and no patients had FT grade 3. At a critical alpha of 0.05 (95% CI), FT and the global score for HRQoL showed an association. Among inpatient department (IPD) expenses, medication bills contributed the greatest share at 33%, and among outpatient department (OPD) expenses, treatment expenses contributed 50% of the total. Breast cancer (30 cases, 20%) and oral cancer (26 cases, 17.3%) were the most frequent cancers. CONCLUSION: FT measured using the COST tool showed an association with HRQoL. POLICY SUMMARY: This paper refers to the insurance policies available for cancer patients irrespective of P&P care treatment.
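One plausible form of the FT-HRQoL association test, sketched below as a rank correlation on synthetic scores; the score ranges are the instruments' published scales, but the data and the choice of Spearman correlation are assumptions, not the study's exact analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 150                                           # sample size from the abstract
cost = rng.uniform(0, 44, n)                      # FACIT-COST total (0-44 scale)
qol = np.clip(30 + 1.2 * cost + rng.normal(0, 10, n), 0, 100)  # QLQ-C30 global (0-100)

rho, p = spearmanr(cost, qol)                     # test at alpha = 0.05
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```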


Subject(s)
Breast Neoplasms , Quality of Life , Male , Adult , Child , Female , Humans , Middle Aged , Aged , Financial Stress , Palliative Care/methods , Pain Management , Pain
4.
J Community Health ; 49(1): 91-99, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37507525

ABSTRACT

Occupational exposure to SARS-CoV-2 varies by profession, but "essential workers" are often considered in aggregate in COVID-19 models. This aggregation complicates efforts to understand risks to specific types of workers or industries and to target interventions, particularly toward non-healthcare workers. We used census tract-resolution American Community Survey data to develop novel essential worker categories among the occupations designated as COVID-19 Essential Services in Massachusetts. Census tract-resolution COVID-19 cases and deaths were provided by the Massachusetts Department of Public Health. We evaluated the association between essential worker categories and cases and deaths over two phases of the pandemic from March 2020 to February 2021 using adjusted mixed-effects negative binomial regression, controlling for other sociodemographic risk factors. We observed elevated COVID-19 case incidence in census tracts in the highest tertile of workers in construction/transportation/buildings maintenance (Phase 1: IRR 1.32 [95% CI 1.22, 1.42]; Phase 2: IRR 1.19 [1.13, 1.25]), production (Phase 1: IRR 1.23 [1.15, 1.33]; Phase 2: IRR 1.18 [1.12, 1.24]), and public-facing sales and services occupations (Phase 1: IRR 1.14 [1.07, 1.21]; Phase 2: IRR 1.10 [1.06, 1.15]). We found reduced case incidence associated with a greater percentage of essential workers able to work from home (Phase 1: IRR 0.85 [0.78, 0.94]; Phase 2: IRR 0.83 [0.77, 0.88]). Similar, though attenuated, trends were observed in the associations between essential worker categories and deaths. Estimating industry-specific risk for essential workers is important in targeting interventions for COVID-19 and other diseases, and our categories provide a reproducible and straightforward way to support such efforts.
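The mixed-effects negative binomial model itself is not reproduced here, but a simplified fixed-effects analogue with a population offset conveys how the IRRs above arise; all data and coefficients below are synthetic assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1200                                          # census tracts (synthetic)
pct_construction = rng.uniform(0, 0.3, n)         # worker-category shares (illustrative)
pct_wfh = rng.uniform(0, 0.5, n)
pop = rng.integers(1000, 8000, n)
mu = np.exp(-4 + 1.0 * pct_construction - 0.6 * pct_wfh) * pop
cases = rng.poisson(mu)                           # Poisson stand-in for NB counts

X = sm.add_constant(np.column_stack([pct_construction, pct_wfh]))
res = sm.GLM(cases, X, family=sm.families.NegativeBinomial(alpha=1.0),
             offset=np.log(pop)).fit()            # offset makes coefficients rate ratios
print(np.exp(res.params[1:]))                     # IRRs per unit covariate change
print(np.exp(res.conf_int()[1:]))                 # 95% CIs on the IRR scale
```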


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , SARS-CoV-2 , Occupations , Industry , Massachusetts/epidemiology
5.
Clin Kidney J ; 16(1): 90-99, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36726432

ABSTRACT

Background: Protein biomarkers may provide insight into kidney disease pathology but their use for the identification of phenotypically distinct kidney diseases has not been evaluated. Methods: We used unsupervised hierarchical clustering on 225 plasma biomarkers in 541 individuals enrolled into the Boston Kidney Biopsy Cohort, a prospective cohort study of individuals undergoing kidney biopsy with adjudicated histopathology. Using principal component analysis, we studied biomarker levels by cluster and examined differences in clinicopathologic diagnoses and histopathologic lesions across clusters. Cox proportional hazards models tested associations of clusters with kidney failure and death. Results: We identified three biomarker-derived clusters. The mean estimated glomerular filtration rate was 72.9 ± 28.7, 72.9 ± 33.4 and 39.9 ± 30.4 mL/min/1.73 m2 in Clusters 1, 2 and 3, respectively. The top-contributing biomarker in Cluster 1 was AXIN, a negative regulator of the Wnt signaling pathway. The top-contributing biomarker in Clusters 2 and 3 was Placental Growth Factor, a member of the vascular endothelial growth factor family. Compared with Cluster 1, individuals in Cluster 3 were more likely to have tubulointerstitial disease (P < .001) and diabetic kidney disease (P < .001) and had more severe mesangial expansion [odds ratio (OR) 2.44, 95% confidence interval (CI) 1.29, 4.64] and inflammation in the fibrosed interstitium (OR 2.49, 95% CI 1.02, 6.10). After multivariable adjustment, Cluster 3 was associated with higher risks of kidney failure (hazard ratio 3.29, 95% CI 1.37, 7.90) compared with Cluster 1. Conclusion: Plasma biomarkers may identify clusters of individuals with kidney disease that associate with different clinicopathologic diagnoses, histopathologic lesions and adverse outcomes, and may uncover biomarker candidates and relevant pathways for further study.
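A compact sketch of the pipeline (unsupervised hierarchical clustering on biomarkers, then Cox models on cluster membership) using scipy and lifelines; dimensions, covariates, and outcomes below are reduced synthetic stand-ins, not the cohort data.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n, p = 541, 25                                    # 25 markers stand in for the 225
biomarkers = rng.normal(size=(n, p))
Z = linkage(biomarkers, method="ward")            # unsupervised hierarchical clustering
cluster = fcluster(Z, t=3, criterion="maxclust")  # three clusters, as in the study

df = pd.DataFrame({"cluster3": (cluster == 3).astype(int),  # Cluster 3 vs rest
                   "age": rng.normal(55, 12, n),            # illustrative adjuster
                   "time": rng.exponential(5, n),
                   "kidney_failure": rng.binomial(1, 0.3, n)})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="kidney_failure")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```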

6.
Ann Epidemiol ; 80: 62-68.e3, 2023 04.
Article in English | MEDLINE | ID: mdl-36822278

ABSTRACT

PURPOSE: When studying health risks across a large geographic region such as a state or province, researchers often assume that finer-resolution data on health outcomes and risk factors will improve inferences by avoiding ecological bias and other issues associated with geographic aggregation. However, coarser-resolution data (e.g., at the town or county-level) are more commonly publicly available and packaged for easier access, allowing for rapid analyses. The advantages and limitations of using finer-resolution data, which may improve precision at the cost of time spent gaining access and processing data, have not been considered in detail to date. METHODS: We systematically examine the implications of conducting town-level mixed-effect regression analyses versus census-tract-level analyses to study sociodemographic predictors of COVID-19 in Massachusetts. In a series of negative binomial regressions, we vary the spatial resolution of the outcome, the resolution of variable selection, and the resolution of the random effect to allow for more direct comparison across models. RESULTS: We find stability in some estimates across scenarios, changes in magnitude, direction, and significance in others, and tighter confidence intervals on the census-tract level. Conclusions regarding sociodemographic predictors are robust when regions of high concentration remain consistent across town and census-tract resolutions. CONCLUSIONS: Inferences about high-risk populations may be misleading if derived from town- or county-resolution data, especially for covariates that capture small subgroups (e.g., small racial minority populations) or are geographically concentrated or skewed (e.g., % college students). Our analysis can help inform more rapid and efficient use of public health data by identifying when finer-resolution data are truly most informative, or when coarser-resolution data may be misleading.
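The resolution comparison can be made concrete: fit the same count model to tract-level data and to the same data aggregated to towns. Everything below is synthetic, and the mixed effects are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
tracts = pd.DataFrame({
    "town": rng.integers(0, 100, 1500),           # ~15 tracts per town
    "pct_students": rng.beta(1, 20, 1500),        # geographically skewed covariate
    "pop": rng.integers(1000, 6000, 1500)})
tracts["cases"] = rng.poisson(np.exp(-5 + 3 * tracts.pct_students) * tracts["pop"])

def fit_irr(df):
    X = sm.add_constant(df[["pct_students"]])
    res = sm.GLM(df.cases, X, family=sm.families.NegativeBinomial(alpha=1.0),
                 offset=np.log(df["pop"])).fit()
    return np.exp(res.params["pct_students"])

towns = tracts.groupby("town").agg(cases=("cases", "sum"), pop=("pop", "sum"),
                                   pct_students=("pct_students", "mean")).reset_index()
print("tract-level IRR:", fit_irr(tracts))        # aggregation can shift the estimate
print("town-level IRR:", fit_irr(towns))
```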


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Massachusetts/epidemiology , Risk Factors , Students , Regression Analysis
7.
bioRxiv ; 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36711818

ABSTRACT

Rationale: Many blood-based transcriptional gene signatures for tuberculosis (TB) have been developed with potential use to diagnose disease, predict risk of progression from infection to disease, and monitor TB treatment outcomes. However, an unresolved issue is whether gene set enrichment analysis (GSEA) of the signature transcripts alone is sufficient for prediction and differentiation, or whether it is necessary to use the original statistical model created when the signature was derived. Intra-method comparison is complicated by the unavailability of original training data, missing details about the original trained model, and inadequate publicly-available software tools or source code implementing models. To facilitate these signatures' replicability and appropriate utilization in TB research, comprehensive comparisons between gene set scoring methods with cross-data validation of original model implementations are needed. Objectives: We compared the performance of 19 TB gene signatures across 24 transcriptomic datasets using both rebuilt original models and gene set scoring methods to evaluate whether gene set scoring is a reasonable proxy for the performance of the original trained model. We have provided an open-access software implementation of the original models for all 19 signatures for future use. Methods: We considered existing gene set scoring and machine learning methods, including ssGSEA, GSVA, PLAGE, Singscore, and Zscore, as alternative approaches to profile gene signature performance. The sample-size-weighted mean area under the curve (AUC) value was computed to measure each signature's performance across datasets. Correlation analysis and Wilcoxon paired tests were used to analyze the performance of enrichment methods with the original models. Measurements and Main Results: For many signatures, the predictions from gene set scoring methods were highly correlated and statistically equivalent to the results given by the original diagnostic models. PLAGE outperformed all other gene scoring methods. In some cases, PLAGE outperformed the original models when considering signatures' weighted mean AUC values and the AUC results within individual studies. Conclusion: Gene set enrichment scoring of existing blood-based biomarker gene sets can distinguish patients with active TB disease from latent TB infection and other clinical conditions with equivalent or improved accuracy compared to the original methods and models. These data justify using gene set scoring methods of published TB gene signatures for predicting TB risk and treatment outcomes, especially when original models are difficult to apply or implement.
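As one concrete instance of gene set scoring, a Zscore-style single-sample score (the simplest of the methods compared above) with AUC evaluation; the expression matrix, labels, and 16-gene signature are synthetic placeholders, not any published signature.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
expr = rng.normal(size=(200, 5000))               # samples x genes, one dataset
labels = rng.binomial(1, 0.4, 200)                # 1 = active TB (synthetic)
signature = rng.choice(5000, 16, replace=False)   # stand-in for a 16-gene signature

z = (expr - expr.mean(axis=0)) / expr.std(axis=0) # gene-wise standardization
scores = z[:, signature].mean(axis=1)             # Zscore-style per-sample set score
print("AUC:", roc_auc_score(labels, scores))      # feeds the weighted-mean-AUC comparison
```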

8.
Bioinformatics ; 39(1)2023 01 01.
Article in English | MEDLINE | ID: mdl-36576001

ABSTRACT

MOTIVATION: In the training of predictive models using high-dimensional genomic data, multiple studies' worth of data are often combined to increase sample size and improve generalizability. A drawback of this approach is that there may be different sets of features measured in each study due to variations in expression measurement platform or technology. It is common practice to work only with the intersection of features measured in common across all studies, which results in the blind discarding of potentially useful feature information that is measured in individual studies or subsets of studies. RESULTS: We characterize the loss in predictive performance incurred by using only the intersection of feature information available across all studies when training predictors using gene expression data from microarray and sequencing datasets. We study the properties of linear and polynomial regression for imputing discarded features and demonstrate improvements in the external performance of prediction functions through simulation and in gene expression data collected on breast cancer patients. To improve this process, we propose a pairwise strategy that applies any imputation algorithm to two studies at a time and averages imputed features across pairs. We demonstrate that the pairwise strategy is preferable to first merging all datasets together and imputing any resulting missing features. Finally, we provide insights on which subsets of intersected and study-specific features should be used so that missing-feature imputation best promotes cross-study replicability. AVAILABILITY AND IMPLEMENTATION: The code is available at https://github.com/YujieWuu/Pairwise_imputation. SUPPLEMENTARY INFORMATION: Supplementary information is available at Bioinformatics online.
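A hedged sketch of the pairwise strategy: impute a study-specific missing feature from each donor study separately, then average across pairs. Study names, dimensions, and the linear imputer are invented for illustration; the authors' own code is at the GitHub link above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
# Three studies measuring overlapping feature sets; study C lacks feature 9.
studies = {k: rng.normal(size=(100, 10)) for k in "ABC"}
target_study, missing_col = "C", 9
shared = list(range(9))                           # features measured in all studies

imputed = []
for donor in "AB":                                # pair the target with each donor study
    reg = LinearRegression().fit(studies[donor][:, shared],
                                 studies[donor][:, missing_col])
    imputed.append(reg.predict(studies[target_study][:, shared]))

# Average the per-pair imputations rather than merging all studies first
studies[target_study][:, missing_col] = np.mean(imputed, axis=0)
```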


Subject(s)
Algorithms , Genomics , Humans , Sample Size , Genome , Computer Simulation
9.
Biostatistics ; 24(3): 635-652, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-34893807

ABSTRACT

Nonignorable technical variation is commonly observed across data from multiple experimental runs, platforms, or studies. These so-called batch effects can lead to difficulty in merging data from multiple sources, as they can severely bias the outcome of the analysis. Many groups have developed approaches for removing batch effects from data, usually by accommodating batch variables into the analysis (one-step correction) or by preprocessing the data prior to the formal or final analysis (two-step correction). One-step correction is often desirable due to its simplicity, but its flexibility is limited and it can be difficult to include batch variables uniformly when an analysis has multiple stages. Two-step correction allows for richer models of batch mean and variance. However, prior investigation has indicated that two-step correction can lead to incorrect statistical inference in downstream analysis. Generally speaking, two-step approaches introduce a correlation structure in the corrected data, which, if ignored, may lead to either exaggerated or diminished significance in downstream applications such as differential expression analysis. Here, we provide more intuitive and more formal evaluations of the impacts of two-step batch correction compared to existing literature. We demonstrate that the undesired impacts of two-step correction (exaggerated or diminished significance) depend on both the nature of the study design and the batch effects. We also provide strategies for overcoming these negative impacts in downstream analyses using the estimated correlation matrix of the corrected data. We compare the results of our proposed workflow with the results from other published one-step and two-step methods and show that our methods lead to more consistent false discovery controls and power of detection across a variety of batch effect scenarios. Software for our method is available through GitHub (https://github.com/jtleek/sva-devel) and will be available in future versions of the sva R package in the Bioconductor project (https://bioconductor.org/packages/release/bioc/html/sva.html).
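The induced-correlation point can be demonstrated directly: per-batch centering forces corrected values to sum to (approximately) zero within each batch, which ties the samples together. This toy two-step correction is illustrative only, not the sva implementation.

```python
import numpy as np

rng = np.random.default_rng(8)
n_per_batch, n_genes = 50, 200
batch = np.repeat([0, 1], n_per_batch)
expr = rng.normal(size=(2 * n_per_batch, n_genes)) + 1.5 * batch[:, None]  # additive shift

# Two-step correction: remove per-batch location/scale gene by gene, then
# hand the "clean" matrix to a downstream analysis that assumes independence.
corrected = expr.copy()
for b in (0, 1):
    idx = batch == b
    corrected[idx] = (expr[idx] - expr[idx].mean(axis=0)) / expr[idx].std(axis=0)

# Centering makes within-batch samples negatively correlated (~ -1/(n-1) instead of 0)
pairwise = np.corrcoef(corrected[batch == 0])     # sample-by-sample correlation across genes
off_diag = pairwise[np.triu_indices(n_per_batch, k=1)]
print("mean within-batch sample correlation:", off_diag.mean())
```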


Subject(s)
Gene Expression , Humans , Phylogeny , Research Design
10.
J Racial Ethn Health Disparities ; 10(4): 2071-2080, 2023 08.
Article in English | MEDLINE | ID: mdl-36056195

ABSTRACT

Infectious disease surveillance frequently lacks complete information on race and ethnicity, making it difficult to identify health inequities. Greater awareness of this issue has occurred due to the COVID-19 pandemic, during which inequities in cases, hospitalizations, and deaths were reported but with evidence of substantial missing demographic details. Although the problem of missing race and ethnicity data in COVID-19 cases has been well documented, neither its spatiotemporal variation nor its particular drivers have been characterized. Using individual-level data on confirmed COVID-19 cases in Massachusetts from March 2020 to February 2021, we show how missing race and ethnicity data: (1) varied over time, appearing to increase sharply during two different periods of rapid case growth; (2) differed substantially between towns, indicating a nonrandom distribution; and (3) was associated significantly with several individual- and town-level characteristics in a mixed-effects regression model, suggesting a combination of personal and infrastructural drivers of missing data that persisted despite state and federal data-collection mandates. We discuss how a variety of factors may contribute to persistent missing data but could potentially be mitigated in future contexts.
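One way to quantify drivers of missingness, sketched below: regress an indicator of missing race/ethnicity on case-level and town-level covariates. The random effects are omitted and every variable is a synthetic assumption, not the study's linked data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 4000
df = pd.DataFrame({"week": rng.integers(0, 50, n),
                   "town_testing_capacity": rng.uniform(0, 1, n)})
df["surge"] = df.week.between(3, 8) | df.week.between(38, 45)  # two case-growth waves
p_missing = 0.2 + 0.25 * df.surge - 0.15 * df.town_testing_capacity
df["race_missing"] = rng.binomial(1, p_missing)   # 1 = race/ethnicity not recorded

X = sm.add_constant(df[["surge", "town_testing_capacity"]].astype(float))
res = sm.Logit(df.race_missing, X).fit(disp=0)
print(np.exp(res.params))                         # odds ratios for missingness
```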


Subject(s)
COVID-19 , Ethnicity , Humans , Pandemics , Racial Groups , Massachusetts/epidemiology
11.
Ann Appl Stat ; 16(4): 2145-2165, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36274786

ABSTRACT

We propose the "study strap ensemble", which combines advantages of two common approaches to fitting prediction models when multiple training datasets ("studies") are available: pooling studies and fitting one model versus averaging predictions from multiple models each fit to individual studies. The study strap ensemble fits models to bootstrapped datasets, or "pseudo-studies." These are generated by resampling from multiple studies with a hierarchical resampling scheme that generalizes the randomized cluster bootstrap. The study strap is controlled by a tuning parameter that determines the proportion of observations to draw from each study. When the parameter is set to its lowest value, each pseudo-study is resampled from only a single study. When it is high, the study strap ignores the multi-study structure and generates pseudo-studies by merging the datasets and drawing observations like a standard bootstrap. We empirically show the optimal tuning value often lies in between, and prove that special cases of the study strap draw the merged dataset and the set of original studies as pseudo-studies. We extend the study strap approach with an ensemble weighting scheme that utilizes information in the distribution of the covariates of the test dataset. Our work is motivated by neuroscience experiments using real-time neurochemical sensing during awake behavior in humans. Current techniques to perform this kind of research require measurements from an electrode placed in the brain during awake neurosurgery and rely on prediction models to estimate neurotransmitter concentrations from the electrical measurements recorded by the electrode. These models are trained by combining multiple datasets that are collected in vitro under heterogeneous conditions in order to promote accuracy of the models when applied to data collected in the brain. A prevailing challenge is deciding how to combine studies or ensemble models trained on different studies to enhance model generalizability. Our methods produce marked improvements in simulations and in this application. All methods are available in the studyStrap CRAN package.
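A simplified sketch of the hierarchical resampling behind the study strap; the real method's tuning parameter and covariate-based ensemble weighting are richer than this toy pseudo-study generator, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(10)
studies = [rng.normal(size=(80, 5)) for _ in range(4)]  # four training studies

def pseudo_study(studies, bag_size, n_obs, rng):
    """Draw one pseudo-study: sample studies with replacement (the 'bag'),
    then resample observations from the bagged studies."""
    bag = rng.integers(0, len(studies), bag_size)
    pooled = np.vstack([studies[i] for i in bag])
    rows = rng.integers(0, len(pooled), n_obs)
    return pooled[rows]

# bag_size=1 resamples a single study; a large bag approaches the merged
# dataset, mirroring the two limiting cases described in the abstract.
ps = pseudo_study(studies, bag_size=2, n_obs=80, rng=rng)
print(ps.shape)                                   # one pseudo-study for ensemble fitting
```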

12.
Cureus ; 14(8): e28090, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36134072

ABSTRACT

INTRODUCTION: Endodontic access cavity preparation plays a vital role, as preservation of tooth structure is essential for maintaining a tooth's strength. Because teeth become fragile after root canal therapy, this study was designed to compare in vitro the fracture resistance of root-filled and restored teeth prepared with traditional endodontic access cavity, conservative endodontic access cavity (CEC), ninja endodontic access cavity (NEC), and truss endodontic access cavity (TEC) designs. MATERIALS AND METHODS: Freshly extracted human mandibular molars were assigned to a control (intact teeth) group and to traditional, CEC, NEC, and TEC access cavity groups. Cone beam computed tomography (CBCT) scans were used to verify the CEC, NEC, and TEC preparations. The teeth were then endodontically treated and restored. Specimens were loaded in a universal testing machine, and the maximum load at fracture was recorded. Kolmogorov-Smirnov and Levene tests were used to check the data for normal distribution and homogeneity of variance. RESULTS: Intact teeth showed the highest resistance to fracture of all groups. TEC showed significantly higher resistance to fracture than the CEC design. CONCLUSIONS: Within the limitations of this research, the TEC design enhanced tooth fracture strength in comparison with the CEC design.
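The statistical checks named above translate directly to scipy; the group means and spreads are invented stand-ins for the measured fracture loads, and the ANOVA-style comparison is added for illustration.

```python
import numpy as np
from scipy.stats import kstest, levene, f_oneway

rng = np.random.default_rng(11)
groups = {g: rng.normal(loc, 150, 10) for g, loc in
          [("intact", 2400), ("TEC", 2100), ("CEC", 1750)]}  # fracture loads in N (illustrative)

for name, vals in groups.items():                 # Kolmogorov-Smirnov normality check
    stat, p = kstest((vals - vals.mean()) / vals.std(), "norm")
    print(name, "KS p =", round(p, 3))
print("Levene p =", levene(*groups.values()).pvalue)   # homogeneity of variance
print("ANOVA p =", f_oneway(*groups.values()).pvalue)  # overall group comparison
```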

13.
Environ Sci Technol Lett ; 9(9): 706-711, 2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36118960

ABSTRACT

Mobility reductions following the onset of the COVID-19 pandemic in the United States were higher, and sustained longer, for aviation than for ground transportation activity. We evaluate changes in ultrafine particle (UFP, Dp < 100 nm, a marker of fuel-combustion emissions) concentrations at a site near Logan Airport (Boston, Massachusetts) in relation to mobility reductions. Several years of particle number concentration (PNC) data prepandemic [1/2017-9/2018] and during the state-of-emergency (SOE) phase of the pandemic [4/2020-6/2021] were analyzed to assess the emissions reduction impact on PNC, controlling for season and wind direction. Mean PNC was 48% lower during the first three months of the SOE than prepandemic, consistent with 74% lower flight activity and 39% (local)-51% (highway) lower traffic volume. Traffic volume and mean PNC for all wind directions returned to prepandemic levels by 6/2021; however, when the site was downwind from Logan Airport, PNC remained lower than prepandemic levels (by 23%), consistent with lower-than-normal flight activity (44% below prepandemic levels). Our study shows the effect of pandemic-related mobility changes on PNC in a near-airport community, and it distinguishes aviation-related and ground transportation source contributions.

14.
Cureus ; 14(5): e25126, 2022 May.
Article in English | MEDLINE | ID: mdl-35733474

ABSTRACT

Endodontists face a major challenge when dealing with perforating internal resorption, an uncommon condition in permanent teeth. Success in treating a resorbed root can only be achieved if the lesion is properly diagnosed, the resorptive tissue removed, and the root treated. Cone-beam computed tomography (CBCT) was used to locate the resorptive lesion and assess its severity. This case report describes the successful surgical treatment of a maxillary canine with significant root perforation owing to internal resorption.

15.
Influenza Other Respir Viruses ; 16(2): 213-221, 2022 03.
Article in English | MEDLINE | ID: mdl-34761531

ABSTRACT

BACKGROUND: The COVID-19 pandemic has highlighted the need for targeted local interventions given substantial heterogeneity within cities and counties. Publicly available case data are typically aggregated to the city or county level to protect patient privacy, but more granular data are necessary to identify and act upon community-level risk factors that can change over time. METHODS: Individual COVID-19 case and mortality data from Massachusetts were geocoded to residential addresses and aggregated into two time periods: "Phase 1" (March-June 2020) and "Phase 2" (September 2020 to February 2021). Institutional cases associated with long-term care facilities, prisons, or homeless shelters were identified using address data and modeled separately. Census tract sociodemographic and occupational predictors were drawn from the 2015-2019 American Community Survey. We used mixed-effects negative binomial regression to estimate incidence rate ratios (IRRs), accounting for town-level spatial autocorrelation. RESULTS: Case incidence was elevated in census tracts with higher proportions of Black and Latinx residents, with larger associations in Phase 1 than Phase 2. Case incidence associated with proportion of essential workers was similarly elevated in both Phases. Mortality IRRs had differing patterns from case IRRs, decreasing less substantially between Phases for Black and Latinx populations and increasing between Phases for proportion of essential workers. Mortality models excluding institutional cases yielded stronger associations for age, race/ethnicity, and essential worker status. CONCLUSIONS: Geocoded home address data can allow for nuanced analyses of community disease patterns, identification of high-risk subgroups, and exclusion of institutional cases to comprehensively reflect community risk.
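A hedged pandas sketch of two preprocessing steps the abstract describes, assigning cases to Phase 1/Phase 2 and flagging institutional addresses for separate modeling; the date windows come from the abstract, but the keyword list and records are illustrative assumptions.

```python
import numpy as np
import pandas as pd

cases = pd.DataFrame({
    "date": pd.to_datetime(["2020-04-01", "2020-10-15", "2021-01-20"]),
    "address": ["12 Main St", "State Prison, 1 Elm St", "4 Oak Ave"]})

# Assign the two analysis phases from the case date
phase1 = cases.date.between("2020-03-01", "2020-06-30")
phase2 = cases.date.between("2020-09-01", "2021-02-28")
cases["phase"] = np.select([phase1, phase2], [1, 2], default=0)

# Flag institutional addresses (keyword list is a guess) and model them separately
keywords = "prison|shelter|nursing|long-term care"
cases["institutional"] = cases.address.str.contains(keywords, case=False)
community = cases[~cases.institutional & (cases.phase > 0)]
print(community)
```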


Subject(s)
COVID-19 , Health Status Disparities , Humans , Massachusetts/epidemiology , Pandemics , SARS-CoV-2
16.
Int J Drug Policy ; 100: 103534, 2022 02.
Article in English | MEDLINE | ID: mdl-34896932

ABSTRACT

BACKGROUND: People with a history of incarceration are at high risk for opioid overdose. A variety of factors contribute to this elevated risk, though our understanding of these factors remains limited. Research to identify risk and protective factors for overdose is often conducted using administrative data or researcher-derived surveys and without explicit input from people with lived experience. We aimed to understand the scope of U.S. research on factors associated with opioid overdose among previously incarcerated people. We did this by conducting a narrative review of the literature and convening expert panels of people with lived experience. We then categorized these factors using a social determinants of health framework to help contextualize our findings. METHODS: We first conducted a narrative review of the published literature. A search was performed using PubMed and APA PsycInfo. We then convened two expert panels consisting of people with lived experience and people who work with people who were previously incarcerated. Experts were asked to evaluate the literature-derived factors for completeness and add factors that were not identified. Finally, we categorized factors as either intermediary or structural according to the World Health Organization's Social Determinants of Health (SDOH) Framework. RESULTS: We identified 13 papers that met our inclusion criteria for the narrative review. Within these 13 papers, we identified 22 factors relevant to overdose among people with a history of incarceration: 16 were risk factors and six were protective factors. Five of these were structural factors (three risk and two protective) and 17 were intermediary factors (13 risk and four protective). The expert panels identified 21 additional factors, 10 of which were structural (six risk and four protective) and 11 of which were intermediary (eight risk and three protective). CONCLUSION: This narrative review along with expert panels demonstrates a gap in the published literature regarding factors associated with overdose among people who were previously incarcerated. Additionally, this review highlights a substantial gap with regard to the types of factors that are typically identified. Incorporating voices of people with lived experience is crucial to our understanding of overdose in this at-risk population.


Subject(s)
Drug Overdose , Opiate Overdose , Opioid-Related Disorders , Prisoners , Analgesics, Opioid/therapeutic use , Drug Overdose/drug therapy , Drug Overdose/epidemiology , Humans , Opiate Overdose/epidemiology , Opioid-Related Disorders/drug therapy , Opioid-Related Disorders/epidemiology
17.
BMC Infect Dis ; 21(1): 686, 2021 Jul 16.
Article in English | MEDLINE | ID: mdl-34271870

ABSTRACT

BACKGROUND: Associations between community-level risk factors and COVID-19 incidence have been used to identify vulnerable subpopulations and target interventions, but the variability of these associations over time remains largely unknown. We evaluated variability in the associations between community-level predictors and COVID-19 case incidence in 351 cities and towns in Massachusetts from March to October 2020. METHODS: Using publicly available sociodemographic, occupational, environmental, and mobility datasets, we developed mixed-effect, adjusted Poisson regression models to depict associations between these variables and town-level COVID-19 case incidence data across five distinct time periods from March to October 2020. We examined town-level demographic variables, including population proportions by race, ethnicity, and age, as well as factors related to occupation, housing density, economic vulnerability, air pollution (PM2.5), and institutional facilities. We calculated incidence rate ratios (IRR) associated with these predictors and compared these values across the multiple time periods to assess variability in the observed associations over time. RESULTS: Associations between key predictor variables and town-level incidence varied across the five time periods. We observed a reduction over time in the association between the percentage of Black residents and COVID-19 incidence (IRR = 1.12 [95%CI: 1.12-1.13] in early spring, IRR = 1.01 [95%CI: 1.00-1.01] in early fall). The association with number of long-term care facility beds per capita also decreased over time (IRR = 1.28 [95%CI: 1.26-1.31] in spring, IRR = 1.07 [95%CI: 1.05-1.09] in fall). Controlling for other factors, towns with higher percentages of essential workers experienced elevated incidence of COVID-19 throughout the pandemic (e.g., IRR = 1.30 [95%CI: 1.27-1.33] in spring, IRR = 1.20 [95%CI: 1.17-1.22] in fall). Towns with higher proportions of Latinx residents also had sustained elevated incidence over time (IRR = 1.19 [95%CI: 1.18-1.21] in spring, IRR = 1.14 [95%CI: 1.13-1.15] in fall). CONCLUSIONS: Town-level COVID-19 risk factors varied with time in this study. In Massachusetts, racial (but not ethnic) disparities in COVID-19 incidence may have decreased across the first 8 months of the pandemic, perhaps indicating greater success in risk mitigation in selected communities. Our approach can be used to evaluate effectiveness of public health interventions and target specific mitigation efforts on the community level.
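To make the per-period IRRs concrete, a sketch fitting one Poisson model per time period and exponentiating coefficients; the 351 towns match the study, but all values are simulated (the attenuation is built in) and the mixed effects are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(12)
towns = pd.DataFrame({
    "period": np.repeat(["spring", "fall"], 351),
    "pct_essential": np.tile(rng.uniform(0, 0.5, 351), 2),
    "pop": np.tile(rng.integers(5000, 80000, 351), 2)})
beta = {"spring": 2.0, "fall": 1.4}               # simulated attenuation over time
towns["cases"] = rng.poisson(
    np.exp(-6 + towns.period.map(beta) * towns.pct_essential) * towns["pop"])

for period, d in towns.groupby("period"):         # one model per time period
    X = sm.add_constant(d[["pct_essential"]])
    res = sm.GLM(d.cases, X, family=sm.families.Poisson(),
                 offset=np.log(d["pop"])).fit()
    irr = np.exp(res.params["pct_essential"])
    lo, hi = np.exp(res.conf_int().loc["pct_essential"])
    print(f"{period}: IRR = {irr:.2f} [{lo:.2f}, {hi:.2f}]")
```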


Subject(s)
COVID-19/epidemiology , Occupations/statistics & numerical data , Social Environment , Transportation/statistics & numerical data , Adult , Aged , Aged, 80 and over , COVID-19/ethnology , Ethnicity/statistics & numerical data , Female , Health Status Disparities , Humans , Incidence , Income/statistics & numerical data , Male , Massachusetts/epidemiology , Middle Aged , Movement/physiology , Pandemics , Residence Characteristics/statistics & numerical data , Risk Factors , SARS-CoV-2/physiology , Socioeconomic Factors , Time Factors , Vulnerable Populations/ethnology , Vulnerable Populations/statistics & numerical data , Young Adult
18.
AIDS Patient Care STDS ; 35(7): 271-277, 2021 07.
Article in English | MEDLINE | ID: mdl-34242092

ABSTRACT

Retention in HIV pre-exposure prophylaxis (PrEP) care is critical for effective PrEP implementation. Few studies have reported long-term lost to follow-up (LTFU) and re-engagement in PrEP care in the United States. Medical record data for all cisgender patients presenting to the major Rhode Island PrEP clinic from 2013 to 2019 were included. LTFU was defined as no PrEP follow-up appointment within 98 days. Re-engagement in care was defined as individuals who were ever LTFU and later attended a follow-up appointment. Recurrent event survival analysis was performed to explore factors associated with PrEP retention over time. Of 654 PrEP patients, the median age was 31 years [interquartile range (IQR): 25, 43]. The majority were male (96%), White (64%), non-Hispanic (82%), and insured (97%). Overall, 72% of patients were ever LTFU and 27% of those ever LTFU re-engaged in care. Female patients were 1.37 times [crude hazard ratio (cHR): 1.37; 95% confidence interval (CI): 0.86-2.18] more likely to be LTFU than male patients, and a 1-year increase in age was associated with a 1% lower hazard of being LTFU (cHR: 0.99; CI: 0.98-0.99). Being either heterosexual (aHR: 2.25, 95% CI: 1.70-2.99) or bisexual (aHR: 2.35, 95% CI: 1.15-4.82) was associated with a higher hazard of loss to follow-up compared with having same-sex partners only. The majority of PrEP users were LTFU, especially in the first 6 months after PrEP initiation. Although a significant number re-engaged in care, targeted interventions are needed to improve retention in PrEP care. This study characterized the natural projection of loss to follow-up and re-engagement in HIV PrEP care using longitudinal clinic cohort data and explored associated factors for guiding future interventions to improve retention in PrEP care.
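The 98-day LTFU definition reduces to a gap computation on appointment dates; a minimal pandas sketch with invented visit records (the survival modeling itself is not reproduced).

```python
import pandas as pd

visits = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2018-01-05", "2018-04-01", "2018-12-01",
                            "2019-02-10", "2019-05-01"])}).sort_values(["patient", "date"])

gap_days = visits.groupby("patient")["date"].diff().dt.days
visits["after_ltfu_gap"] = gap_days > 98          # no follow-up within 98 days = LTFU
# Any visit that follows a >98-day gap marks re-engagement in care
ever_ltfu = visits.groupby("patient")["after_ltfu_gap"].any()
print(ever_ltfu)                                  # patient 1 was LTFU then re-engaged
```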


Subject(s)
Anti-HIV Agents , HIV Infections , Pre-Exposure Prophylaxis , Sexual and Gender Minorities , Adult , Anti-HIV Agents/therapeutic use , Female , Follow-Up Studies , HIV Infections/drug therapy , HIV Infections/prevention & control , Humans , Lost to Follow-Up , Male , United States
19.
Res Sq ; 2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33619475

ABSTRACT

BACKGROUND: Associations between community-level risk factors and COVID-19 incidence are used to identify vulnerable subpopulations and target interventions, but the variability of these associations over time remains largely unknown. We evaluated variability in the associations between community-level predictors and COVID-19 case incidence in 351 cities and towns in Massachusetts from March to October 2020. METHODS: Using publicly available sociodemographic, occupational, environmental, and mobility datasets, we developed mixed-effect, adjusted Poisson regression models to depict associations between these variables and town-level COVID-19 case incidence data across five distinct time periods. We examined town-level demographic variables, including z-scores of the percentages of Black residents, Latinx residents, residents over 80 years, and undergraduate students, as well as factors related to occupation, housing density, economic vulnerability, air pollution (PM2.5), and institutional facilities. RESULTS: Associations between key predictor variables and town-level incidence varied across the five time periods. We observed reductions over time in the association with percentage of Black residents (IRR = 1.12 [95% CI: 1.12-1.13] in spring, IRR = 1.01 [95% CI: 1.00-1.01] in fall). The association with number of long-term care facility beds per capita also decreased over time (IRR = 1.28 [95% CI: 1.26-1.31] in spring, IRR = 1.07 [95% CI: 1.05-1.09] in fall). Controlling for other factors, towns with higher percentages of essential workers experienced elevated incidence of COVID-19 throughout the pandemic (e.g., IRR = 1.30 [95% CI: 1.27-1.33] in spring, IRR = 1.20 [95% CI: 1.17-1.22] in fall). Towns with higher percentages of Latinx residents also had sustained elevated incidence over time (e.g., IRR = 1.19 [95% CI: 1.18-1.21] in spring, IRR = 1.14 [95% CI: 1.13-1.15] in fall). CONCLUSIONS: Town-level COVID-19 risk factors vary with time. In Massachusetts, racial (but not ethnic) disparities in COVID-19 incidence have decreased over time, perhaps indicating greater success in risk mitigation in selected communities. Our approach can be used to evaluate effectiveness of public health interventions and target specific mitigation efforts on the community level.

20.
Surg Endosc ; 35(1): 182-191, 2021 01.
Article in English | MEDLINE | ID: mdl-31953733

ABSTRACT

BACKGROUND: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), to conventional models using logistic regression (LR) in predicting leak and VTE after bariatric surgery. METHODS: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model. RESULTS: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65, p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001). CONCLUSIONS: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that ML has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
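A scaled-down sketch of the three-way comparison; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the features, event rate, and sample size are simulated assumptions rather than registry data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(13)
X = rng.normal(size=(20000, 12))                  # preoperative features (synthetic)
p = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1] ** 2 - 5)))
y = rng.binomial(1, p)                            # rare complication, ~1-2% of cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {"LR": LogisticRegression(max_iter=1000),
          "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0),
          "XGB-like": GradientBoostingClassifier(random_state=0)}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])  # test-set AUC per model
    print(f"{name}: AUC = {auc:.3f}")
```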


Subject(s)
Anastomotic Leak/etiology , Bariatric Surgery/adverse effects , Machine Learning , Postoperative Complications/etiology , Venous Thromboembolism/etiology , Adult , Cohort Studies , Databases, Factual , Diagnosis, Computer-Assisted , Humans , Logistic Models , Neural Networks, Computer