2.
Pharmacoeconomics ; 42(5): 479-486, 2024 May.
Article in English | MEDLINE | ID: mdl-38583100

ABSTRACT

Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design as it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice for research; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight key differences between the research design approach based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
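The comparator-omission point can be illustrated numerically. Below is a minimal Monte Carlo sketch of the per-patient expected value of perfect information (EVPI) using hypothetical net-benefit distributions for three comparators; dropping one comparator changes the EVPI, and hence any research-priority conclusion drawn from it. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # probabilistic sensitivity analysis draws

# Hypothetical per-patient net monetary benefit (GBP) for three comparators.
nb = np.column_stack([
    rng.normal(0, 500, n),     # standard care
    rng.normal(100, 800, n),   # new treatment A
    rng.normal(150, 1200, n),  # new treatment B
])

def evpi(nb):
    """Per-patient expected value of perfect information:
    E[max_t NB_t] - max_t E[NB_t]."""
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

full = evpi(nb)            # all feasible comparators in the model
reduced = evpi(nb[:, :2])  # comparator B wrongly omitted
print(f"EVPI, all comparators: {full:.0f}; B omitted: {reduced:.0f}")
```

The two EVPI values differ, so a trial sized or prioritised on the reduced model would be mis-specified, exactly the failure mode the abstract describes.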


Subject(s)
Clinical Trials as Topic , Decision Support Techniques , Models, Economic , Research Design , Humans , Clinical Trials as Topic/economics , Clinical Trials as Topic/methods , Cost-Benefit Analysis , Uncertainty , Decision Making
3.
Med Decis Making ; 44(4): 393-404, 2024 May.
Article in English | MEDLINE | ID: mdl-38584481

ABSTRACT

OBJECTIVES: Utility scores associated with preference-based health-related quality-of-life instruments such as the EQ-5D-3L are reported as point estimates. In this study, we develop methods for capturing the uncertainty associated with the valuation study of the UK EQ-5D-3L that arises from the variability inherent in the underlying data, which is tacitly ignored by point estimates. We derive a new tariff that properly accounts for this and assigns a specific closed-form distribution to the utility of each of the 243 health states of the EQ-5D-3L. METHODS: Using the UK EQ-5D-3L valuation study, we used a Bayesian approach to obtain the posterior distributions of the derived utility scores. We constructed a hierarchical model that accounts for model misspecification and the responses of the survey participants to obtain Markov chain Monte Carlo (MCMC) samples from the posteriors. The posterior distributions were approximated by mixtures of normal distributions under the Kullback-Leibler (KL) divergence as the criterion for the assessment of the approximation. We considered the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to estimate the parameters of the mixture distributions. RESULTS: We derived an MCMC sample of total size 4,000 × 243. No evidence of nonconvergence was found. Our model was robust to changes in priors and starting values. The posterior utility distributions of the EQ-5D-3L states were summarized as 3-component mixtures of normal distributions, and the corresponding KL divergence values were low. CONCLUSIONS: Our method accounts for layers of uncertainty in valuation studies, which are otherwise ignored. Our techniques can be applied to other instruments and countries' populations. 
HIGHLIGHTS: Guidelines for health technology assessments typically require that uncertainty be accounted for in economic evaluations, but the parameter uncertainty of the regression model used in the valuation study of the health instrument is often tacitly ignored. We consider the UK valuation study of the EQ-5D-3L and construct a Bayesian model that accounts for layers of uncertainty that would otherwise be disregarded, and we derive closed-form utility distributions. The derived tariff can be used by researchers in economic evaluations, as it allows analysts to directly sample a utility value from its corresponding distribution, which reflects the associated uncertainty of the utility score.
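As a rough sketch of the approximation step, the code below fits a two-component normal mixture to stand-in posterior draws by maximising the sample log-likelihood with BFGS, which is equivalent (up to a constant) to minimising the KL divergence from the empirical distribution. The draws, starting values, and two-component form are illustrative assumptions, not the paper's actual three-component tariff.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
# Stand-in for MCMC draws of one health state's utility (hypothetical values).
draws = np.concatenate([rng.normal(0.60, 0.03, 2500),
                        rng.normal(0.68, 0.02, 1500)])

def neg_loglik(theta):
    # Unconstrained parameterisation: logit weight, means, log standard deviations.
    w = 1 / (1 + np.exp(-theta[0]))
    m1, m2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])
    dens = w * norm.pdf(draws, m1, s1) + (1 - w) * norm.pdf(draws, m2, s2)
    return -np.log(dens + 1e-300).sum()

x0 = np.array([0.0, 0.58, 0.70, -3.0, -3.0])
res = minimize(neg_loglik, x0, method="BFGS")
weight = 1 / (1 + np.exp(-res.x[0]))   # fitted mixture weight of component 1
means = res.x[1:3]                     # fitted component means
```

The resulting closed-form mixture can then be sampled directly in an economic evaluation instead of using a single point-estimate utility.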


Subject(s)
Bayes Theorem , Health Status , Markov Chains , Monte Carlo Method , Quality of Life , Humans , Uncertainty , Quality of Life/psychology , Surveys and Questionnaires , United Kingdom , Quality-Adjusted Life Years
4.
Lancet Psychiatry ; 11(3): 183-192, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38360023

ABSTRACT

BACKGROUND: In 2012, the UK Government announced a series of immigration policy reforms known as the hostile environment policy, culminating in the Windrush scandal. We aimed to investigate the effect of the hostile environment policy on mental health for people from minoritised ethnic backgrounds. We hypothesised that people from Black Caribbean backgrounds would have worse mental health relative to people from White ethnic backgrounds after the Immigration Act 2014 and the Windrush scandal media coverage in 2017, since they were particularly targeted. METHODS: Using data from the UK Household Longitudinal Study, we performed a Bayesian interrupted time series analysis, accounting for fixed effects of confounders (sex, age, urbanicity, relationship status, number of children, education, physical or mental health impairment, housing, deprivation, employment, place of birth, income, and time), and random effects for residual temporal and spatial variation. We measured mental ill health using a widely used, self-administered questionnaire on psychological distress, the 12-item General Health Questionnaire (GHQ-12). We compared mean differences (MDs) and 95% credible intervals (CrIs) in mental ill health among people from minoritised ethnic groups (Black Caribbean, Black African, Indian, Bangladeshi, and Pakistani) relative to people of White ethnicity during three time periods: before the Immigration Act 2014, after the Immigration Act 2014, and after the start of the Windrush scandal media coverage in 2017. FINDINGS: We included 58 087 participants with a mean age of 45·0 years (SD 34·6; range 16-106), including 31 168 (53·6%) female and 26 919 (46·3%) male participants. The cohort consisted of individuals from the following ethnic backgrounds: 2519 (4·3%) Black African, 2197 (3·8%) Black Caribbean, 3153 (5·4%) Indian, 1584 (2·7%) Bangladeshi, 2801 (4·8%) Pakistani, and 45 833 (78·9%) White. 
People from Black Caribbean backgrounds had worse mental health than people of White ethnicity after the Immigration Act 2014 (MD in GHQ-12 score 0·67 [95% CrI 0·06-1·28]) and after the 2017 media coverage (1·28 [0·34-2·21]). For Black Caribbean participants born outside of the UK, mental health worsened after the Immigration Act 2014 (1·25 [0·11-2·38]), and for those born in the UK, mental health worsened after the 2017 media coverage (2·00 [0·84-3·15]). We did not observe effects in other minoritised ethnic groups. INTERPRETATION: Our finding that the hostile environment policy worsened the mental health of people from Black Caribbean backgrounds in the UK suggests that sufficient, appropriate mental health and social welfare support should be provided to those affected. Impact assessments of new policies on minority mental health should be embedded in all policy making. FUNDING: Wellcome Trust.


Subject(s)
Ethnicity , Mental Health , Child , Humans , Male , Female , Middle Aged , Longitudinal Studies , Bayes Theorem , Interrupted Time Series Analysis , England , Emigration and Immigration
5.
BMC Med Res Methodol ; 24(1): 32, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341552

ABSTRACT

BACKGROUND: When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and generate a covariate-adjusted estimate of the marginal treatment effect. METHODS: The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. RESULTS: We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to those of the standard approach to model-based standardization.
CONCLUSION: We demonstrate that multiple imputation can be used to marginalize over a target covariate distribution, providing appropriate inference with a correctly specified parametric outcome model and offering statistical performance comparable to that of the standard approach to model-based standardization.
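To make the marginalization step concrete, here is a minimal sketch of model-based standardization with a logistic outcome model on simulated data. The data, effect sizes, and hand-rolled Newton fit are illustrative assumptions; MIM itself adds a synthetic-data generation stage and a Bayesian analysis stage on top of this basic idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                       # continuous covariate
t = rng.integers(0, 2, n)                    # randomised binary treatment
p_true = 1 / (1 + np.exp(-(-0.5 + 1.0 * t + 0.8 * x)))
y = rng.binomial(1, p_true)                  # binary outcome

def fit_logistic(X, y, iters=25):
    """Newton-Raphson maximum-likelihood fit for a logistic GLM."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

X = np.column_stack([np.ones(n), t, x])
beta = fit_logistic(X, y)

# Standardise: average model predictions over the target covariate
# distribution under each treatment, then contrast on the marginal scale.
sigmoid = lambda z: 1 / (1 + np.exp(-z))
p1 = sigmoid(np.column_stack([np.ones(n), np.ones(n), x]) @ beta).mean()
p0 = sigmoid(np.column_stack([np.ones(n), np.zeros(n), x]) @ beta).mean()
marginal_log_or = np.log(p1 * (1 - p0) / (p0 * (1 - p1)))
```

Note that the marginal log odds ratio is smaller in magnitude than the conditional coefficient `beta[1]`, because the odds ratio is non-collapsible; this is why the marginal and conditional estimands must not be conflated.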


Subject(s)
Models, Statistical , Humans , Bayes Theorem , Linear Models , Computer Simulation , Logistic Models , Reference Standards
6.
Stat Methods Med Res ; 32(10): 1994-2015, 2023 10.
Article in English | MEDLINE | ID: mdl-37590094

ABSTRACT

In recent years regression discontinuity designs have been used increasingly for the estimation of treatment effects in observational medical data where a rule-based decision to apply treatment is taken using a continuous assignment variable. Most regression discontinuity design applications have focused on effect estimation where the outcome of interest is continuous, with scenarios with binary outcomes receiving less attention, despite their ubiquity in medical studies. In this work, we develop an approach to estimation of the risk ratio in a fuzzy regression discontinuity design (where treatment is not always strictly applied according to the decision rule), derived using common regression discontinuity design assumptions. This method compares favourably to other risk ratio estimation approaches: the established Wald estimator and a risk ratio estimate from a multiplicative structural mean model, with promising results from extensive simulation studies. A demonstration and further comparison are made using a real example to evaluate the effect of statins (where a statin prescription is made based on a patient's 10-year cardiovascular disease risk score) on low-density lipoprotein cholesterol reduction in UK Primary Care.


Subject(s)
Hydroxymethylglutaryl-CoA Reductase Inhibitors , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Odds Ratio , Cholesterol , Primary Health Care , United Kingdom
7.
Res Synth Methods ; 14(4): 652-658, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37287211

ABSTRACT

We examine four important considerations in the development of covariate adjustment methodologies for indirect treatment comparisons. First, we consider potential advantages of weighting versus outcome modeling, placing focus on bias-robustness. Second, we outline why model-based extrapolation may be required and useful, in the specific context of indirect treatment comparisons with limited overlap. Third, we describe challenges for covariate adjustment based on data-adaptive outcome modeling. Finally, we offer further perspectives on the promise of doubly robust covariate adjustment frameworks.


Subject(s)
Bias
8.
Med Decis Making ; 43(5): 610-620, 2023 07.
Article in English | MEDLINE | ID: mdl-37125724

ABSTRACT

BACKGROUND: External evidence is commonly used to inform survival modeling for health technology assessment (HTA). While a range of methodological approaches has been proposed, it is unclear which methods could be used and how they compare. PURPOSE: This review aims to identify, describe, and categorize established methods to incorporate external evidence into survival extrapolation for HTA. DATA SOURCES: Embase, MEDLINE, EconLit, and Web of Science databases were searched to identify published methodological studies, supplemented by hand searching and citation tracking. STUDY SELECTION: Eligible studies were required to present a novel extrapolation approach incorporating external evidence (i.e., data or information) within survival model estimation. DATA EXTRACTION: Studies were classified according to how the external evidence was integrated as a part of model fitting. Information was extracted concerning the model-fitting process, key requirements, assumptions, software, application contexts, and presentation of comparisons with, or validation against, other methods. DATA SYNTHESIS: Across 18 methods identified from 22 studies, themes included use of informative prior(s) (n = 5), piecewise (n = 7), and general population adjustment (n = 9), plus a variety of "other" (n = 8) approaches. Most methods were applied in cancer populations (n = 13). No studies compared or validated their method against another method that also incorporated external evidence. LIMITATIONS: As only studies with a specific methodological objective were included, methods proposed as part of another study type (e.g., an economic evaluation) were excluded from this review. CONCLUSIONS: Several methods were identified in this review, with common themes based on typical data sources and analytical approaches.
Of note, no evidence was found comparing the identified methods to one another, and so an assessment of different methods would be a useful area for further research. HIGHLIGHTS: This review aims to identify methods that have been used to incorporate external evidence into survival extrapolations, focusing on those that may be used to inform health technology assessment. We found a range of different approaches, including piecewise methods, Bayesian methods using informative priors, and general population adjustment methods, as well as a variety of "other" approaches. No studies attempted to compare the performance of alternative methods for incorporating external evidence with respect to the accuracy of survival predictions. Further research investigating this would be valuable.


Subject(s)
Neoplasms , Technology Assessment, Biomedical , Humans , Bayes Theorem , Cost-Benefit Analysis
9.
PLoS One ; 18(5): e0286259, 2023.
Article in English | MEDLINE | ID: mdl-37252922

ABSTRACT

BACKGROUND: Schools are high-risk settings for infectious disease transmission. Wastewater monitoring for infectious diseases has been used to identify and mitigate outbreaks in many near-source settings during the COVID-19 pandemic, including universities and hospitals, but less is known about the technology when applied to school health protection. This study aimed to implement a wastewater surveillance system to detect SARS-CoV-2 and other public health markers from wastewater in schools in England. METHODS: A total of 855 wastewater samples were collected from 16 schools (10 primary, 5 secondary and 1 post-16 and further education) over 10 months of school term time. Wastewater was analysed for SARS-CoV-2 genomic copies of N1 and E genes by RT-qPCR. A subset of wastewater samples was sent for genomic sequencing, enabling determination of the presence of SARS-CoV-2 and emergence of variant(s) contributing to COVID-19 infections within schools. In total, >280 microbial pathogens and >1200 AMR genes were screened using RT-qPCR and metagenomics to consider the utility of these additional targets to further inform on health threats within the schools. RESULTS: We report on wastewater-based surveillance for COVID-19 within English primary, secondary and further education schools over a full academic year (October 2020 to July 2021). The highest positivity rate (80.4%) was observed in the week commencing 30th November 2020 during the emergence of the Alpha variant, indicating most schools contained people who were shedding the virus. High SARS-CoV-2 amplicon concentrations (up to 9.2×10⁶ GC/L) were detected over the summer term (8th June - 6th July 2021) during Delta variant prevalence. The summer increase of SARS-CoV-2 in school wastewater was reflected in age-specific clinical COVID-19 cases. Alpha variant and Delta variant were identified in the wastewater by sequencing of samples collected from December to March and June to July, respectively.
Lead/lag analysis between SARS-CoV-2 concentrations in the school and WWTP data sets shows a maximum correlation between the two time series when school data are lagged by two weeks. Furthermore, wastewater sample enrichment coupled with metagenomic sequencing and rapid informatics enabled the detection of other clinically relevant viral and bacterial pathogens and AMR. CONCLUSIONS: Passive wastewater surveillance in schools can identify cases of COVID-19. Samples can be sequenced to monitor for emerging and current variants of concern at the resolution of school catchments. Wastewater-based monitoring is a useful tool for passive SARS-CoV-2 surveillance and could be applied for case identification, containment, and mitigation in schools and other congregate settings with high transmission risks. Wastewater monitoring enables public health authorities to develop targeted prevention and education programmes for hygiene measures within undertested communities across a broad range of use cases.
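A lead/lag analysis of this kind can be sketched as a lagged-correlation scan. The series below are entirely synthetic, with the "community" series constructed to trail the "school" series by two weeks, so the scan should recover a two-week lag.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 40
# Smoothed synthetic prevalence signal shared by both series.
signal = np.convolve(rng.gamma(2.0, 1.0, weeks + 2), np.ones(3) / 3, mode="valid")
school = signal + rng.normal(0, 0.1, weeks)              # hypothetical school data
wwtp = np.roll(signal, 2) + rng.normal(0, 0.1, weeks)    # community trails by 2 weeks

def lagged_corr(a, b, lag):
    """Pearson correlation of a[t] with b[t + lag]."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return np.corrcoef(a, b)[0, 1]

lags = range(-4, 5)
corrs = [lagged_corr(school, wwtp, k) for k in lags]
best_lag = list(lags)[int(np.argmax(corrs))]   # expected: 2 (weeks)
```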


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , SARS-CoV-2/genetics , Wastewater , Public Health , Pandemics , Wastewater-Based Epidemiological Monitoring , England/epidemiology , RNA, Viral
10.
Transl Psychiatry ; 13(1): 131, 2023 04 21.
Article in English | MEDLINE | ID: mdl-37085531

ABSTRACT

Cannabidiol (CBD) has shown promise in treating psychiatric disorders, including cannabis use disorder - a major public health burden with no approved pharmacotherapies. However, the mechanisms through which CBD acts are poorly understood. One potential mechanism of CBD is increasing levels of anandamide, which has been implicated in psychiatric disorders including depression and cannabis use disorder. However, there is a lack of placebo-controlled human trials investigating this in psychiatric disorders. We therefore assessed whether CBD affects plasma anandamide levels compared to placebo, within a randomised clinical trial of CBD for the treatment of cannabis use disorder. Individuals meeting criteria for cannabis use disorder and attempting cannabis cessation were randomised to 28-day administration with placebo (n = 23), 400 mg CBD/day (n = 24) or 800 mg CBD/day (n = 23). We estimated the effects of each CBD dose compared to placebo on anandamide levels from baseline to day 28. Analyses were conducted both unadjusted and adjusted for cannabis use during the trial to account for effects of cannabis on the endocannabinoid system. We also investigated whether changes in plasma anandamide levels were associated with clinical outcomes relevant for cannabis use disorder (cannabis use, withdrawal, anxiety, depression). There was an effect of 800 mg CBD compared to placebo on anandamide levels from baseline to day 28 after adjusting for cannabis use. Pairwise comparisons indicated that anandamide levels unexpectedly reduced from baseline to day 28 in the placebo group (-0.048, 95% CI [-0.089, -0.007]), but did not change in the 800 mg CBD group (0.005, 95% CI [-0.036, 0.047]). There was no evidence for an effect of 400 mg CBD compared to placebo. Changes in anandamide levels were not associated with clinical outcomes. 
In conclusion, this study found preliminary evidence that 28-day treatment with CBD modulates anandamide levels in individuals with cannabis use disorder at doses of 800 mg/day but not 400 mg/day compared to placebo.


Subject(s)
Cannabidiol , Cannabis , Hallucinogens , Marijuana Abuse , Humans , Cannabidiol/therapeutic use , Cannabidiol/pharmacology , Endocannabinoids , Marijuana Abuse/drug therapy , Dronabinol/pharmacology , Double-Blind Method
11.
Psychopharmacology (Berl) ; 240(2): 337-346, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36598543

ABSTRACT

RATIONALE: Chronic cannabis use is associated with impaired cognitive function. Evidence indicates cannabidiol (CBD) might be beneficial for treating cannabis use disorder. CBD may also have pro-cognitive effects; however, its effect on cognition in people with cannabis use disorder is currently unclear. OBJECTIVES: We aimed to assess whether a 4-week CBD treatment impacted cognitive function. We hypothesised that CBD treatment would improve cognition from baseline to week 4, compared to placebo. METHODS: Cognition was assessed as a secondary outcome in a phase 2a randomised, double-blind, parallel-group, placebo-controlled clinical trial of 4-week daily 200 mg, 400 mg and 800 mg CBD for the treatment of cannabis use disorder. Participants had moderate or severe DSM-5 cannabis use disorder and intended to quit cannabis use. Our pre-registered primary cognitive outcome was delayed prose recall. Secondary cognitive outcomes were immediate prose recall, stop signal reaction time, trail-making task performance, verbal fluency and digit span. RESULTS: Seventy participants were randomly assigned to placebo (n = 23), 400 mg CBD (n = 24) and 800 mg CBD (n = 23). A 200 mg group (n = 12) was dropped at interim analysis because the dose was inefficacious, and it is not analysed here. For the primary cognitive outcome, there was no effect of CBD compared to placebo, evidenced by a lack of dose-by-time interaction at 400 mg (0.46, 95% CI: -1.41, 2.54) and 800 mg (0.89, 95% CI: -0.99, 2.81). There was no effect of CBD compared to placebo on secondary cognitive outcomes, except backwards digit span, which increased following 800 mg CBD (0.30, 95% CI: 0.02, 0.58). CONCLUSIONS: In this clinical trial for cannabis use disorder, CBD did not influence delayed verbal memory. CBD did not have broad cognitive effects, but 800 mg daily treatment may improve working memory manipulation.
CLINICAL TRIAL REGISTRATION: The trial was registered with ClinicalTrials.gov (NCT02044809) and the EU Clinical Trials Register (2013-000361-36).


Subject(s)
Cannabidiol , Cannabis , Hallucinogens , Marijuana Abuse , Substance-Related Disorders , Humans , Cannabidiol/pharmacology , Cannabidiol/therapeutic use , Marijuana Abuse/complications , Marijuana Abuse/drug therapy , Hallucinogens/pharmacology , Substance-Related Disorders/drug therapy , Cannabis/adverse effects , Cognition , Double-Blind Method
12.
ArXiv ; 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-35075432

ABSTRACT

COVID-19-related deaths underestimate the pandemic's burden on mortality because they suffer from completeness and accuracy issues. Excess mortality is a popular alternative, as it compares observed deaths with those expected under the assumption that the pandemic had not occurred. The expected number of deaths, had the pandemic not occurred, depends on population trends, temperature, and spatio-temporal patterns. In addition, high geographical resolution is required to examine within-country trends and the effectiveness of different public health policies. In this tutorial, we propose a framework using R to estimate and visualise excess mortality at high geographical resolution. We show a case study estimating excess deaths during 2020 in Italy. The proposed framework is fast to implement and allows different models to be combined and results presented at any desired age, sex, spatial, and temporal aggregation. This makes it particularly powerful and appealing for online monitoring of the pandemic's burden and for timely policy making.
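The core computation behind excess mortality is simple. The sketch below uses a naive week-specific historical average as the expected-deaths baseline on simulated counts; the tutorial's framework replaces this baseline with models accounting for population trends, temperature, and spatio-temporal structure.

```python
import numpy as np

rng = np.random.default_rng(3)
years, weeks = 5, 52
# Simulated seasonal weekly death counts for 2015-2019 (hypothetical).
seasonal = 100 + 20 * np.cos(2 * np.pi * np.arange(weeks) / 52)
history = rng.poisson(seasonal, size=(years, weeks))

# Simulated 2020 with a 60% mortality shock in weeks 10-19.
shock = np.r_[np.ones(9), 1.6 * np.ones(10), np.ones(33)]
observed_2020 = rng.poisson(seasonal * shock)

expected = history.mean(axis=0)        # week-specific expected deaths
excess = observed_2020 - expected      # weekly excess mortality
total_excess = excess.sum()
```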

13.
Stat Methods Med Res ; 32(1): 55-70, 2023 01.
Article in English | MEDLINE | ID: mdl-36366738

ABSTRACT

The regression discontinuity design is a quasi-experimental design that estimates the causal effect of a treatment when its assignment is defined by a threshold for a continuous variable. The regression discontinuity design assumes that subjects with measurements within a bandwidth around the threshold belong to a common population, so that the threshold can be seen as a randomising device assigning treatment to those falling just above the threshold and withholding it from those who fall below. Bandwidth selection is a critical decision in regression discontinuity design analysis, as results may be highly sensitive to the choice made. A few methods to select the optimal bandwidth, mainly from the econometric literature, have been proposed, but their use in practice is limited. We propose a methodology that, tackling the problem from an applied point of view, considers units' exchangeability, that is, their similarity with respect to measured covariates, as the main criterion for selecting subjects for the analysis, irrespective of their distance from the threshold. We cluster the sample using a Dirichlet process mixture model to identify balanced and homogeneous clusters. Our proposal exploits the posterior similarity matrix, which contains the pairwise probabilities that two observations are allocated to the same cluster in the Markov chain Monte Carlo sample. We thus include in the regression discontinuity design analysis only those clusters for which there is stronger evidence of exchangeability. We illustrate the validity of our methodology with both a simulated experiment and a motivating example on the effect of statins on cholesterol levels.
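The posterior similarity matrix is straightforward to compute from MCMC cluster allocations. The sketch below uses fabricated allocations for eight units (not a real Dirichlet process fit) purely to show the construction and a crude co-clustering selection rule.

```python
import numpy as np

rng = np.random.default_rng(5)
n_iter, n_obs = 200, 8
# Fabricated MCMC cluster allocations: two groups with 10% relabelling noise.
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1])
alloc = np.where(rng.random((n_iter, n_obs)) < 0.9, truth, 1 - truth)

# Posterior similarity matrix: fraction of MCMC draws in which
# observations i and j are allocated to the same cluster.
psm = (alloc[:, :, None] == alloc[:, None, :]).mean(axis=0)

# Units co-clustered with unit 0 in most draws (a crude selection rule).
similar_to_0 = np.where(psm[0] > 0.5)[0]
```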


Subject(s)
Hydroxymethylglutaryl-CoA Reductase Inhibitors , Humans , Regression Analysis , Causality , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Research Design , Markov Chains , Bayes Theorem
14.
Med Decis Making ; 43(3): 299-310, 2023 04.
Article in English | MEDLINE | ID: mdl-36314662

ABSTRACT

BACKGROUND: Survival extrapolation is essential in cost-effectiveness analysis to quantify the lifetime survival benefit associated with a new intervention, given the restricted duration of randomized controlled trials (RCTs). Current approaches to extrapolation often assume that the treatment effect observed in the trial continues indefinitely, which is unrealistic and may have a substantial impact on resource-allocation decisions. OBJECTIVE: We introduce a novel methodology as a possible solution to alleviate the problem of survival extrapolation with heavily censored data from clinical trials. METHOD: The main idea is to mix a flexible model (e.g., Cox semiparametric) fitted as well as possible to the observed data with a parametric model encoding assumptions on the expected behaviour of underlying long-term survival. The two are "blended" into a single survival curve that is identical to the Cox model over the range of observed times and gradually approaches the parametric model over the extrapolation period, based on a weight function. The weight function regulates the way the two survival curves are blended, determining how the internal and external sources contribute to the estimated survival over time. RESULTS: A 4-y follow-up RCT of rituximab in combination with fludarabine and cyclophosphamide versus fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia is used to illustrate the method. CONCLUSION: Long-term extrapolation from immature trial data may lead to significantly different estimates under various modelling assumptions. The blending approach provides sufficient flexibility, allowing a wide range of plausible scenarios to be considered as well as the inclusion of external information, based, for example, on hard data or expert opinion. Both internal and external validity can be carefully examined.
HIGHLIGHTS: Interim analyses of trials with limited follow-up are often subject to high degrees of administrative censoring, which may result in implausible long-term extrapolations using standard approaches. In this article, we present an innovative methodology based on "blending" survival curves to relax the traditional proportional hazard assumption and simultaneously incorporate external information to guide the extrapolation. The blended method provides a simple and powerful framework to allow careful consideration of a wide range of plausible scenarios, accounting for model fit to the short-term data as well as the plausibility of long-term extrapolations.
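One concrete way to realise such blending, sketched below, combines the two cumulative hazards with a smooth weight that moves from the internal model at the end of follow-up to the external parametric model thereafter. The hazard functions, the 4-to-10-year blending window, and the smoothstep weight are all illustrative assumptions; the paper's weight function and fitted models may differ.

```python
import numpy as np

t = np.linspace(0, 20, 401)        # years
H_internal = 0.08 * t              # cumulative hazard from the short-term fit
H_external = 0.05 * t**1.5         # external long-term parametric assumption

# Smooth weight rising from 0 at the end of follow-up (4 y) to 1 at 10 y.
a, b = 4.0, 10.0
u = np.clip((t - a) / (b - a), 0.0, 1.0)
w = 3 * u**2 - 2 * u**3            # smoothstep; stands in for the weight function

H_blend = (1 - w) * H_internal + w * H_external
S_blend = np.exp(-H_blend)         # blended survival curve
```

By construction the blended curve reproduces the internal fit over the observed period and matches the external model once the weight reaches one, with a smooth transition in between.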


Subject(s)
Leukemia, Lymphocytic, Chronic, B-Cell , Technology Assessment, Biomedical , Humans , Cyclophosphamide/therapeutic use , Leukemia, Lymphocytic, Chronic, B-Cell/drug therapy , Survival Analysis , Cost-Benefit Analysis
15.
Int J Chron Obstruct Pulmon Dis ; 17: 1633-1642, 2022.
Article in English | MEDLINE | ID: mdl-35915738

ABSTRACT

Objectives: In the IMPACT trial (NCT02164513), triple therapy with fluticasone furoate/umeclidinium/vilanterol (FF/UMEC/VI) showed clinical benefit compared with dual therapy with either FF/VI or UMEC/VI in the treatment of chronic obstructive pulmonary disease (COPD). We used data from IMPACT to determine whether this translated into differences in COPD-related healthcare resource utilization (HRU) costs in a United Kingdom (UK) setting. Methods: In a within-trial analysis, individual patient data from the IMPACT intention-to-treat (ITT) population were analyzed to estimate rates of COPD-related HRU with FF/UMEC/VI, FF/VI, or UMEC/VI. A Bayesian approach was applied to address issues typically encountered with this kind of data, namely data missing due to early study withdrawal, subjects with zero reported HRU, and skewness. Rates of HRU were estimated under alternate assumptions of data being missing at random (MAR) or missing not at random (MNAR). UK-specific unit costs were then applied to estimated HRU rates to calculate treatment-specific costs. Results: Under each MNAR scenario, per patient per year (PPPY) rates of COPD-related HRU were lowest amongst those patients who received treatment with FF/UMEC/VI compared with those receiving either FF/VI or UMEC/VI. Although absolute HRU rates and costs were typically higher for all treatment groups under MNAR scenarios versus MAR, final economic conclusions were robust to patient withdrawals. Conclusions: PPPY rates were typically lower with FF/UMEC/VI versus FF/VI or UMEC/VI.


Subject(s)
Pulmonary Disease, Chronic Obstructive , Administration, Inhalation , Androstadienes/adverse effects , Bayes Theorem , Benzyl Alcohols/adverse effects , Bronchodilator Agents/adverse effects , Chlorobenzenes/adverse effects , Delivery of Health Care , Double-Blind Method , Drug Combinations , Fluticasone/therapeutic use , Humans , Nebulizers and Vaporizers , Pulmonary Disease, Chronic Obstructive/chemically induced , Pulmonary Disease, Chronic Obstructive/diagnosis , Pulmonary Disease, Chronic Obstructive/drug therapy , Quinuclidines/adverse effects
16.
PLoS One ; 17(6): e0270168, 2022.
Article in English | MEDLINE | ID: mdl-35714109

ABSTRACT

Clinical testing of children in schools is challenging, with economic implications limiting its frequent use as a tool for monitoring the risks assumed by children and staff during the COVID-19 pandemic. Here, a wastewater-based epidemiology (WBE) approach has been used to monitor 16 schools (10 primary, 5 secondary and 1 post-16 and further education) in England. A total of 296 samples over 9 weeks have been analysed for N1 and E genes using qPCR methods. Of the samples returned, 47.3% were positive for one or both genes, with a detection frequency in line with that of the respective local community. WBE offers a low-cost, non-invasive approach for supplementing clinical testing and can provide longitudinal insights that are impractical with traditional clinical testing.


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , Child , Humans , Pandemics , SARS-CoV-2/genetics , Schools , Wastewater
17.
Res Synth Methods ; 13(6): 716-744, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35485582

ABSTRACT

Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap and cannot extrapolate beyond the observed covariate space. Current outcome regression-based alternatives can extrapolate but target a conditional treatment effect that is incompatible in the indirect comparison. When adjusting for covariates, one must integrate or average the conditional estimate over the relevant population to recover a compatible marginal treatment effect. We propose a marginalization method based on parametric G-computation that can be easily applied where the outcome regression is a generalized linear model or a Cox model. The approach views the covariate adjustment regression as a nuisance model and separates its estimation from the evaluation of the marginal treatment effect of interest. The method can accommodate a Bayesian statistical framework, which naturally integrates the analysis into a probabilistic framework. A simulation study provides proof-of-principle and benchmarks the method's performance against MAIC and the conventional outcome regression. Parametric G-computation achieves more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yields unbiased marginal treatment effect estimates under no failures of assumptions. Furthermore, the marginalized regression-adjusted estimates provide greater precision and accuracy than the conditional estimates produced by the conventional outcome regression, which are systematically biased because the measure of effect is non-collapsible.
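The marginalization step at the heart of parametric G-computation can be sketched as follows. This is a minimal illustration under simulated data with an ordinary (maximum-likelihood-style) logistic fit as the nuisance model, not the paper's Bayesian implementation or its simulation study; coefficients and sample size are invented:

```python
# Parametric G-computation: fit a covariate-adjusted outcome model,
# predict outcomes for the whole target population under each
# treatment, then average to recover a *marginal* treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)              # effect-modifying covariate
t = rng.integers(0, 2, size=n)      # randomized treatment indicator
logit = -0.5 + 0.8 * x - 1.0 * t + 0.4 * t * x
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Nuisance outcome regression (conditional model with interaction).
X = np.column_stack([t, x, t * x])
model = LogisticRegression(C=1e6).fit(X, y)  # near-unpenalized fit

# Predict for *everyone* under t=1 and under t=0, then average over the
# target population's covariate distribution (here, the trial's own).
p1 = model.predict_proba(np.column_stack([np.ones(n), x, x]))[:, 1]
p0 = model.predict_proba(np.column_stack([np.zeros(n), x, np.zeros(n)]))[:, 1]
marginal_risk_difference = p1.mean() - p0.mean()
```

Because the averaging step is separated from the nuisance model, the same recipe applies whatever the outcome regression is, and the resulting marginal estimate is the compatible quantity for the indirect comparison.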


Subject(s)
Bayes Theorem , Humans , Computer Simulation , Proportional Hazards Models , Propensity Score
18.
Annu Rev Stat Appl ; 9: 95-118, 2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35415193

ABSTRACT

Value of information (VoI) is a decision-theoretic approach to estimating the expected benefits from collecting further information of different kinds, in scientific problems based on combining one or more sources of data. VoI methods can assess the sensitivity of models to different sources of uncertainty and help to set priorities for further data collection. They have been widely applied in healthcare policy making, but the ideas are general to a range of evidence synthesis and decision problems. This article gives a broad overview of VoI methods, explaining the principles behind them, the range of problems that can be tackled with them, and how they can be implemented, and discusses the ongoing challenges in the area.
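The simplest VoI quantity, the expected value of perfect information (EVPI), can be computed by Monte Carlo as the gap between deciding after all uncertainty is resolved and deciding now on expected net benefit. The net-benefit model below is an invented two-option example, not drawn from any cited application:

```python
# Monte Carlo EVPI: E[max_t NB_t] - max_t E[NB_t], where NB is the
# monetary net benefit of each decision option across simulations.
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Uncertain incremental effect (QALYs) and cost of a new treatment
# relative to a comparator whose net benefit is normalized to zero.
effect = rng.normal(0.10, 0.05, n_sim)
cost = rng.normal(1500.0, 400.0, n_sim)
wtp = 20_000.0  # willingness-to-pay threshold per QALY (assumed)

nb = np.column_stack([np.zeros(n_sim), wtp * effect - cost])

evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
```

A positive EVPI bounds what any further research on these parameters could be worth per decision; partial-information quantities (EVPPI, EVSI) follow the same max-versus-expectation logic restricted to subsets of parameters or to specific study designs.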

20.
Value Health ; 25(9): 1654-1662, 2022 09.
Article in English | MEDLINE | ID: mdl-35341690

ABSTRACT

OBJECTIVES: Cost-effectiveness analysis (CEA) alongside randomized controlled trials often relies on self-reported multi-item questionnaires that are invariably prone to missing item-level data. The purpose of this study is to review how missing multi-item questionnaire data are handled in trial-based CEAs. METHODS: We searched the National Institute for Health Research journals to identify within-trial CEAs published between January 2016 and April 2021 using multi-item instruments to collect costs and quality of life (QOL) data. Information on missing data handling and methods, with a focus on the level and type of imputation, was extracted. RESULTS: A total of 87 trial-based CEAs were included in the review. Complete case analysis or available case analysis and multiple imputation (MI) were the most popular methods, selected by similar numbers of studies, to handle missing costs and QOL in base-case analysis. Nevertheless, complete case analysis or available case analysis dominated sensitivity analysis. Once imputation was chosen, missing costs were widely imputed at item-level via MI, whereas missing QOL was usually imputed at the more aggregated time point level during the follow-up via MI. CONCLUSIONS: Missing costs and QOL tend to be imputed at different levels of missingness in current CEAs alongside randomized controlled trials. Given the limited information provided by included studies, the impact of applying different imputation methods at different levels of aggregation on CEA decision making remains unclear.
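The item-level versus aggregate-level distinction above can be illustrated with a toy questionnaire. The data and scoring rule are invented, and a single mean imputation stands in for the multiple imputation a real trial-based CEA would use; the point is only that imputing at item level exploits the respondent's observed items rather than discarding the whole total:

```python
# Toy item-level imputation for a 3-item cost questionnaire. None
# marks a missing item (respondent C skipped item 2). Each missing
# item is filled with that item's mean across respondents, then totals
# are scored.
import statistics

responses = {
    "A": [10.0, 5.0, 2.0],
    "B": [8.0, 4.0, 6.0],
    "C": [12.0, None, 3.0],
}

def impute_item_level(data):
    """Fill each missing item with its column mean; return total scores."""
    filled = {k: list(v) for k, v in data.items()}
    n_items = len(next(iter(data.values())))
    for j in range(n_items):
        col = [v[j] for v in data.values() if v[j] is not None]
        mean_j = statistics.mean(col)
        for v in filled.values():
            if v[j] is None:
                v[j] = mean_j
    return {k: sum(v) for k, v in filled.items()}

totals = impute_item_level(responses)
```

Aggregate-level handling would instead treat C's entire total as missing, throwing away the two items C did answer, which is why the level of imputation can matter for trial-based CEA results.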


Subject(s)
Carcinoembryonic Antigen , Quality of Life , Cost-Benefit Analysis , Data Interpretation, Statistical , Humans , Surveys and Questionnaires