1.
Cochrane Database Syst Rev ; 4: CD015636, 2024 04 10.
Article in English | MEDLINE | ID: mdl-38597256

ABSTRACT

BACKGROUND: Dengue is a global health problem of high significance, with 3.9 billion people at risk of infection. The geographic expansion of dengue virus (DENV) infection has resulted in increased frequency and severity of the disease, and the number of deaths has increased in recent years. Wolbachia, an intracellular bacterial endosymbiont, has been under investigation for several years as a novel dengue-control strategy. Some dengue vectors (Aedes mosquitoes) can be transinfected with specific strains of Wolbachia, which decreases their fitness (ability to survive and mate) and their ability to reproduce, inhibiting the replication of dengue virus. Both laboratory and field studies have demonstrated the potential effect of Wolbachia deployments on reducing dengue transmission, and modelling studies have suggested that this may be a self-sustaining strategy for dengue prevention, although long-term effects are yet to be elucidated. OBJECTIVES: To assess the efficacy of Wolbachia-carrying Aedes species deployments (specifically the wMel, wMelPop, and wAlbB strains of Wolbachia) for preventing dengue virus infection. SEARCH METHODS: We searched CENTRAL, MEDLINE, Embase, four other databases, and two trial registries up to 24 January 2024. SELECTION CRITERIA: Randomized controlled trials (RCTs), including cluster-randomized controlled trials (cRCTs), conducted in dengue-endemic or epidemic-prone settings were eligible. We sought studies that investigated the impact of Wolbachia-carrying Aedes deployments on epidemiological or entomological dengue-related outcomes, utilizing either the population replacement or population suppression strategy. DATA COLLECTION AND ANALYSIS: Two review authors independently selected eligible studies, extracted data, and assessed the risk of bias using the Cochrane RoB 2 tool. We used odds ratios (OR) with the corresponding 95% confidence intervals (CI) as the effect measure for dichotomous outcomes.
For count/rate outcomes, we planned to use the rate ratio with 95% CI as the effect measure. We used adjusted measures of effect for cRCTs. We assessed the certainty of evidence using GRADE. MAIN RESULTS: One completed cRCT met our inclusion criteria, and we identified two further ongoing cRCTs. The included trial was conducted in an urban setting in Yogyakarta, Indonesia. It utilized a nested test-negative study design, whereby all participants aged three to 45 years who presented at healthcare centres with a fever were enrolled in the study provided they had resided in the study area for the previous 10 nights. The trial showed that wMel-Wolbachia-infected Ae aegypti deployments probably reduce the odds of contracting virologically confirmed dengue by 77% (OR 0.23, 95% CI 0.15 to 0.35; 1 trial, 6306 participants; moderate-certainty evidence). The cluster-level prevalence of wMel-Wolbachia-carrying mosquitoes remained high over two years in the intervention arm of the trial, reported as 95.8% (interquartile range 91.5 to 97.8) across 27 months in clusters receiving wMel-Wolbachia Ae aegypti deployments, but there were no reliable comparative data for this outcome. Other primary outcomes were the incidence of virologically confirmed dengue, the prevalence of dengue ribonucleic acid in the mosquito population, and mosquito density, but there were no data for these outcomes. Additionally, there were no data on adverse events. AUTHORS' CONCLUSIONS: The included trial demonstrates the potentially significant impact of wMel-Wolbachia-carrying Ae aegypti mosquitoes on preventing dengue infection in an endemic setting, and supports evidence reported in non-randomized and uncontrolled studies. Further trials across a greater diversity of settings are required to confirm whether these findings apply to other locations and country settings, and greater reporting of acceptability and cost is important.
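A minimal sketch of how a test-negative odds ratio maps onto the reported "reduce the odds ... by 77%" statement. All 2 × 2 counts below are invented for illustration; only the OR of 0.23 comes from the abstract.

```python
# Hedged sketch: odds ratio (OR) from a test-negative design and the
# percentage reduction in odds it implies. Counts are hypothetical.

def odds_ratio(cases_int, controls_int, cases_ctrl, controls_ctrl):
    """OR of virologically confirmed dengue (cases vs test-negative controls)
    in intervention clusters relative to control clusters."""
    return (cases_int / controls_int) / (cases_ctrl / controls_ctrl)

def odds_reduction_pct(or_value):
    """Percentage reduction in odds implied by an OR below 1: (1 - OR) * 100."""
    return (1 - or_value) * 100

# Hypothetical counts chosen to yield the reported OR of 0.23:
or_hat = odds_ratio(23, 100, 100, 100)
print(round(odds_reduction_pct(or_hat)))  # 77
```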


Subject(s)
Aedes , Dengue Virus , Dengue , Wolbachia , Animals , Humans , Aedes/microbiology , Mosquito Vectors/microbiology , Dengue/prevention & control
3.
Health Technol Assess ; 27(10): 1-115, 2023 07.
Article in English | MEDLINE | ID: mdl-37839810

ABSTRACT

Background: Magnetic resonance imaging-based technologies are non-invasive diagnostic tests that can be used to assess non-alcoholic fatty liver disease. Objectives: The study objectives were to assess the diagnostic test accuracy, clinical impact and cost-effectiveness of two magnetic resonance imaging-based technologies (LiverMultiScan and magnetic resonance elastography) for patients with non-alcoholic fatty liver disease for whom advanced fibrosis or cirrhosis had not been diagnosed and who had indeterminate results from fibrosis testing, or for whom transient elastography or acoustic radiation force impulse was unsuitable, or who had discordant results from fibrosis testing. Data sources: The data sources searched were MEDLINE, MEDLINE Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Embase, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects and the Health Technology Assessment database. Methods: A systematic review was conducted using established methods. Diagnostic test accuracy estimates were calculated using bivariate models and a summary receiver operating characteristic curve was calculated using a hierarchical model. A simple decision-tree model was developed to generate cost-effectiveness results. Results: The diagnostic test accuracy review (13 studies) and the clinical impact review (11 studies) only included one study that provided evidence for patients who had indeterminate or discordant results from fibrosis testing. No studies of patients for whom transient elastography or acoustic radiation force impulse was unsuitable were identified. Depending on fibrosis level, relevant published LiverMultiScan diagnostic test accuracy results ranged from 50% to 88% (sensitivity) and from 42% to 75% (specificity). No magnetic resonance elastography diagnostic test accuracy data were available for the specific population of interest.
Results from the clinical impact review suggested that acceptability of LiverMultiScan was generally positive. To explore how the decision to proceed to biopsy is influenced by magnetic resonance imaging-based technologies, the External Assessment Group presented cost-effectiveness analyses for LiverMultiScan plus biopsy versus biopsy only. Base-case incremental cost-effectiveness ratio per quality-adjusted life year gained results for seven of the eight diagnostic test strategies considered showed that LiverMultiScan plus biopsy was dominated by biopsy only; for the remaining strategy (Brunt grade ≥2), the incremental cost-effectiveness ratio per quality-adjusted life year gained was £1,266,511. Results from threshold and scenario analyses demonstrated that External Assessment Group base-case results were robust to plausible variations in the magnitude of key parameters. Limitations: Diagnostic test accuracy, clinical impact and cost-effectiveness data for magnetic resonance imaging-based technologies for the population that is the focus of this assessment were limited. Conclusions: Magnetic resonance imaging-based technologies may be useful to identify patients who may benefit from additional testing in the form of liver biopsy and those for whom this additional testing may not be necessary. However, there is a paucity of diagnostic test accuracy and clinical impact data for patients who have indeterminate results from fibrosis testing, for whom transient elastography or acoustic radiation force impulse are unsuitable or who had discordant results from fibrosis testing. Given the External Assessment Group cost-effectiveness analyses assumptions, the use of LiverMultiScan and magnetic resonance elastography for assessing non-alcoholic fatty liver disease for patients with inconclusive results from previous fibrosis testing is unlikely to be a cost-effective use of National Health Service resources compared with liver biopsy only. 
Study registration: This study is registered as PROSPERO CRD42021286891. Funding: Funding for this study was provided by the Evidence Synthesis Programme of the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 27, No. 10. See the NIHR Journals Library website for further project information.


Non-alcoholic fatty liver disease includes a range of conditions that are caused by a build-up of fat in the liver, and not by alcohol consumption. This build-up of fat can cause inflammation. Persistent inflammation can cause scar tissue (fibrosis) to develop. It is important to identify patients with fibrosis because severe fibrosis can cause permanent liver damage (cirrhosis), which can lead to liver failure and liver cancer. In the National Health Service, patients with non-alcoholic fatty liver disease undergo tests to determine whether they have fibrosis. The test results are not always accurate and multiple tests can give conflicting results. Some of the tests may not be suitable for patients who have a very high body mass index. In the National Health Service, a liver biopsy may be offered to patients with inconclusive or conflicting test results or to those patients for whom other tests are unsuitable. However, liver biopsy is expensive, and is associated with side-effects such as pain and bleeding. Magnetic resonance imaging-based testing could be used as an extra test to help clinicians assess non-alcoholic fatty liver disease and identify patients who may need a liver biopsy. We assessed two magnetic resonance imaging-based diagnostic tests, LiverMultiScan and magnetic resonance elastography. LiverMultiScan is imaging software that is used alongside magnetic resonance imaging to measure markers of liver disease. Magnetic resonance elastography is used in some National Health Service centres to assess liver fibrosis; however, magnetic resonance elastography requires more equipment than just a magnetic resonance imaging scanner. We reviewed all studies examining how well LiverMultiScan and magnetic resonance elastography assess patients with non-alcoholic fatty liver disease. We also built an economic model to estimate the costs and benefits of using LiverMultiScan to identify patients who should be sent for a biopsy.
Results from the model showed that LiverMultiScan may not provide good value for money to the National Health Service.
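The "simple decision-tree model" described above can be illustrated with a toy expected-value calculation. Everything below (branch probabilities, costs, QALY values, strategy names) is a hypothetical sketch, not the External Assessment Group's actual model.

```python
# Toy decision-tree sketch: comparing two testing strategies by expected
# cost and expected QALYs. All numbers are invented for illustration.

def expected_values(branches):
    """branches: iterable of (probability, cost, qalys); probabilities sum to 1."""
    cost = sum(p * c for p, c, q in branches)
    qalys = sum(p * q for p, c, q in branches)
    return cost, qalys

# Hypothetical "biopsy only" strategy: everyone proceeds to biopsy.
biopsy_only = [(1.0, 1500.0, 10.0)]

# Hypothetical "scan plus biopsy" strategy: scan first (extra cost), and a
# fraction still proceeds to biopsy; equal health outcomes assumed here.
scan_plus_biopsy = [(0.75, 700.0 + 1500.0, 10.0),  # scan, then biopsy anyway
                    (0.25, 700.0, 10.0)]           # scan rules biopsy out

c1, q1 = expected_values(biopsy_only)        # (1500.0, 10.0)
c2, q2 = expected_values(scan_plus_biopsy)   # (1825.0, 10.0)
# Equal QALYs at higher expected cost: under these invented numbers the
# scan-first strategy is dominated by biopsy only.
```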


Subject(s)
Non-alcoholic Fatty Liver Disease , Humans , Cost-Benefit Analysis , Liver Cirrhosis/diagnosis , Liver Cirrhosis/pathology , Magnetic Resonance Imaging , Non-alcoholic Fatty Liver Disease/diagnostic imaging , State Medicine
4.
Pharmacoecon Open ; 7(6): 863-875, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37731145

ABSTRACT

As part of the National Institute for Health and Care Excellence (NICE) highly specialised technology (HST) evaluation programme, Novartis submitted evidence to support the use of onasemnogene abeparvovec as a treatment option for patients with pre-symptomatic 5q spinal muscular atrophy (SMA) with a bi-allelic mutation in the survival of motor neuron (SMN) 1 gene and up to three copies of the SMN2 gene. The Liverpool Reviews and Implementation Group at the University of Liverpool was commissioned to act as the External Assessment Group (EAG). This article summarises the EAG's review of the evidence submitted by the company and provides an overview of the NICE Evaluation Committee's final decision, published in April 2023. The primary source of evidence for this evaluation was the SPR1NT trial, a single-arm trial including 29 babies. The EAG and committee considered that the SPR1NT trial results suggested that onasemnogene abeparvovec is effective in treating pre-symptomatic SMA; however, long-term efficacy data were unavailable and efficacy in babies aged over 6 weeks remained uncertain. Cost-effectiveness analyses conducted by the company and the EAG (using a discounted price for onasemnogene abeparvovec) explored various assumptions; all analyses generated incremental cost-effectiveness ratios (ICERs) that were less than £100,000 per quality-adjusted life-year (QALY) gained. The committee recommended onasemnogene abeparvovec as an option for treating pre-symptomatic 5q SMA with a bi-allelic mutation in the SMN1 gene and up to three copies of the SMN2 gene in babies aged ≤ 12 months only if the company provides it according to the commercial arrangement (i.e. simple discount patient access scheme).

5.
Pharmacoecon Open ; 7(4): 525-536, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37195551

ABSTRACT

As part of the Single Technology Appraisal (STA) process, the UK National Institute for Health and Care Excellence (NICE) invited Apellis Pharmaceuticals/Sobi to submit evidence for the clinical and cost effectiveness of pegcetacoplan versus eculizumab and pegcetacoplan versus ravulizumab for treating paroxysmal nocturnal haemoglobinuria (PNH) in adults whose anaemia is uncontrolled after treatment with a C5 inhibitor. The Liverpool Reviews and Implementation Group at the University of Liverpool was commissioned as the Evidence Review Group (ERG). The company pursued a low incremental cost-effectiveness ratio (ICER) Fast Track Appraisal (FTA). This was a form of STA processed in a shorter time frame and designed for technologies with company base-case ICER < £10,000 per quality-adjusted life-year (QALY) gained and most plausible ICER < £20,000 per QALY gained. This article summarises the ERG's review of the company's evidence submission, and the NICE Appraisal Committee's (AC's) final decision. The company presented clinical evidence from the PEGASUS trial that assessed the efficacy of pegcetacoplan versus eculizumab. At Week 16, patients in the pegcetacoplan arm had statistically significantly greater change from baseline in haemoglobin levels and a higher rate of transfusion avoidance than patients in the eculizumab arm. Using the PEGASUS trial and Study 302 data (a non-inferiority trial that assessed ravulizumab versus eculizumab), the company conducted an anchored matching-adjusted indirect comparison (MAIC) to indirectly estimate the efficacy of pegcetacoplan versus ravulizumab. The company identified key differences between trial designs and populations that could not be adjusted for using anchored MAIC methods. The company and ERG agreed that the anchored MAIC results were not robust and should not inform decision making. 
In the absence of robust indirect estimates, the company assumed that ravulizumab had equivalent efficacy to eculizumab in the PEGASUS trial population. Results from the company base-case cost-effectiveness analysis showed that treatment with pegcetacoplan dominated eculizumab and ravulizumab. The ERG considered that the long-term effectiveness of pegcetacoplan was uncertain and ran a scenario assuming that after 1 year the efficacy of pegcetacoplan would be the same as eculizumab; treatment with pegcetacoplan continued to dominate eculizumab and ravulizumab. The AC noted that treatment with pegcetacoplan had lower total costs than treatment with eculizumab or ravulizumab because it is self-administered and reduces the need for blood transfusions. If the assumption that ravulizumab has equivalent efficacy to eculizumab does not hold, then this will affect the estimate of the cost effectiveness of pegcetacoplan versus ravulizumab; however, the AC was satisfied that the assumption was reasonable. The AC recommended pegcetacoplan as an option for the treatment of PNH in adults who have uncontrolled anaemia despite treatment with a stable dose of a C5 inhibitor for ≥ 3 months. Pegcetacoplan was the first technology recommended by NICE via the low ICER FTA process.

6.
Pharmacoecon Open ; 7(3): 345-358, 2023 May.
Article in English | MEDLINE | ID: mdl-37084172

ABSTRACT

The National Institute for Health and Care Excellence (NICE) provides guidance to improve health and social care in England and Wales. NICE invited Daiichi Sankyo to submit evidence for the use of trastuzumab deruxtecan (T-DXd) for treating human epidermal growth factor receptor 2 (HER2)-positive unresectable or metastatic breast cancer (UBC/MBC) after two or more anti-HER2 therapies, in accordance with NICE's Single Technology Appraisal process. The Liverpool Reviews and Implementation Group, part of the University of Liverpool, was commissioned to act as the Evidence Review Group (ERG). This article summarises the ERG's review of the evidence submitted by the company and provides an overview of the NICE Appraisal Committee's (AC's) final decision made in May 2021. Results from the company's base-case fully incremental analysis showed that, compared with T-DXd, eribulin and vinorelbine were dominated and the incremental cost-effectiveness ratio (ICER) per quality-adjusted life year (QALY) gained versus capecitabine was £47,230. The ERG scenario analyses generated a range of ICERs, with the highest being a scenario relating to a comparison of T-DXd versus capecitabine (£78,142 per QALY gained). The ERG considered that due to a lack of appropriate clinical effectiveness evidence, the relative effectiveness of T-DXd versus any comparator treatment could not be determined with any degree of certainty. The NICE AC agreed that the modelling of overall survival was highly uncertain and concluded that treatment with T-DXd could not be recommended for routine use within the National Health Service (NHS). T-DXd was, however, recommended for use within the Cancer Drugs Fund, provided Managed Access Agreement conditions were followed.
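The appraisal vocabulary used above and in the surrounding summaries ("dominated", "ICER per QALY gained") follows a simple rule. A hedged sketch; all costs and QALY values below are invented, not figures from any appraisal.

```python
# Hedged sketch of incremental cost-effectiveness ratio (ICER) logic.
# All inputs are hypothetical.

def icer(cost_new, qalys_new, cost_cmp, qalys_cmp):
    """Incremental cost per QALY gained for a new treatment vs a comparator.
    Returns "dominant" if the new treatment costs less and yields at least as
    many QALYs, and "dominated" in the mirror case (costlier, no more QALYs)."""
    d_cost = cost_new - cost_cmp
    d_qalys = qalys_new - qalys_cmp
    if d_cost < 0 and d_qalys >= 0:
        return "dominant"
    if d_cost > 0 and d_qalys <= 0:
        return "dominated"
    return d_cost / d_qalys

# Costs £50,000 more and adds 1.25 QALYs -> ICER of £40,000 per QALY gained.
print(icer(150_000, 3.25, 100_000, 2.0))  # 40000.0

# More costly and less effective -> dominated by the comparator.
print(icer(150_000, 1.5, 100_000, 2.0))   # dominated
```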

7.
Cochrane Database Syst Rev ; 10: CD013398, 2022 10 06.
Article in English | MEDLINE | ID: mdl-36200610

ABSTRACT

BACKGROUND: Malaria remains an important public health problem. Research in 1900 suggested house modifications may reduce malaria transmission. A previous version of this review concluded that house screening may be effective in reducing malaria. This update includes data from five new studies. OBJECTIVES: To assess the effects of house modifications that aim to reduce exposure to mosquitoes on malaria disease and transmission. SEARCH METHODS: We searched the Cochrane Infectious Diseases Group Specialized Register; Central Register of Controlled Trials (CENTRAL), published in the Cochrane Library; MEDLINE (PubMed); Embase (OVID); Centre for Agriculture and Bioscience International (CAB) Abstracts (Web of Science); and the Latin American and Caribbean Health Science Information database (LILACS) up to 25 May 2022. We also searched the World Health Organization International Clinical Trials Registry Platform, ClinicalTrials.gov, and the ISRCTN registry to identify ongoing trials up to 25 May 2022. SELECTION CRITERIA: Randomized controlled trials, including cluster-randomized controlled trials (cRCTs), cross-over studies, and stepped-wedge designs were eligible, as were quasi-experimental trials, including controlled before-and-after studies, controlled interrupted time series, and non-randomized cross-over studies. We sought studies investigating primary construction and house modifications to existing homes reporting epidemiological outcomes (malaria case incidence, malaria infection incidence or parasite prevalence). We extracted any entomological outcomes that were also reported in these studies. DATA COLLECTION AND ANALYSIS: Two review authors independently selected eligible studies, extracted data, and assessed the risk of bias. We used risk ratios (RR) to compare the effect of the intervention with the control for dichotomous data. For continuous data, we presented the mean difference; and for count and rate data, we used rate ratios. 
We presented all results with 95% confidence intervals (CIs). We assessed the certainty of evidence using the GRADE approach. MAIN RESULTS: One RCT and six cRCTs met our inclusion criteria, with an additional six ongoing RCTs. We did not identify any eligible non-randomized studies. All included trials were conducted in sub-Saharan Africa since 2009; two randomized by household and four at the block or village level. All trials assessed screening of windows, doors, eaves, ceilings, or any combination of these; this was either alone, or in combination with roof modification or eave tube installation (an insecticidal "lure and kill" device that reduces mosquito entry whilst maintaining some airflow). In one trial, the screening material was treated with 2% permethrin insecticide. In five trials, the researchers implemented the interventions. A community-based approach was adopted in the other trial. Overall, the implementation of house modifications probably reduced malaria parasite prevalence (RR 0.68, 95% CI 0.57 to 0.82; 5 trials, 5183 participants; moderate-certainty evidence), although an inconsistent effect was observed in a subpopulation of children in one study. House modifications reduced moderate to severe anaemia prevalence (RR 0.70, 95% CI 0.55 to 0.89; 3 trials, 3643 participants; high-certainty evidence). There was no consistent effect on clinical malaria incidence, with rate ratios ranging from 0.38 to 1.62 (3 trials, 3365 participants, 4126.6 person-years). House modifications may reduce indoor mosquito density (rate ratio 0.63, 95% CI 0.30 to 1.30; 4 trials, 9894 household-nights; low-certainty evidence), although two studies showed little effect on this parameter. AUTHORS' CONCLUSIONS: House modifications - largely screening, sometimes combined with insecticide and lure and kill devices - were associated with a reduction in malaria parasite prevalence and a reduction in people with anaemia. Findings on malaria incidence were mixed. 
Modifications were also associated with lower indoor adult mosquito density, but this effect was not present in some studies.
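The effect measure reported above (a risk ratio with a 95% CI) can be computed from raw 2 × 2 counts using the standard log-transform approximation. A minimal, unadjusted sketch with hypothetical counts; the review itself used cluster-adjusted estimates.

```python
import math

# Unadjusted risk ratio with a Wald 95% CI on the log scale.
# Hypothetical counts only; cluster-randomized trials need adjusted measures.

def risk_ratio_ci(events_int, n_int, events_ctrl, n_ctrl):
    """RR of an outcome (e.g. malaria parasite prevalence), intervention vs
    control, with an approximate 95% confidence interval."""
    rr = (events_int / n_int) / (events_ctrl / n_ctrl)
    se_log = math.sqrt(1 / events_int - 1 / n_int + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(30, 100, 60, 100)
# RR = 0.5; an RR (and CI) below 1 favours the intervention.
```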


Subject(s)
Anemia , Culicidae , Insecticides , Malaria , Adult , Anemia/epidemiology , Animals , Child , Humans , Malaria/epidemiology , Malaria/prevention & control , Permethrin
9.
Cochrane Database Syst Rev ; 7: CD013080, 2022 07 25.
Article in English | MEDLINE | ID: mdl-35871531

ABSTRACT

BACKGROUND: Good patient adherence to antiretroviral therapy (ART) determines effective HIV viral suppression, and thus reduces the risk of progression and transmission of HIV. With accurate methods to monitor treatment adherence, we could use simple triage to target adherence support interventions that could help in the community or at health centres in resource-limited settings. OBJECTIVES: To determine the accuracy of simple measures of ART adherence (including patient self-report, tablet counts, pharmacy records, electronic monitoring, or composite methods) for detecting non-suppressed viral load in people living with HIV and receiving ART. SEARCH METHODS: The Cochrane Infectious Diseases Group Information Specialists searched CENTRAL, MEDLINE, Embase, LILACS, CINAHL, African-Wide Information, and Web of Science up to 22 April 2021. They also searched the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov for ongoing studies. No restrictions were placed on the language or date of publication when searching the electronic databases. SELECTION CRITERIA: We included studies of all designs that evaluated a simple measure of adherence (index test), such as self-report, tablet counts, pharmacy records or secondary database analysis (or both), electronic monitoring, or composite measures of any of those tests, in people living with HIV and receiving ART. We used a viral load assay with a limit of detection ranging from 10 copies/mL to 400 copies/mL as the reference standard. We created 2 × 2 tables to calculate sensitivity and specificity. DATA COLLECTION AND ANALYSIS: We screened studies, extracted data, and assessed risk of bias using QUADAS-2 independently and in duplicate. We assessed the certainty of evidence using the GRADE method. The results of estimated sensitivity and specificity were presented using paired forest plots and tabulated summaries.
We encountered a high level of variation among studies which precluded a meaningful meta-analysis or comparison of adherence measures. We explored heterogeneity using pre-defined subgroup analysis. MAIN RESULTS: We included 51 studies involving children and adults with HIV, mostly living in low- and middle-income settings, conducted between 2003 and 2021. Several studies assessed more than one index test, and the most common measure of adherence to ART was self-report.
- Self-report questionnaires (25 studies, 9211 participants; very low-certainty): sensitivity ranged from 10% to 85% and specificity ranged from 10% to 99%.
- Self-report using a visual analogue scale (VAS) (11 studies, 4235 participants; very low-certainty): sensitivity ranged from 0% to 58% and specificity ranged from 55% to 100%.
- Tablet counts (12 studies, 3466 participants; very low-certainty): sensitivity ranged from 0% to 100% and specificity ranged from 5% to 99%.
- Electronic monitoring devices (3 studies, 186 participants; very low-certainty): sensitivity ranged from 60% to 88% and specificity ranged from 27% to 67%.
- Pharmacy records or secondary databases (6 studies, 2254 participants; very low-certainty): sensitivity ranged from 17% to 88% and specificity ranged from 9% to 95%.
- Composite measures (9 studies, 1513 participants; very low-certainty): sensitivity ranged from 10% to 100% and specificity ranged from 49% to 100%.
Across all included studies, the ability of adherence measures to detect viral non-suppression showed a large variation in both sensitivity and specificity that could not be explained by subgroup analysis. We assessed the overall certainty of the evidence as very low due to risk of bias, indirectness, inconsistency, and imprecision. The risk of bias and the applicability concerns for patient selection, index test, and reference standard domains were generally low or unclear due to unclear reporting.
The main methodological issues identified were related to flow and timing due to high numbers of missing data. For all index tests, we assessed the certainty of the evidence as very low due to limitations in the design and conduct of the studies, applicability concerns and inconsistency of results. AUTHORS' CONCLUSIONS: We encountered high variability for all index tests, and the overall certainty of evidence in all areas was very low. No measure consistently offered either a sufficiently high sensitivity or specificity to detect viral non-suppression. These concerns limit their value in triaging patients for viral load monitoring or enhanced adherence support interventions.
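The 2 × 2 tables described in the methods above reduce to two ratios. A minimal sketch with hypothetical counts, where "positive" means the index test flags non-adherence and the reference standard is a non-suppressed viral load.

```python
# Hedged sketch of the 2x2-table accuracy measures used in this review.
# Counts are hypothetical. Rows: index test (adherence measure) result;
# columns: reference standard (viral load non-suppressed vs suppressed).

def sensitivity(tp, fn):
    """Fraction of people with non-suppressed viral load whom the
    adherence measure correctly flags as non-adherent."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of virally suppressed people whom the adherence measure
    correctly classes as adherent."""
    return tn / (tn + fp)

tp, fn, fp, tn = 40, 10, 30, 120  # hypothetical 2x2 counts
print(sensitivity(tp, fn))  # 0.8
print(specificity(tn, fp))  # 0.8
```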


Subject(s)
Anti-Retroviral Agents , HIV Infections , Adult , Anti-Retroviral Agents/therapeutic use , Child , HIV Infections/complications , HIV Infections/drug therapy , Humans , Reference Standards , Sensitivity and Specificity , Viral Load
11.
Cochrane Database Syst Rev ; 5: CD014841, 2022 05 18.
Article in English | MEDLINE | ID: mdl-35583175

ABSTRACT

BACKGROUND: The World Health Organization (WHO) End TB Strategy stresses universal access to drug susceptibility testing (DST). DST determines whether Mycobacterium tuberculosis bacteria are susceptible or resistant to drugs. Xpert MTB/XDR is a rapid nucleic acid amplification test that detects tuberculosis and drug resistance in a single test, suitable for use in peripheral and intermediate-level laboratories. In specimens where it detects tuberculosis, Xpert MTB/XDR can also detect resistance to isoniazid, fluoroquinolones, ethionamide, and amikacin. OBJECTIVES: To assess the diagnostic accuracy of Xpert MTB/XDR for pulmonary tuberculosis in people with presumptive pulmonary tuberculosis (having signs and symptoms suggestive of tuberculosis, including cough, fever, weight loss, night sweats). To assess the diagnostic accuracy of Xpert MTB/XDR for resistance to isoniazid, fluoroquinolones, ethionamide, and amikacin in people with tuberculosis detected by Xpert MTB/XDR, irrespective of rifampicin resistance (whether or not rifampicin resistance status was known) and with known rifampicin resistance. SEARCH METHODS: We searched multiple databases to 23 September 2021. We limited searches to 2015 onwards as Xpert MTB/XDR was launched in 2020. SELECTION CRITERIA: Diagnostic accuracy studies using sputum in adults with presumptive or confirmed pulmonary tuberculosis. Reference standards were culture (pulmonary tuberculosis detection); phenotypic DST (pDST), genotypic DST (gDST), and composite (pDST and gDST) (drug resistance detection). DATA COLLECTION AND ANALYSIS: Two review authors independently reviewed reports for eligibility and extracted data using a standardized form. For multicentre studies, we anticipated variability in the type and frequency of mutations associated with resistance to a given drug at the different centres and considered each centre as an independent study cohort for quality assessment and analysis.
We assessed methodological quality with QUADAS-2, judging risk of bias separately for each target condition and reference standard. For pulmonary tuberculosis detection, owing to heterogeneity in participant characteristics and observed specificity estimates, we reported a range of sensitivity and specificity estimates and did not perform a meta-analysis. For drug resistance detection, we performed meta-analyses by reference standard using bivariate random-effects models. Using GRADE, we assessed certainty of evidence of Xpert MTB/XDR accuracy for detection of resistance to isoniazid and fluoroquinolones in people irrespective of rifampicin resistance and to ethionamide and amikacin in people with known rifampicin resistance, reflecting real-world situations. We used pDST, except for ethionamide resistance where we considered gDST a better reference standard. MAIN RESULTS: We included two multicentre studies from high multidrug-resistant/rifampicin-resistant tuberculosis burden countries, reporting on six independent study cohorts, involving 1228 participants for pulmonary tuberculosis detection and 1141 participants for drug resistance detection. The proportion of participants with rifampicin resistance in the two studies was 47.9% and 80.9%. For tuberculosis detection, we judged high risk of bias for patient selection owing to selective recruitment. For ethionamide resistance detection, we judged high risk of bias for the reference standard, both pDST and gDST, though we considered gDST a better reference standard. Pulmonary tuberculosis detection - Xpert MTB/XDR sensitivity range, 98.3% (96.1 to 99.5) to 98.9% (96.2 to 99.9) and specificity range, 22.5% (14.3 to 32.6) to 100.0% (86.3 to 100.0); median prevalence of pulmonary tuberculosis 91.3%, (interquartile range, 89.3% to 91.8%), (2 studies; 1 study reported on 2 cohorts, 1228 participants; very low-certainty evidence, sensitivity and specificity). 
Drug resistance detection People irrespective of rifampicin resistance - Isoniazid resistance: Xpert MTB/XDR summary sensitivity and specificity (95% confidence interval (CI)) were 94.2% (87.5 to 97.4) and 98.5% (92.6 to 99.7) against pDST, (6 cohorts, 1083 participants, moderate-certainty evidence, sensitivity and specificity). - Fluoroquinolone resistance: Xpert MTB/XDR summary sensitivity and specificity were 93.2% (88.1 to 96.2) and 98.0% (90.8 to 99.6) against pDST, (6 cohorts, 1021 participants; high-certainty evidence, sensitivity; moderate-certainty evidence, specificity). People with known rifampicin resistance - Ethionamide resistance: Xpert MTB/XDR summary sensitivity and specificity were 98.0% (74.2 to 99.9) and 99.7% (83.5 to 100.0) against gDST, (4 cohorts, 434 participants; very low-certainty evidence, sensitivity and specificity). - Amikacin resistance: Xpert MTB/XDR summary sensitivity and specificity were 86.1% (75.0 to 92.7) and 98.9% (93.0 to 99.8) against pDST, (4 cohorts, 490 participants; low-certainty evidence, sensitivity; high-certainty evidence, specificity). Of 1000 people with pulmonary tuberculosis, detected as tuberculosis by Xpert MTB/XDR: - where 50 have isoniazid resistance, 61 would have an Xpert MTB/XDR result indicating isoniazid resistance: of these, 14/61 (23%) would not have isoniazid resistance (FP); 939 (of 1000 people) would have a result indicating the absence of isoniazid resistance: of these, 3/939 (0%) would have isoniazid resistance (FN). - where 50 have fluoroquinolone resistance, 66 would have an Xpert MTB/XDR result indicating fluoroquinolone resistance: of these, 19/66 (29%) would not have fluoroquinolone resistance (FP); 934 would have a result indicating the absence of fluoroquinolone resistance: of these, 3/934 (0%) would have fluoroquinolone resistance (FN). 
- where 300 have ethionamide resistance, 296 would have an Xpert MTB/XDR result indicating ethionamide resistance: of these, 2/296 (1%) would not have ethionamide resistance (FP); 704 would have a result indicating the absence of ethionamide resistance: of these, 6/704 (1%) would have ethionamide resistance (FN). - where 135 have amikacin resistance, 126 would have an Xpert MTB/XDR result indicating amikacin resistance: of these, 10/126 (8%) would not have amikacin resistance (FP); 874 would have a result indicating the absence of amikacin resistance: of these, 19/874 (2%) would have amikacin resistance (FN). AUTHORS' CONCLUSIONS: Review findings suggest that, in people determined by Xpert MTB/XDR to be tuberculosis-positive, Xpert MTB/XDR provides accurate results for detection of isoniazid and fluoroquinolone resistance and can assist with selection of an optimised treatment regimen. Given that Xpert MTB/XDR targets a limited number of resistance variants in specific genes, the test may perform differently in different settings. Findings in this review should be interpreted with caution. Sensitivity for detection of ethionamide resistance was based only on Xpert MTB/XDR detection of mutations in the inhA promoter region, a known limitation. High risk of bias limits our confidence in Xpert MTB/XDR accuracy for pulmonary tuberculosis. Xpert MTB/XDR's impact will depend on its ability to detect tuberculosis (required for DST), prevalence of resistance to a given drug, health care infrastructure, and access to other tests.
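The per-1000 figures above are direct arithmetic on the summary sensitivity, specificity, and assumed number of resistant cases. A minimal sketch (the function name is illustrative, not part of the review), which reproduces the isoniazid and fluoroquinolone rows:

```python
def per_1000(sensitivity, specificity, resistant_per_1000, total=1000):
    """Return (test-positive, false positives, false negatives) per `total` people."""
    susceptible = total - resistant_per_1000
    tp = round(resistant_per_1000 * sensitivity)  # resistant, correctly flagged
    fp = round(susceptible * (1 - specificity))   # susceptible, wrongly flagged
    fn = resistant_per_1000 - tp                  # resistant, missed
    return tp + fp, fp, fn

# Isoniazid: summary sensitivity 94.2%, specificity 98.5%, 50/1000 resistant
print(per_1000(0.942, 0.985, 50))  # (61, 14, 3)
# Fluoroquinolones: sensitivity 93.2%, specificity 98.0%, 50/1000 resistant
print(per_1000(0.932, 0.980, 50))  # (66, 19, 3)
```

This matches the abstract: 61 flagged as isoniazid-resistant (14 false positives, 3 missed) and 66 flagged as fluoroquinolone-resistant (19 false positives, 3 missed).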


Subject(s)
Antibiotics, Antitubercular , Mycobacterium tuberculosis , Tuberculosis, Lymph Node , Tuberculosis, Multidrug-Resistant , Tuberculosis, Pulmonary , Adult , Amikacin/pharmacology , Amikacin/therapeutic use , Antibiotics, Antitubercular/pharmacology , Antibiotics, Antitubercular/therapeutic use , Drug Resistance, Bacterial/genetics , Ethionamide/pharmacology , Ethionamide/therapeutic use , Fluoroquinolones/pharmacology , Fluoroquinolones/therapeutic use , Humans , Isoniazid/pharmacology , Isoniazid/therapeutic use , Microbial Sensitivity Tests , Mycobacterium tuberculosis/genetics , Rifampin/pharmacology , Rifampin/therapeutic use , Sensitivity and Specificity , Tuberculosis, Lymph Node/diagnosis , Tuberculosis, Multidrug-Resistant/diagnosis , Tuberculosis, Multidrug-Resistant/drug therapy , Tuberculosis, Pulmonary/diagnosis , Tuberculosis, Pulmonary/drug therapy
12.
Cochrane Database Syst Rev ; 1: CD013334, 2022 01 28.
Article in English | MEDLINE | ID: mdl-35088407

ABSTRACT

BACKGROUND: Debates on effective and safe diets for managing obesity in adults are ongoing. Low-carbohydrate weight-reducing diets (also known as 'low-carb diets') continue to be widely promoted, marketed and commercialised as being more effective for weight loss, and healthier, than 'balanced'-carbohydrate weight-reducing diets. OBJECTIVES: To compare the effects of low-carbohydrate weight-reducing diets to weight-reducing diets with balanced ranges of carbohydrates, in relation to changes in weight and cardiovascular risk, in overweight and obese adults without and with type 2 diabetes mellitus (T2DM). SEARCH METHODS: We searched MEDLINE (PubMed), Embase (Ovid), the Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science Core Collection (Clarivate Analytics), ClinicalTrials.gov and WHO International Clinical Trials Registry Platform (ICTRP) up to 25 June 2021, and screened reference lists of included trials and relevant systematic reviews. Language or publication restrictions were not applied. SELECTION CRITERIA: We included randomised controlled trials (RCTs) in adults (18 years+) who were overweight or living with obesity, without or with T2DM, and without or with cardiovascular conditions or risk factors. Trials had to compare low-carbohydrate weight-reducing diets to balanced-carbohydrate (45% to 65% of total energy (TE)) weight-reducing diets, have a weight-reducing phase of 2 weeks or longer and be explicitly implemented for the primary purpose of reducing weight, with or without advice to restrict energy intake.  DATA COLLECTION AND ANALYSIS: Two review authors independently screened titles and abstracts and full-text articles to determine eligibility; and independently extracted data, assessed risk of bias using RoB 2 and assessed the certainty of the evidence using GRADE. 
We stratified analyses by participants without and with T2DM, and by diets with weight-reducing phases only and those with weight-reducing phases followed by weight-maintenance phases. Primary outcomes were change in body weight (kg) and the number of participants per group with weight loss of at least 5%, assessed at short- (three months to < 12 months) and long-term (≥ 12 months) follow-up. MAIN RESULTS: We included 61 parallel-arm RCTs that randomised 6925 participants to either low-carbohydrate or balanced-carbohydrate weight-reducing diets. All trials were conducted in high-income countries except for one in China. Most participants (n = 5118 randomised) did not have T2DM. Mean baseline weight across trials was 95 kg (range 66 to 132 kg). Participants with T2DM were older (mean 57 years, range 50 to 65) than those without T2DM (mean 45 years, range 22 to 62). Most trials included both men and women (42/61; of the remaining 19 trials, 3 recruited men only and 16 women only), and most included people without baseline cardiovascular conditions, risk factors or events (36/61). Mean baseline diastolic blood pressure (DBP) and low-density lipoprotein (LDL) cholesterol across trials were within normal ranges. The longest weight-reducing phase of diets was two years in participants without and with T2DM. Evidence from studies with weight-reducing phases followed by weight-maintenance phases was limited. Most trials investigated low-carbohydrate diets (> 50 g to 150 g per day or < 45% of TE; n = 42), followed by very low (≤ 50 g per day or < 10% of TE; n = 14), and then incremental increases from very low to low (n = 5). The most common diets compared were low-carbohydrate, balanced-fat (20 to 35% of TE) and high-protein (> 20% of TE) treatment diets versus control diets balanced for the three macronutrients (24/61). In most trials (45/61) the energy prescription or approach used to restrict energy intake was similar in both groups.
We assessed the overall risk of bias of outcomes across trials as predominantly high, mostly from bias due to missing outcome data. Using GRADE, we assessed the certainty of evidence as moderate to very low across outcomes. Participants without and with T2DM lost weight when following weight-reducing phases of both diets at the short (range: 0.33 to 12.2 kg) and long term (range: 1.7 to 13.1 kg). In overweight and obese participants without T2DM: low-carbohydrate weight-reducing diets compared to balanced-carbohydrate weight-reducing diets (weight-reducing phases only) probably result in little to no difference in change in body weight over three to 8.5 months (mean difference (MD) -1.07 kg, 95% confidence interval (CI) -1.55 to -0.59, I2 = 51%, 3286 participants, 37 RCTs, moderate-certainty evidence) and over one to two years (MD -0.93 kg, 95% CI -1.81 to -0.04, I2 = 40%, 1805 participants, 14 RCTs, moderate-certainty evidence); as well as change in DBP and LDL cholesterol over one to two years. The evidence is very uncertain about whether there is a difference in the number of participants per group with weight loss of at least 5% at one year (risk ratio (RR) 1.11, 95% CI 0.94 to 1.31, I2 = 17%, 137 participants, 2 RCTs, very low-certainty evidence). In overweight and obese participants with T2DM: low-carbohydrate weight-reducing diets compared to balanced-carbohydrate weight-reducing diets (weight-reducing phases only) probably result in little to no difference in change in body weight over three to six months (MD -1.26 kg, 95% CI -2.44 to -0.09, I2 = 47%, 1114 participants, 14 RCTs, moderate-certainty evidence) and over one to two years (MD -0.33 kg, 95% CI -2.13 to 1.46, I2 = 10%, 813 participants, 7 RCTs, moderate-certainty evidence); as well as change in DBP, HbA1c and LDL cholesterol over one to two years.
The evidence is very uncertain about whether there is a difference in the number of participants per group with weight loss of at least 5% at one to two years (RR 0.90, 95% CI 0.68 to 1.20, I2 = 0%, 106 participants, 2 RCTs, very low-certainty evidence).  Evidence on participant-reported adverse effects was limited, and we could not draw any conclusions about these.  AUTHORS' CONCLUSIONS: There is probably little to no difference in weight reduction and changes in cardiovascular risk factors up to two years' follow-up, when overweight and obese participants without and with T2DM are randomised to either low-carbohydrate or balanced-carbohydrate weight-reducing diets.
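The pooled mean differences quoted above come from standard inverse-variance meta-analysis (the review used RoB 2 and GRADE alongside it; Cochrane reviews typically run this in Review Manager). A generic sketch of the formula with made-up trial data, so the numbers below are illustrative only:

```python
import math

def pool_fixed(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences.

    Returns (pooled MD, 95% CI low, 95% CI high, I^2 heterogeneity fraction)."""
    w = [1 / se**2 for se in ses]  # weight each trial by 1/variance
    pooled = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))
    # Cochran's Q and the derived I^2 statistic
    q = sum(wi * (md - pooled) ** 2 for wi, md in zip(w, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled, i2

# Two hypothetical trials: MD -1.0 kg (SE 0.5) and MD -2.0 kg (SE 1.0)
md, lo, hi, i2 = pool_fixed([-1.0, -2.0], [0.5, 1.0])
print(round(md, 2), round(lo, 2), round(hi, 2))  # -1.2 -2.08 -0.32
```

The more precise trial (smaller SE) dominates the pooled estimate, which is why large trials drive results like the MD -1.07 kg above.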


Subject(s)
Diet, Carbohydrate-Restricted , Energy Intake , Adult , Body Weight , Carbohydrates , Female , Heart Disease Risk Factors , Humans , Male
13.
Cochrane Database Syst Rev ; 8: CD014641, 2021 08 20.
Article in English | MEDLINE | ID: mdl-34416013

ABSTRACT

BACKGROUND: Tuberculosis is the primary cause of hospital admission in people living with HIV, and the likelihood of death in the hospital is unacceptably high. The Alere Determine TB LAM Ag test (AlereLAM) is a point-of-care test and the only lateral flow lipoarabinomannan (LF-LAM) assay currently commercially available and recommended by the World Health Organization (WHO). A 2019 Cochrane Review summarised the diagnostic accuracy of LF-LAM for tuberculosis in people living with HIV. This systematic review assesses the impact of the use of LF-LAM (AlereLAM) on mortality and other patient-important outcomes. OBJECTIVES: To assess the impact of the use of LF-LAM (AlereLAM) on mortality in adults living with HIV in inpatient and outpatient settings. To assess the impact of the use of LF-LAM (AlereLAM) on other patient-important outcomes in adults living with HIV, including time to diagnosis of tuberculosis, and time to initiation of tuberculosis treatment. SEARCH METHODS: We searched the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (PubMed); Embase (Ovid); Science Citation Index Expanded (Web of Science), BIOSIS Previews, Scopus, LILACS; ProQuest Dissertations and Theses; ClinicalTrials.gov; and the WHO ICTRP up to 12 March 2021. SELECTION CRITERIA: Randomized controlled trials that compared a diagnostic intervention including LF-LAM with diagnostic strategies that used smear microscopy, mycobacterial culture, a nucleic acid amplification test such as Xpert MTB/RIF, or a combination of these tests. We included adults (≥ 15 years) living with HIV. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed trials for eligibility, extracted data, and analysed risk of bias using the Cochrane tool for assessing risk of bias in randomized studies. We contacted study authors for clarification as needed. We used risk ratio (RR) with 95% confidence intervals (CI).
We used a fixed-effect model except in the presence of clinical or statistical heterogeneity, in which case we used a random-effects model. We assessed the certainty of the evidence using GRADE. MAIN RESULTS: We included three trials, two in inpatient settings and one in outpatient settings. All trials were conducted in sub-Saharan Africa and assessed the impact of diagnostic strategies that included LF-LAM on mortality when the test was used in conjunction with other tuberculosis diagnostic tests or clinical assessment for clinical decision-making in adults living with HIV. Inpatient settings  In inpatient settings, the use of LF-LAM testing as part of a tuberculosis diagnostic strategy likely reduces mortality in people living with HIV at eight weeks compared to routine tuberculosis diagnostic testing without LF-LAM (pooled RR 0.85, 95% CI 0.76 to 0.94; 5102 participants, 2 trials; moderate-certainty evidence). That is, people living with HIV who received LF-LAM had 15% lower risk of mortality. The absolute effect was 34 fewer deaths per 1000 (from 14 fewer to 55 fewer). In inpatient settings, the use of LF-LAM testing as part of a tuberculosis diagnostic strategy probably results in a slight increase in the proportion of people living with HIV who were started on tuberculosis treatment compared to routine tuberculosis diagnostic testing without LF-LAM (pooled RR 1.26, 95% CI 0.94 to 1.69; 5102 participants, 2 trials; moderate-certainty evidence).  Outpatient settings In outpatient settings, the use of LF-LAM testing as part of a tuberculosis diagnostic strategy may reduce mortality in people living with HIV at six months compared to routine tuberculosis diagnostic testing without LF-LAM (RR 0.89, 95% CI 0.71 to 1.11; 2972 participants, 1 trial; low-certainty evidence). Although this trial did not detect a difference in mortality, the direction of effect was towards a mortality reduction, and the effect size was similar to that in inpatient settings.  
In outpatient settings, the use of LF-LAM testing as part of a tuberculosis diagnostic strategy may result in a large increase in the proportion of people living with HIV who were started on tuberculosis treatment compared to routine tuberculosis diagnostic testing without LF-LAM (RR 5.44, 95% CI 4.70 to 6.29, 3022 participants, 1 trial; low-certainty evidence). Other patient-important outcomes Assessment of other patient-important and implementation outcomes in the trials varied. The included trials demonstrated that a higher proportion of people living with HIV were able to produce urine compared to sputum for tuberculosis diagnostic testing; a higher proportion of people living with HIV were diagnosed with tuberculosis in the group that received LF-LAM; and the incremental diagnostic yield was higher for LF-LAM than for urine or sputum Xpert MTB/RIF. AUTHORS' CONCLUSIONS: In inpatient settings, the use of LF-LAM as part of a tuberculosis diagnostic testing strategy likely reduces mortality and probably results in a slight increase in tuberculosis treatment initiation in people living with HIV. The reduction in mortality may be due to earlier diagnosis, which facilitates prompt treatment initiation. In outpatient settings, the use of LF-LAM testing as part of a tuberculosis diagnostic strategy may reduce mortality and may result in a large increase in tuberculosis treatment initiation in people living with HIV. Our results support the implementation of LF-LAM to be used in conjunction with other WHO-recommended tuberculosis diagnostic tests to assist in the rapid diagnosis of tuberculosis in people living with HIV.
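The absolute effect quoted for the inpatient comparison ("34 fewer deaths per 1000") follows from applying the risk ratio to a control-group risk. A sketch, assuming a control-arm 8-week mortality of about 227 per 1000 (back-calculated from the abstract's figures, not a reported number):

```python
def fewer_per_1000(control_risk, rr):
    """Absolute risk reduction per 1000 people implied by a risk ratio."""
    return round(1000 * control_risk * (1 - rr))

control_risk = 0.227  # assumed mortality without LF-LAM (back-calculated, illustrative)
print(fewer_per_1000(control_risk, 0.85))  # 34 fewer deaths per 1000 (point estimate)
print(fewer_per_1000(control_risk, 0.94))  # 14 fewer (at the upper CI bound)
```

The same RR therefore translates into different absolute benefits in settings with different baseline mortality, which is why the abstract reports both scales.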


Subject(s)
Antibiotics, Antitubercular , HIV Infections , Mycobacterium tuberculosis , Tuberculosis, Pulmonary , Tuberculosis , Adult , Antibiotics, Antitubercular/therapeutic use , HIV Infections/complications , HIV Infections/drug therapy , Humans , Lipopolysaccharides , Rifampin , Sensitivity and Specificity , Tuberculosis/diagnosis , Tuberculosis/drug therapy , Tuberculosis, Pulmonary/drug therapy
14.
Cochrane Database Syst Rev ; 5: CD013235, 2021 05 04.
Article in English | MEDLINE | ID: mdl-34097767

ABSTRACT

BACKGROUND: Rapid antimicrobial susceptibility tests are expected to reduce the time to clinically important results of a blood culture. This might enable clinicians to better target therapy to a person's needs, thereby improving health outcomes (mortality, length of hospital stay) and reducing unnecessary prescribing of broad-spectrum antibiotics, which in turn would reduce antimicrobial resistance rates. OBJECTIVES: To assess the effects of rapid susceptibility testing versus standard susceptibility testing for bloodstream infections (BSIs). SEARCH METHODS: To identify studies with selected outcomes, we searched the Cochrane Infectious Diseases Group Specialised Register, CENTRAL, MEDLINE, LILACS, and two trials registries, between 1987 and October 2020. We used 'bloodstream infection' and 'antimicrobial susceptibility tests' as search terms. We had no language or publication status limitations. SELECTION CRITERIA: Randomized controlled trials (RCTs) comparing rapid antimicrobial susceptibility testing (with a time-to-result of ≤ 8 hours) versus conventional antimicrobial susceptibility testing in people with a BSI caused by any bacteria, as identified by a positive blood culture. DATA COLLECTION AND ANALYSIS: Two review authors independently screened references and full-text reports of potentially relevant studies, extracted data from the studies, and assessed risk of bias. Any disagreement was discussed and resolved with a third review author. For mortality, a dichotomous outcome, we extracted the number of events in each arm, and presented a risk ratio (RR) with 95% confidence interval (CI) to compare rapid susceptibility testing to conventional methods. We used Review Manager 5.4 to meta-analyse the data. For other outcomes, which are time-to-event outcomes (time-to-discharge from hospital, time-to-first appropriate antibiotic change), we conducted qualitative narrative synthesis, due to heterogeneity of outcome measures.
MAIN RESULTS: We included six trials, with 1638 participants. For rapid antimicrobial susceptibility testing compared to conventional methods, there was little or no difference in mortality between groups (RR 1.10, 95% CI 0.82 to 1.46; 6 RCTs, 1638 participants; low-certainty evidence). In subgroup analysis, for rapid genotypic or molecular antimicrobial susceptibility testing compared to conventional methods, there was little or no difference in mortality between groups (RR 1.02, 95% CI 0.69 to 1.49; 4 RCTs, 1074 participants; low-certainty evidence). For phenotypic rapid susceptibility testing compared to conventional methods, there was little or no difference in mortality between groups (RR 1.37, 95% CI 0.80 to 2.35; 2 RCTs, 564 participants; low-certainty evidence). In qualitative analysis, rapid susceptibility testing may make little or no difference in time-to-discharge (4 RCTs, 1165 participants; low-certainty evidence). In qualitative analysis, rapid genotypic susceptibility testing compared to conventional testing may make little or no difference in time-to-appropriate antibiotic (3 RCTs, 929 participants; low-certainty evidence). In subgroup analysis, rapid phenotypic susceptibility testing compared to conventional testing may improve time-to-appropriate antibiotic (MD -17.29, 95% CI -45.05 to 10.47; 2 RCTs, 564 participants; low-certainty evidence). AUTHORS' CONCLUSIONS: The theoretical benefits of rapid susceptibility testing have not been demonstrated to directly improve mortality, time-to-discharge, or time-to-appropriate antibiotic in these randomized studies. Future large prospective studies should be designed to focus on the most clinically meaningful outcomes, and aim to optimize blood culture pathways.
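A per-trial risk ratio like those above is computed from the event counts in each arm, with the 95% CI taken on the log scale. A generic sketch with illustrative counts (the review reports only the pooled RRs, not the underlying 2x2 tables):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of arm A vs arm B with a 95% CI (log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # standard error of log(RR) from the 2x2 counts
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical arms: 50/100 events vs 25/100 events
rr, lo, hi = risk_ratio(50, 100, 25, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 2.0 1.35 2.96
```

A CI that spans 1 (as in several comparisons above) is what underlies the "little or no difference" wording.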


Subject(s)
Anti-Bacterial Agents/therapeutic use , Microbial Sensitivity Tests/methods , Sepsis/drug therapy , Bias , Humans , Odds Ratio , Randomized Controlled Trials as Topic , Sepsis/microbiology , Sepsis/mortality , Time-to-Treatment
15.
Cochrane Database Syst Rev ; 5: CD012776, 2021 05 24.
Article in English | MEDLINE | ID: mdl-34027998

ABSTRACT

BACKGROUND: Pyrethroid long-lasting insecticidal nets (LLINs) have been important in the large reductions in malaria cases in Africa, but insecticide resistance in Anopheles mosquitoes threatens their impact. Insecticide synergists may help control insecticide-resistant populations. Piperonyl butoxide (PBO) is such a synergist; it has been incorporated into pyrethroid-LLINs to form pyrethroid-PBO nets, which are currently produced by five LLIN manufacturers and, following a recommendation from the World Health Organization (WHO) in 2017, are being included in distribution campaigns. This review examines the epidemiological and entomological evidence on whether adding PBO to pyrethroid nets improves their efficacy. OBJECTIVES: To compare effects of pyrethroid-PBO nets currently in commercial development or on the market with effects of their non-PBO equivalent in relation to: 1. malaria parasite infection (prevalence or incidence); and 2. entomological outcomes. SEARCH METHODS: We searched the Cochrane Infectious Diseases Group (CIDG) Specialized Register, CENTRAL, MEDLINE, Embase, Web of Science, CAB Abstracts, and two clinical trial registers (ClinicalTrials.gov and WHO International Clinical Trials Registry Platform) up to 25 September 2020. We contacted organizations for unpublished data. We checked the reference lists of trials identified by these methods. SELECTION CRITERIA: We included experimental hut trials, village trials, and randomized controlled trials (RCTs) with mosquitoes from the Anopheles gambiae complex or the Anopheles funestus group. DATA COLLECTION AND ANALYSIS: Two review authors assessed each trial for eligibility, extracted data, and determined the risk of bias for included trials. We resolved disagreements through discussion with a third review author. We analysed data using Review Manager 5 and assessed the certainty of evidence using the GRADE approach.
MAIN RESULTS: Sixteen trials met the inclusion criteria: 10 experimental hut trials, four village trials, and two cluster-RCTs (cRCTs). Three trials are awaiting classification, and four trials are ongoing.  Two cRCTs examined the effects of pyrethroid-PBO nets on parasite prevalence in people living in areas with highly pyrethroid-resistant mosquitoes (< 30% mosquito mortality in discriminating dose assays). At 21 to 25 months post intervention, parasite prevalence was lower in the intervention arm (odds ratio (OR) 0.79, 95% confidence interval (CI) 0.67 to 0.95; 2 trials, 2 comparisons; moderate-certainty evidence). In highly pyrethroid-resistant areas, unwashed pyrethroid-PBO nets led to higher mosquito mortality compared to unwashed standard-LLINs (risk ratio (RR) 1.84, 95% CI 1.60 to 2.11; 14,620 mosquitoes, 5 trials, 9 comparisons; high-certainty evidence) and lower blood feeding success (RR 0.60, 95% CI 0.50 to 0.71; 14,000 mosquitoes, 4 trials, 8 comparisons; high-certainty evidence). However, in comparisons of washed pyrethroid-PBO nets to washed LLINs, we do not know if PBO nets had a greater effect on mosquito mortality (RR 1.20, 95% CI 0.88 to 1.63; 10,268 mosquitoes, 4 trials, 5 comparisons; very low-certainty evidence), although the washed pyrethroid-PBO nets did decrease blood-feeding success compared to standard-LLINs (RR 0.81, 95% CI 0.72 to 0.92; 9674 mosquitoes, 3 trials, 4 comparisons; high-certainty evidence). In areas where pyrethroid resistance is moderate (31% to 60% mosquito mortality), mosquito mortality was higher with unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs (RR 1.68, 95% CI 1.33 to 2.11; 751 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence), but there was little to no difference in effects on blood-feeding success (RR 0.90, 95% CI 0.72 to 1.11; 652 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence). 
For washed pyrethroid-PBO nets compared to washed standard-LLINs, we found little to no evidence for higher mosquito mortality or reduced blood feeding (mortality: RR 1.07, 95% CI 0.74 to 1.54; 329 mosquitoes, 1 trial, 1 comparison, low-certainty evidence; blood feeding success: RR 0.91, 95% CI 0.74 to 1.13; 329 mosquitoes, 1 trial, 1 comparison; low-certainty evidence). In areas where pyrethroid resistance is low (61% to 90% mosquito mortality), studies reported little to no difference in the effects of unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs on mosquito mortality (RR 1.25, 95% CI 0.99 to 1.57; 948 mosquitoes, 2 trials, 3 comparisons; moderate-certainty evidence), and we do not know if there was any effect on blood-feeding success (RR 0.75, 95% CI 0.27 to 2.11; 948 mosquitoes, 2 trials, 3 comparisons; very low-certainty evidence). For washed pyrethroid-PBO nets compared to washed standard-LLINs, we do not know if there was any difference in mosquito mortality (RR 1.39, 95% CI 0.95 to 2.04; 1022 mosquitoes, 2 trials, 3 comparisons; very low-certainty evidence) or on blood feeding (RR 1.07, 95% CI 0.49 to 2.33; 1022 mosquitoes, 2 trials, 3 comparisons; low-certainty evidence). In areas where mosquito populations are susceptible to insecticides (> 90% mosquito mortality), there may be little to no difference in the effects of unwashed pyrethroid-PBO nets compared to unwashed standard-LLINs on mosquito mortality (RR 1.20, 95% CI 0.64 to 2.26; 2791 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence). This is similar for washed nets (RR 1.07, 95% CI 0.92 to 1.25; 2644 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence). We do not know if unwashed pyrethroid-PBO nets had any effect on the blood-feeding success of susceptible mosquitoes (RR 0.52, 95% CI 0.12 to 2.22; 2791 mosquitoes, 2 trials, 2 comparisons; very low-certainty evidence). 
The same applies to washed nets (RR 1.25, 95% CI 0.82 to 1.91; 2644 mosquitoes, 2 trials, 2 comparisons; low-certainty evidence). In village trials comparing pyrethroid-PBO nets to LLINs, there was no difference in sporozoite rate (4 trials, 5 comparisons) or in mosquito parity (3 trials, 4 comparisons). AUTHORS' CONCLUSIONS: In areas of high insecticide resistance, pyrethroid-PBO nets have greater entomological and epidemiological efficacy compared to standard LLINs, with sustained reduction in parasite prevalence, higher mosquito mortality and reduction in mosquito blood feeding rates 21 to 25 months post intervention. Questions remain about the durability of PBO on nets, as the impact of pyrethroid-PBO nets on mosquito mortality was not sustained over 20 washes in experimental hut trials, and epidemiological data on pyrethroid-PBO nets for the full intended three-year life span of the nets are not available. Little evidence is available to support greater entomological efficacy of pyrethroid-PBO nets in areas where mosquitoes show lower levels of resistance to pyrethroids.


Subject(s)
Insecticide Resistance/drug effects , Insecticide-Treated Bednets , Malaria/prevention & control , Mosquito Control/methods , Pesticide Synergists , Piperonyl Butoxide , Pyrethrins , Africa/epidemiology , Animals , Culicidae , Drug Combinations , Feeding Behavior , Humans , Malaria/epidemiology , Mortality , Randomized Controlled Trials as Topic
16.
Cochrane Database Syst Rev ; 3: CD010383, 2021 03 18.
Article in English | MEDLINE | ID: mdl-33734432

ABSTRACT

BACKGROUND: Epidermal growth factor receptor (EGFR) mutation positive (M+) non-small cell lung cancer (NSCLC) is an important subtype of lung cancer comprising 10% to 15% of non-squamous tumours. This subtype is more common in women than men, is less associated with smoking, but occurs at a younger age than sporadic tumours. OBJECTIVES: To assess the clinical effectiveness of single-agent or combination EGFR therapies used in the first-line treatment of people with locally advanced or metastatic EGFR M+ NSCLC compared with other cytotoxic chemotherapy (CTX) agents used alone or in combination, or best supportive care (BSC). The primary outcomes were overall survival and progression-free survival. Secondary outcomes included response rate, symptom palliation, toxicity, and health-related quality of life. SEARCH METHODS: We conducted electronic searches of the Cochrane Register of Controlled Trials (CENTRAL) (2020, Issue 7), MEDLINE (1946 to 27th July 2020), Embase (1980 to 27th July 2020), and ISI Web of Science (1899 to 27th July 2020). We also searched the conference abstracts of the American Society for Clinical Oncology and the European Society for Medical Oncology (July 2020); Evidence Review Group submissions to the National Institute for Health and Care Excellence; and the reference lists of retrieved articles. SELECTION CRITERIA: Parallel-group randomised controlled trials comparing EGFR-targeted agents (alone or in combination with cytotoxic agents or BSC) with cytotoxic chemotherapy (single or doublet) or BSC in chemotherapy-naive patients with locally advanced or metastatic (stage IIIB or IV) EGFR M+ NSCLC unsuitable for treatment with curative intent. DATA COLLECTION AND ANALYSIS: Two review authors independently identified articles, extracted data, and carried out the 'Risk of bias' assessment. 
We conducted meta-analyses using a fixed-effect model unless there was substantial heterogeneity, in which case we also performed a random-effects analysis as a sensitivity analysis. MAIN RESULTS: Twenty-two trials met the inclusion criteria. Ten of these exclusively recruited people with EGFR M+ NSCLC; the remainder recruited a mixed population and reported results for people with EGFR M+ NSCLC as subgroup analyses. The number of participants with EGFR M+ tumours totalled 3023, of whom approximately 2563 were of Asian origin. Overall survival (OS) data showed inconsistent results between the included trials that compared EGFR-targeted treatments against cytotoxic chemotherapy or placebo. Erlotinib was used in eight trials, gefitinib in nine trials, afatinib in two trials, cetuximab in two trials, and icotinib in one trial. The findings of FASTACT 2 suggested a clinical benefit for OS for participants treated with erlotinib plus cytotoxic chemotherapy when compared to cytotoxic chemotherapy alone, as did the Han 2017 trial for gefitinib plus cytotoxic chemotherapy, but both results were based on a small number of participants (n = 97 and 122, respectively). For progression-free survival (PFS), a pooled analysis of four trials showed evidence of clinical benefit for erlotinib compared with cytotoxic chemotherapy (hazard ratio (HR) 0.31; 95% confidence interval (CI) 0.25 to 0.39; 583 participants; high-certainty evidence). A pooled analysis of two trials of gefitinib versus paclitaxel plus carboplatin showed evidence of clinical benefit for PFS for gefitinib (HR 0.39; 95% CI 0.32 to 0.48; 491 participants; high-certainty evidence), and a pooled analysis of two trials of gefitinib versus pemetrexed plus carboplatin with pemetrexed maintenance also showed evidence of clinical benefit for PFS for gefitinib (HR 0.59; 95% CI 0.46 to 0.74; 371 participants; moderate-certainty evidence).
Afatinib showed evidence of clinical benefit for PFS when compared with chemotherapy in a pooled analysis of two trials (HR 0.42; 95% CI 0.34 to 0.53; 709 participants; high-certainty evidence). All but one small trial showed a corresponding improvement in response rate with tyrosine-kinase inhibitor (TKI) compared to chemotherapy. Commonly reported grade 3/4 adverse events associated with afatinib, erlotinib, gefitinib and icotinib monotherapy were rash and diarrhoea. Myelosuppression was consistently worse in the chemotherapy arms; fatigue and anorexia were also associated with some chemotherapies. Seven trials reported on health-related quality of life and symptom improvement using different methodologies. For each of erlotinib, gefitinib, and afatinib, two trials showed improvement in one or more indices for the TKI compared to chemotherapy. The quality of evidence was high for the comparisons of erlotinib and gefitinib with cytotoxic chemotherapy and for the comparison of afatinib with cytotoxic chemotherapy. AUTHORS' CONCLUSIONS: Erlotinib, gefitinib, afatinib and icotinib are all active agents in EGFR M+ NSCLC patients, and demonstrate an increased tumour response rate and prolonged PFS compared to cytotoxic chemotherapy. We found a beneficial effect of the TKI compared to cytotoxic chemotherapy in terms of adverse effects and health-related quality of life. We found limited evidence for increased OS for the TKI when compared with standard chemotherapy, but the majority of the included trials allowed participants to switch treatments on disease progression, which will have a confounding effect on any OS analysis. Single-agent TKI remains the standard of care and the benefit of combining a TKI and chemotherapy remains uncertain as the evidence is based on small patient numbers. Cytotoxic chemotherapy is less effective in EGFR M+ NSCLC than erlotinib, gefitinib, afatinib or icotinib and is associated with greater toxicity.
There are no data supporting the use of monoclonal antibody therapy. Icotinib is not available outside China.
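The pooled hazard ratios above come from inverse-variance meta-analysis; the abstract notes a fixed-effect model was used unless heterogeneity was substantial. The sketch below illustrates that fixed-effect step, pooling per-trial hazard ratios on the log scale, with standard errors back-calculated from the reported 95% CIs. The trial HRs and CIs here are made-up numbers for illustration, not the review's data.

```python
import math

def fixed_effect_pool(hrs, cis):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    hrs: per-trial hazard ratios
    cis: per-trial (lower, upper) 95% CI bounds
    SE(log HR) is recovered from the CI width on the log scale.
    """
    z = 1.96  # normal quantile for a 95% CI
    log_hrs = [math.log(hr) for hr in hrs]
    # SE(log HR) = (log(upper) - log(lower)) / (2 * 1.96)
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled_log = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    pooled = math.exp(pooled_log)
    ci = (math.exp(pooled_log - z * pooled_se),
          math.exp(pooled_log + z * pooled_se))
    return pooled, ci

# Illustrative (hypothetical) per-trial HRs and CIs
pooled, ci = fixed_effect_pool([0.30, 0.35], [(0.22, 0.41), (0.26, 0.47)])
```

The pooled estimate is a weighted average of the log HRs, so it always lands between the per-trial values, with a narrower CI than any single trial.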


Subject(s)
Antineoplastic Agents/therapeutic use; Carcinoma, Non-Small-Cell Lung/drug therapy; ErbB Receptors/genetics; Lung Neoplasms/drug therapy; Mutation; Afatinib/adverse effects; Afatinib/therapeutic use; Aged; Antineoplastic Agents/adverse effects; Antineoplastic Combined Chemotherapy Protocols/therapeutic use; Bias; Carboplatin/therapeutic use; Carcinoma, Non-Small-Cell Lung/genetics; Carcinoma, Non-Small-Cell Lung/mortality; Cetuximab/adverse effects; Cetuximab/therapeutic use; Crown Ethers/adverse effects; Crown Ethers/therapeutic use; Erlotinib Hydrochloride/adverse effects; Erlotinib Hydrochloride/therapeutic use; Female; Gefitinib/adverse effects; Gefitinib/therapeutic use; Humans; Lung Neoplasms/genetics; Lung Neoplasms/mortality; Male; Middle Aged; Paclitaxel/therapeutic use; Pemetrexed/therapeutic use; Progression-Free Survival; Protein Kinase Inhibitors/adverse effects; Protein Kinase Inhibitors/therapeutic use; Quality of Life; Quinazolines/adverse effects; Quinazolines/therapeutic use; Randomized Controlled Trials as Topic
17.
Cochrane Database Syst Rev ; 2: CD013587, 2021 02 12.
Article in English | MEDLINE | ID: mdl-33624299

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic has resulted in substantial mortality. Some specialists proposed chloroquine (CQ) and hydroxychloroquine (HCQ) for treating or preventing the disease. The efficacy and safety of these drugs have been assessed in randomized controlled trials. OBJECTIVES: To evaluate the effects of chloroquine (CQ) or hydroxychloroquine (HCQ) for 1) treating people with COVID-19 on death and time to clearance of the virus; 2) preventing infection in people at risk of SARS-CoV-2 exposure; 3) preventing infection in people exposed to SARS-CoV-2. SEARCH METHODS: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Current Controlled Trials (www.controlled-trials.com), and the COVID-19-specific resources www.covid-nma.com and covid-19.cochrane.org, for studies of any publication status and in any language. We performed all searches up to 15 September 2020. We contacted researchers to identify unpublished and ongoing studies. SELECTION CRITERIA: We included randomized controlled trials (RCTs) testing chloroquine or hydroxychloroquine in people with COVID-19, people at risk of COVID-19 exposure, and people exposed to COVID-19. Adverse events (any, serious, and QT-interval prolongation on electrocardiogram) were also extracted. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed eligibility of search results, extracted data from the included studies, and assessed risk of bias using the Cochrane 'Risk of bias' tool. We contacted study authors for clarification and additional data for some studies. We used risk ratios (RR) for dichotomous outcomes and mean differences (MD) for continuous outcomes, with 95% confidence intervals (CIs). We performed meta-analysis using a random-effects model for outcomes where pooling of effect estimates was appropriate. MAIN RESULTS: 1. Treatment of COVID-19 disease We included 12 trials involving 8569 participants, all of whom were adults. 
Studies were from China (4); Brazil, Egypt, Iran, Spain, Taiwan, the UK, and North America (each 1 study); and a global study in 30 countries (1 study). Nine trials were conducted in hospitalized patients and three in ambulatory care. Disease severity, prevalence of comorbidities, and use of co-interventions varied substantially between trials. We found potential risks of bias across all domains for several trials. Nine trials compared HCQ with standard care (7779 participants), and one compared HCQ with placebo (491 participants); dosing schedules varied. HCQ makes little or no difference to death due to any cause (RR 1.09, 95% CI 0.99 to 1.19; 8208 participants; 9 trials; high-certainty evidence). A sensitivity analysis using modified intention-to-treat results from three trials did not influence the pooled effect estimate. HCQ may make little or no difference to the proportion of people having negative PCR for SARS-CoV-2 on respiratory samples at day 14 from enrolment (RR 1.00, 95% CI 0.91 to 1.10; 213 participants; 3 trials; low-certainty evidence). HCQ probably results in little to no difference in progression to mechanical ventilation (RR 1.11, 95% CI 0.91 to 1.37; 4521 participants; 3 trials; moderate-certainty evidence). HCQ probably results in an almost three-fold increased risk of adverse events (RR 2.90, 95% CI 1.49 to 5.64; 1394 participants; 6 trials; moderate-certainty evidence), but may make little or no difference to the risk of serious adverse events (RR 0.82, 95% CI 0.37 to 1.79; 1004 participants; 6 trials; low-certainty evidence). We are very uncertain about the effect of HCQ on time to clinical improvement or risk of prolongation of QT-interval on electrocardiogram (very low-certainty evidence). One trial (22 participants) randomized patients to CQ versus lopinavir/ritonavir, a drug with unknown efficacy against SARS-CoV-2, and did not report any difference for clinical recovery or adverse events.
One trial compared HCQ combined with azithromycin against standard care (444 participants). This trial did not detect a difference in death, requirement for mechanical ventilation, length of hospital admission, or serious adverse events. A higher risk of adverse events was reported in the HCQ-and-azithromycin arm; this included QT-interval prolongation, when measured. One trial compared HCQ with febuxostat, another drug with unknown efficacy against SARS-CoV-2 (60 participants). There was no difference detected in risk of hospitalization or change in computed tomography (CT) scan appearance of the lungs; no deaths were reported. 2. Preventing COVID-19 disease in people at risk of exposure to SARS-CoV-2 Ongoing trials are yet to report results for this objective. 3. Preventing COVID-19 disease in people who have been exposed to SARS-CoV-2 One trial (821 participants) compared HCQ with placebo as a prophylactic agent in the USA (around 90% of participants) and Canada. Asymptomatic adults (66% healthcare workers; mean age 40 years; 73% without comorbidity) with a history of exposure to people with confirmed COVID-19 were recruited. We are very uncertain about the effect of HCQ on the primary outcomes, for which few events were reported: 20/821 (2.4%) developed confirmed COVID-19 at 14 days from enrolment, and 2/821 (0.2%) were hospitalized due to COVID-19 (very low-certainty evidence). HCQ probably increases the risk of adverse events compared with placebo (RR 2.39, 95% CI 1.83 to 3.11; 700 participants; 1 trial; moderate-certainty evidence). HCQ may result in little or no difference in serious adverse events (no RR: no participants experienced serious adverse events; low-certainty evidence). One cluster-randomized trial (2525 participants) compared HCQ with standard care for the prevention of COVID-19 in people with a history of exposure to SARS-CoV-2 in Spain. Most participants were working or residing in nursing homes; mean age was 49 years. 
There was no difference in the risk of symptomatic confirmed COVID-19 or production of antibodies to SARS-CoV-2 between the two study arms. AUTHORS' CONCLUSIONS: HCQ for people infected with COVID-19 has little or no effect on the risk of death and probably no effect on progression to mechanical ventilation. Adverse events are tripled compared to placebo, but very few serious adverse events were found. No further trials of hydroxychloroquine or chloroquine for treatment should be carried out. These results make it less likely that the drug is effective in protecting people from infection, although this is not excluded entirely. It is probably sensible to complete trials examining prevention of infection, and ensure these are carried out to a high standard to provide unambiguous results.
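The risk ratios above were pooled with a random-effects model, as stated in the abstract's methods. A minimal sketch of the DerSimonian-Laird approach that phrase conventionally denotes is below; the event counts are illustrative, not trial data, and a real synthesis should use vetted tools such as RevMan or R's metafor.

```python
import math

def dersimonian_laird_rr(trials):
    """DerSimonian-Laird random-effects pooling of risk ratios.

    trials: list of (a, n1, c, n2) per trial -- events/total in the
    treatment and control arms respectively.
    """
    logs, ses = [], []
    for a, n1, c, n2 in trials:
        rr = (a / n1) / (c / n2)
        logs.append(math.log(rr))
        # delta-method standard error of log RR
        ses.append(math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2))
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q and the DL between-trial variance tau^2
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    k = len(logs)
    c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c_term)
    # re-weight each trial with the between-trial variance added in
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * li for wi, li in zip(w_star, logs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se),
                              math.exp(pooled + 1.96 * se))

# Hypothetical adverse-event counts for two trials
rr, ci = dersimonian_laird_rr([(40, 300, 20, 310), (55, 400, 22, 390)])
```

When the between-trial variance tau^2 is zero, the weights reduce to the fixed-effect weights; otherwise trials are weighted more evenly and the pooled CI widens.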


Subject(s)
Antimalarials/therapeutic use; COVID-19 Drug Treatment; COVID-19/prevention & control; Chloroquine/therapeutic use; Hydroxychloroquine/therapeutic use; SARS-CoV-2; Adult; Aged; Antimalarials/adverse effects; Antiviral Agents/adverse effects; Antiviral Agents/therapeutic use; Bias; COVID-19/epidemiology; COVID-19/mortality; COVID-19 Nucleic Acid Testing/statistics & numerical data; Cause of Death; Chloroquine/adverse effects; Humans; Hydroxychloroquine/adverse effects; Middle Aged; Pandemics; Prognosis; Randomized Controlled Trials as Topic; Respiration, Artificial/statistics & numerical data; Standard of Care; Treatment Outcome
18.
PLoS Med ; 17(9): e1003344, 2020 09.
Article in English | MEDLINE | ID: mdl-32956352

ABSTRACT

BACKGROUND: Large sample sizes are often required to detect statistically significant associations between pharmacogenetic markers and treatment response. Meta-analysis may be performed to synthesize data from several studies, increasing sample size and, consequently, power to detect significant genetic effects. However, performing robust synthesis of data from pharmacogenetic studies is often challenging because of poor reporting of key data in study reports. There is currently no guideline for the reporting of pharmacogenetic studies that has been developed using a widely accepted robust methodology. The objective of this project was to develop the STrengthening the Reporting Of Pharmacogenetic Studies (STROPS) guideline. METHODS AND FINDINGS: We established a preliminary checklist of reporting items to be considered for inclusion in the guideline. We invited representatives of key stakeholder groups to participate in a 2-round Delphi survey. A total of 52 individuals participated in both rounds of the survey, scoring items with regards to their importance for inclusion in the STROPS guideline. We then held a consensus meeting, at which 8 individuals considered the results of the Delphi survey and voted on whether each item ought to be included in the final guideline. The STROPS guideline consists of 54 items and is accompanied by an explanation and elaboration document. The guideline contains items that are particularly important in the field of pharmacogenetics, such as the drug regimen of interest and whether adherence to treatment was accounted for in the conducted analyses. The guideline also requires that outcomes be clearly defined and justified, because in pharmacogenetic studies, there may be a greater number of possible outcomes than in other types of study (for example, disease-gene association studies). 
A limitation of this project is that our consensus meeting involved a small number of individuals, the majority of whom are based in the United Kingdom. CONCLUSIONS: Our aim is for the STROPS guideline to improve the transparency of reporting of pharmacogenetic studies and also to facilitate the conduct of high-quality systematic reviews and meta-analyses. We encourage authors to adhere to the STROPS guideline when publishing pharmacogenetic studies.


Subject(s)
Pharmacogenetics/methods; Pharmacogenomic Testing/standards; Pharmacogenomic Testing/trends; Adult; Checklist; Consensus; Delphi Technique; Female; Genetic Association Studies; Goals; Humans; Male; Middle Aged; Pharmacogenetics/standards; Politics; Publishing/standards; Research Design/standards; Stakeholder Participation; Surveys and Questionnaires; United Kingdom
19.
Cochrane Database Syst Rev ; 6: CD013459, 2020 06 26.
Article in English | MEDLINE | ID: mdl-32597510

ABSTRACT

BACKGROUND: Plague is a severe disease associated with high mortality. Late diagnosis leads to advanced stages of the disease, with worse outcomes and a higher risk of spread. A rapid diagnostic test (RDT) could help establish a prompt diagnosis of plague, improving patient care and supporting an appropriate public health response. OBJECTIVES: To determine the diagnostic accuracy of the RDT based on the antigen F1 (F1RDT) for detecting plague in people with suspected disease. SEARCH METHODS: We searched CENTRAL, Embase, Science Citation Index, Google Scholar, the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov up to 15 May 2019, and PubMed (MEDLINE) up to 27 August 2019, regardless of language, publication status, or publication date. We handsearched the reference lists of relevant papers and contacted researchers working in the field. SELECTION CRITERIA: We included cross-sectional studies that assessed the accuracy of the F1RDT for diagnosing plague, where participants were tested with both the F1RDT and at least one reference standard. The reference standards were bacterial isolation by culture, polymerase chain reaction (PCR), and paired serology (defined as a four-fold difference in F1 antibody titres between acute- and convalescent-phase samples). DATA COLLECTION AND ANALYSIS: Two review authors independently selected studies and extracted data. We appraised the methodological quality and applicability of each selected study using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When meta-analysis was appropriate, we used the bivariate model to obtain pooled estimates of sensitivity and specificity. We stratified all analyses by the reference standard used and presented disaggregated data for forms of plague. We assessed the certainty of the evidence using GRADE. MAIN RESULTS: We included eight manuscripts reporting seven studies.
Studies were conducted in three countries in Africa among adults and children with any form of plague. All studies except one assessed the F1RDT produced at the Institut Pasteur of Madagascar (F1RDT-IPM); one study assessed an F1RDT produced by New Horizons (F1RDT-NH), utilized by the US Centers for Disease Control and Prevention. We could not pool the findings from the F1RDT-NH in meta-analyses due to a lack of raw data and a positivity threshold different from that of the F1RDT-IPM. Risk of bias was high for participant selection (retrospective studies, recruitment of participants not consecutive or random, unclear exclusion criteria), low or unclear for the index test (blinding of F1RDT interpretation unknown), low for reference standards, and high or unclear for flow and timing (sample transportation times longer than seven days can lead to decreased viability of the pathogen and overgrowth of contaminating bacteria, with subsequent false-negative results and misclassification of the target condition). F1RDT for diagnosing all forms of plague: F1RDT-IPM pooled sensitivity against culture was 100% (95% confidence interval (CI) 82 to 100; 4 studies, 1692 participants; very low-certainty evidence) and pooled specificity was 70.3% (95% CI 65 to 75; 4 studies, 2004 participants; very low-certainty evidence). The performance of F1RDT-IPM against PCR was calculated from a single study in participants with bubonic plague (see below). There were limited data on the performance of F1RDT against paired serology. F1RDT for diagnosing pneumonic plague: Performed in sputum, F1RDT-IPM pooled sensitivity against culture was 100% (95% CI 0 to 100; 2 studies, 56 participants; very low-certainty evidence) and pooled specificity was 71% (95% CI 59 to 80; 2 studies, 297 participants; very low-certainty evidence). There were limited data on the performance of F1RDT against PCR or against paired serology for diagnosing pneumonic plague.
F1RDT for diagnosing bubonic plague: Performed in bubo aspirate, F1RDT-IPM pooled sensitivity against culture was 100% (95% CI not calculable; 2 studies, 1454 participants; low-certainty evidence) and pooled specificity was 67% (95% CI 65 to 70; 2 studies, 1198 participants; very low-certainty evidence). Performed in bubo aspirate, F1RDT-IPM pooled sensitivity against PCR for the caf1 gene was 95% (95% CI 89 to 99; 1 study, 88 participants; very low-certainty evidence) and pooled specificity was 93% (95% CI 84 to 98; 1 study, 61 participants; very low-certainty evidence). No studies provided data on both the F1RDT and paired serology for diagnosing bubonic plague. AUTHORS' CONCLUSIONS: Against culture, the F1RDT appeared highly sensitive for diagnosing either pneumonic or bubonic plague, and can help detect plague in remote areas to support patient management and enable a public health response. False-positive results mean culture or PCR confirmation may be needed. The F1RDT does not replace culture, which provides additional information on antibiotic resistance and bacterial strains.
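The pooled sensitivities and specificities above start from per-study 2x2 tables of test results against a reference standard. A minimal sketch of that single-study step, computing sensitivity and specificity with Wilson score confidence intervals, is below; the counts are hypothetical, and the review's actual pooling used a bivariate model, which this sketch does not reproduce.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score 95% CI for a proportion x/n."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson CI, from a 2x2 table."""
    sens = tp / (tp + fn)   # proportion of true cases the test detects
    spec = tn / (tn + fp)   # proportion of non-cases the test clears
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Hypothetical counts: 95 true positives, 5 false negatives,
# 700 true negatives, 300 false positives
(sens, sens_ci), (spec, spec_ci) = diagnostic_accuracy(95, 5, 700, 300)
```

The Wilson interval behaves sensibly near 0% and 100%, which matters here: a 100% pooled sensitivity estimate with a wide CI (such as 0 to 100 above) signals very few positive cases, not certainty.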


Subject(s)
Antigens, Bacterial/analysis; Plague/diagnosis; Yersinia pestis/immunology; Adult; Child; Confidence Intervals; Cross-Sectional Studies; False Negative Reactions; False Positive Reactions; Humans; Plague/immunology; Sensitivity and Specificity; Time Factors
20.
Pharmacoecon Open ; 4(4): 563-574, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32207075

ABSTRACT

As part of the single technology appraisal process, the National Institute for Health and Care Excellence invited Takeda UK Ltd to submit clinical- and cost-effectiveness evidence for brentuximab vedotin (BV) for treating relapsed or refractory CD30-positive (CD30+) cutaneous T-cell lymphoma (CTCL). The Liverpool Reviews and Implementation Group at the University of Liverpool was commissioned to act as the evidence review group (ERG). This article summarises the ERG's review of the company's submission for BV and the appraisal committee (AC) decision. The principal clinical evidence was derived from a subgroup of patients with advanced-stage CD30+ mycosis fungoides (MF) or primary cutaneous anaplastic large-cell lymphoma (pcALCL) in the phase III ALCANZA randomised controlled trial (RCT). This trial compared BV versus physician's choice (PC) of methotrexate or bexarotene. Evidence from three observational studies was also presented, which included patients with other CTCL subtypes. The ERG's main concerns with the clinical evidence were the lack of RCT evidence for CTCL subtypes other than MF or pcALCL, lack of robust overall survival data (data were immature and confounded by subsequent treatment and treatment crossover on disease progression) and lack of conclusive results from analyses of health-related quality-of-life data. The ERG noted that many areas of uncertainty in the cost-effectiveness analysis were related to the clinical data, arising from the rarity of the condition and its subtypes and the complexity of the treatment pathway. The ERG highlighted that the inclusion of allogeneic stem-cell transplant (alloSCT) as an option in the treatment pathway was based on weak evidence and generated more uncertainty in a disease area that, because of its rarity and diversity, was already highly uncertain. The ERG also lacked confidence in the company's modelling of the post-progression pathway and was concerned that it may not produce reliable results. 
Results from the company's base-case comparison (including a simple discount patient access scheme [PAS] for BV) showed that treatment with BV dominated PC. The ERG's revisions and scenario analyses highlighted the high level of uncertainty around the company base-case cost-effectiveness results, ranging from BV dominating PC to an incremental cost-effectiveness ratio per quality-adjusted life-year gained of £494,981. The AC concluded that it was appropriate to include alloSCT in the treatment pathway even though data were limited. The AC recommended BV as an option for treating CD30+ CTCL after at least one systemic therapy in adults if they have MF, stage IIB or higher pcALCL or Sézary syndrome and if the company provides BV according to the commercial arrangement (i.e. simple discount PAS).
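The terms "dominated" and "incremental cost-effectiveness ratio per quality-adjusted life-year gained" above follow standard cost-effectiveness arithmetic. A minimal sketch with hypothetical figures (not the appraisal's confidential data):

```python
def icer(cost_new, qaly_new, cost_cmp, qaly_cmp):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.

    Returns None when the new treatment is dominant (cheaper and at least
    as effective) or dominated (costlier and no more effective), since a
    single ratio is not meaningful in either case.
    """
    d_cost = cost_new - cost_cmp
    d_qaly = qaly_new - qaly_cmp
    if d_cost <= 0 and d_qaly >= 0:
        return None  # new treatment dominates the comparator
    if d_cost >= 0 and d_qaly <= 0:
        return None  # new treatment is dominated
    return d_cost / d_qaly

# Hypothetical: new treatment costs 40,000 more and adds 0.5 QALYs
ratio = icer(60_000, 1.5, 20_000, 1.0)       # 80,000 per QALY gained
dominant = icer(20_000, 1.5, 60_000, 1.0)    # cheaper and better: None
```

This is why the scenario range quoted above can run from "BV dominates PC" (no ratio reported) to a very large ICER: small shifts in incremental QALYs near zero swing the ratio enormously.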
