ABSTRACT
BACKGROUND: For the results of clinical trials to have external validity, the patients included in the study must be representative of the population presenting in general clinical settings. A scoping literature review was performed to evaluate how the eligibility criteria used in anti-malarial efficacy and safety trials translate into patient selection. METHODS: A search of the WorldWide Antimalarial Resistance Network (WWARN) Clinical Trials Publication Library, MEDLINE, The Cochrane Library, and clinicaltrials.gov was conducted to identify trials investigating anti-malarial efficacy and safety, published between 14th April 2001 and 31st December 2017. An updated search using the WWARN Clinical Trial Publication Library was undertaken to identify eligible publications from 1st January 2018 to 31st July 2021. The review included studies in patients of any age with uncomplicated malaria and any pharmaceutical therapeutic intervention administered. The proportion of trials with malaria-positive patients excluded was calculated and linked to the reported reason for exclusion. A subgroup analysis of eligibility criteria and trial baseline demographics was conducted to assess whether the criteria were complied with when recruiting patients. RESULTS: Of the 847 studies identified, 176 (21%) trials were included in the final synthesis, screening a total of 157,516 malaria-positive patients, of whom 56,293 (36%) were enrolled and treated. Across the 176 studies included, 84 different inclusion and exclusion criteria were identified. The reason for exclusion of patients who tested positive for malaria was reported in 144 (82%) studies. Three criteria accounted for about 70% of malaria-positive patients excluded: mixed-species malaria infections or other specific Plasmodium species, parasite counts outside the set study ranges, and refusal of consent. CONCLUSIONS: Nearly two-thirds of the malaria-positive subjects who present to health facilities are systematically excluded from anti-malarial treatment trials. Reasons for exclusions are largely under-reported. Anti-malarial treatment in the general population is informed by studies on a narrow selection of patients who do not fully represent the totality of those seeking anti-malarial treatment in routine practice. While entry criteria ensure consistency across trials, pragmatic trials are also necessary to supplement the information currently available and improve the external validity of the findings of malaria clinical trials.
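As a rough illustration of the screening arithmetic summarised above, the sketch below tallies screened versus enrolled malaria-positive patients across trial records and ranks the reported reasons for exclusion. The counts and field names are invented for illustration and are not drawn from the WWARN library.

```python
# Minimal sketch (invented counts, hypothetical field names): aggregate screening
# outcomes across trials and rank the reasons malaria-positive patients were excluded.
from collections import Counter

trials = [
    {"screened": 1200, "enrolled": 450,
     "exclusions": {"mixed/other Plasmodium species": 300,
                    "parasitaemia outside study range": 250,
                    "consent refused": 120, "reason not reported": 80}},
    {"screened": 800, "enrolled": 310,
     "exclusions": {"parasitaemia outside study range": 290,
                    "consent refused": 110, "reason not reported": 90}},
]

screened = sum(t["screened"] for t in trials)
enrolled = sum(t["enrolled"] for t in trials)
print(f"Enrolled {enrolled}/{screened} malaria-positive patients "
      f"({100 * enrolled / screened:.0f}%)")

# Rank exclusion reasons by their share of all excluded patients
reasons = Counter()
for t in trials:
    reasons.update(t["exclusions"])
total_excluded = sum(reasons.values())
for reason, n in reasons.most_common():
    print(f"{reason}: {100 * n / total_excluded:.0f}% of exclusions")
```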
Subject(s)
Antimalarials , Artemisinins , Folic Acid Antagonists , Malaria, Falciparum , Malaria , Plasmodium , Humans , Antimalarials/therapeutic use , Malaria, Falciparum/parasitology , Artemisinins/therapeutic use , Malaria/drug therapy
ABSTRACT
BACKGROUND: The impact of the COVID-19 pandemic on paediatric populations varied between high-income countries (HICs) and low-income to middle-income countries (LMICs). We sought to investigate differences in paediatric clinical outcomes and identify factors contributing to the disparity between countries. METHODS: The International Severe Acute Respiratory and Emerging Infections Consortium (ISARIC) COVID-19 database was queried to include children under 19 years of age admitted to hospital from January 2020 to April 2021 with suspected or confirmed COVID-19 diagnosis. Univariate and multivariable analyses of factors contributing to mortality were performed by country group (HICs vs LMICs) as defined by the World Bank criteria. RESULTS: A total of 12 860 children (3819 from 21 HICs and 9041 from 15 LMICs) participated in this study. Of these, 8961 were laboratory-confirmed and 3899 suspected COVID-19 cases. About 52% of LMIC children were black, and more than 40% were infants and adolescents. The overall in-hospital mortality rate (95% CI) was 3.3% (3.0% to 3.6%), higher in LMICs than HICs (4.0% (3.6% to 4.4%) and 1.7% (1.3% to 2.1%), respectively). There were significant differences between country income groups in intervention profile, with higher use of antibiotics, antivirals, corticosteroids, prone positioning, high flow nasal cannula, non-invasive and invasive mechanical ventilation in HICs. Of the 439 mechanically ventilated children, mortality occurred in 106 (24.1%) subjects, which was higher in LMICs than HICs (89 (43.6%) vs 17 (7.2%), respectively). Pre-existing infectious comorbidities (tuberculosis and HIV) and some complications (bacterial pneumonia, acute respiratory distress syndrome and myocarditis) were significantly higher in LMICs compared with HICs. On multivariable analysis, LMIC as country income group was associated with increased risk of mortality (adjusted HR 4.73 (3.16 to 7.10)). CONCLUSION: Mortality and morbidities were higher in LMICs than HICs, which may be attributable to differences in patient demographics, complications and access to supportive and treatment modalities.
Subject(s)
COVID-19 , Tuberculosis , Adolescent , Humans , Child , COVID-19 Testing , Pandemics , COVID-19/epidemiology , COVID-19/therapy , Health Resources
ABSTRACT
Background: Whilst timely clinical characterisation of infections caused by novel SARS-CoV-2 variants is necessary for evidence-based policy response, individual-level data on infecting variants are typically only available for a minority of patients and settings. Methods: Here, we propose an innovative approach to study changes in COVID-19 hospital presentation and outcomes after the Omicron variant emergence using publicly available population-level data on variant relative frequency to infer SARS-CoV-2 variants likely responsible for clinical cases. We apply this method to data collected by a large international clinical consortium before and after the emergence of the Omicron variant in different countries. Results: Our analysis, that includes more than 100,000 patients from 28 countries, suggests that in many settings patients hospitalised with Omicron variant infection less often presented with commonly reported symptoms compared to patients infected with pre-Omicron variants. Patients with COVID-19 admitted to hospital after Omicron variant emergence had lower mortality compared to patients admitted during the period when Omicron variant was responsible for only a minority of infections (odds ratio in a mixed-effects logistic regression adjusted for likely confounders, 0.67 [95% confidence interval 0.61-0.75]). Qualitatively similar findings were observed in sensitivity analyses with different assumptions on population-level Omicron variant relative frequencies, and in analyses using available individual-level data on infecting variant for a subset of the study population. Conclusions: Although clinical studies with matching viral genomic information should remain a priority, our approach combining publicly available data on variant frequency and a multi-country clinical characterisation dataset with more than 100,000 records allowed analysis of data from a wide range of settings and novel insights on real-world heterogeneity of COVID-19 presentation and clinical outcome. Funding: Bronner P. Gonçalves, Peter Horby, Gail Carson, Piero L. Olliaro, Valeria Balan, Barbara Wanjiru Citarella, and research costs were supported by the UK Foreign, Commonwealth and Development Office (FCDO) and Wellcome [215091/Z/18/Z, 222410/Z/21/Z, 225288/Z/22/Z]; and Janice Caoili and Madiha Hashmi were supported by the UK FCDO and Wellcome [222048/Z/20/Z]. Peter Horby, Gail Carson, Piero L. Olliaro, Kalynn Kennon and Joaquin Baruch were supported by the Bill & Melinda Gates Foundation [OPP1209135]; Laura Merson was supported by University of Oxford's COVID-19 Research Response Fund - with thanks to its donors for their philanthropic support. Matthew Hall was supported by a Li Ka Shing Foundation award to Christophe Fraser. Moritz U.G. Kraemer was supported by the Branco Weiss Fellowship, Google.org, the Oxford Martin School, the Rockefeller Foundation, and the European Union Horizon 2020 project MOOD (#874850). The contents of this publication are the sole responsibility of the authors and do not necessarily reflect the views of the European Commission. Contributions from Srinivas Murthy, Asgar Rishu, Rob Fowler, James Joshua Douglas, François Martin Carrier were supported by CIHR Coronavirus Rapid Research Funding Opportunity OV2170359 and coordinated out of Sunnybrook Research Institute. Contributions from Evert-Jan Wils and David S.Y. 
Ong were supported by a grant from foundation Bevordering Onderzoek Franciscus; and Andrea Angheben by the Italian Ministry of Health "Fondi Ricerca corrente-L1P6" to IRCCS Ospedale Sacro Cuore-Don Calabria. The data contributions of J.Kenneth Baillie, Malcolm G. Semple, and Ewen M. Harrison were supported by grants from the National Institute for Health Research (NIHR; award CO-CIN-01), the Medical Research Council (MRC; grant MC_PC_19059), and by the NIHR Health Protection Research Unit (HPRU) in Emerging and Zoonotic Infections at University of Liverpool in partnership with Public Health England (PHE) (award 200907), NIHR HPRU in Respiratory Infections at Imperial College London with PHE (award 200927), Liverpool Experimental Cancer Medicine Centre (grant C18616/A25153), NIHR Biomedical Research Centre at Imperial College London (award IS-BRC-1215-20013), and NIHR Clinical Research Network providing infrastructure support. All funders of the ISARIC Clinical Characterisation Group are listed in the appendix.
Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , COVID-19/virology , Humans , SARS-CoV-2/genetics
ABSTRACT
BACKGROUND: Up to 30% of hospitalised patients with COVID-19 require advanced respiratory support, including high-flow nasal cannulas (HFNC), non-invasive mechanical ventilation (NIV), or invasive mechanical ventilation (IMV). We aimed to describe the clinical characteristics, outcomes and risk factors for failing non-invasive respiratory support in patients with severe COVID-19 treated during the first two years of the pandemic in high-income countries (HICs) and low- and middle-income countries (LMICs). METHODS: This is a multinational, multicentre, prospective cohort study embedded in the ISARIC-WHO COVID-19 Clinical Characterisation Protocol. Patients with laboratory-confirmed SARS-CoV-2 infection who required hospital admission were recruited prospectively. Patients treated with HFNC, NIV, or IMV within the first 24 h of hospital admission were included in this study. Descriptive statistics, random forest, and logistic regression analyses were used to describe clinical characteristics and compare clinical outcomes among patients treated with the different types of advanced respiratory support. RESULTS: A total of 66,565 patients were included in this study. Overall, 82.6% of patients were treated in HICs, and 40.6% were admitted to the hospital during the first pandemic wave. During the first 24 h after hospital admission, patients in HICs were more frequently treated with HFNC (48.0%), followed by NIV (38.6%) and IMV (13.4%). In contrast, patients admitted in LMICs were less frequently treated with HFNC (16.1%) and the majority received IMV (59.1%). The failure rate of non-invasive respiratory support (i.e. HFNC or NIV) was 15.5%, of which 71.2% were from HICs and 28.8% from LMICs. The variables most strongly associated with non-invasive ventilation failure, defined as progression to IMV, were high leukocyte counts at hospital admission (OR [95%CI]; 5.86 [4.83-7.10]), treatment in an LMIC (OR [95%CI]; 2.04 [1.97-2.11]), and tachypnoea at hospital admission (OR [95%CI]; 1.16 [1.14-1.18]). Patients who failed HFNC/NIV had a higher 28-day fatality ratio (OR [95%CI]; 1.27 [1.25-1.30]). CONCLUSIONS: In the present international cohort, the most frequently used advanced respiratory support was HFNC. However, IMV was used more often in LMICs. Higher leukocyte count, tachypnoea, and treatment in an LMIC were risk factors for HFNC/NIV failure. HFNC/NIV failure was related to worse clinical outcomes, such as 28-day mortality. Trial registration: This is a prospective observational study; therefore, no health care interventions were applied to participants, and trial registration is not applicable.
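The abstract above reports odds ratios from logistic regression for HFNC/NIV failure. The sketch below fits an analogous model on synthetic data; the variables, coefficients, and data are illustrative only and do not reproduce the study's analysis.

```python
# Illustrative sketch (synthetic data, simplified variable set): logistic regression of
# non-invasive support failure (progression to IMV) on admission leukocyte count,
# country income group, and respiratory rate. Not the study's actual analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "lmic": rng.integers(0, 2, n),                   # 1 = treated in an LMIC
    "leukocytes": rng.normal(9, 3, n).clip(1, 40),   # x10^9/L at admission
    "resp_rate": rng.normal(24, 5, n).clip(10, 60),  # breaths/min at admission
})
# Synthetic outcome: failure of HFNC/NIV, i.e. progression to IMV
lp = -6 + 0.7 * df["lmic"] + 0.15 * df["leukocytes"] + 0.1 * df["resp_rate"]
df["niv_failure"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("niv_failure ~ lmic + leukocytes + resp_rate", data=df).fit(disp=False)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```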
Subject(s)
COVID-19 , Respiratory Insufficiency , COVID-19/therapy , Humans , Prospective Studies , Respiratory Insufficiency/therapy , SARS-CoV-2 , Tachypnea
ABSTRACT
BACKGROUND: Periodic administration of anthelmintic drugs is a cost-effective intervention for morbidity control of soil-transmitted helminth (STH) infections. However, as programs expand, drug pressure increases and may select for drug-resistant parasites. While monitoring anthelmintic drug efficacy is crucial to inform country control program strategies, the various factors that influence drug efficacy must be taken into consideration, and they make it difficult to standardize treatment outcome measures. We aimed to identify suitable approaches to assess and compare the efficacy of different anthelmintic treatments. METHODOLOGY: We built an individual participant-level database from 11 randomized controlled trials and two observational studies in which subjects received single-agent or combination therapy, or placebo. Eggs per gram of stool were calculated from egg counts at baseline and post-treatment. Egg reduction rates (ERR; based on mean group egg counts) and individual-patient ERR (iERR) were used to express drug efficacy and analyzed after log-transformation with a linear mixed effect model. The analyses were separated by follow-up duration (14-21 and 22-45 days) after drug administration. PRINCIPAL FINDINGS: The 13 studies enrolled 5,759 STH stool-positive individuals; 5,688 received active medication or placebo, contributing a total of 11,103 STH infections (65% had two or three concurrent infections), of whom 3,904 (8,503 infections) and 1,784 (2,550 infections) had efficacy assessed at 14-21 days and 22-45 days post-treatment, respectively. Neither the number of helminth co-infections nor the duration of follow-up affected ERR for any helminth species. The number of participants treated with single-dose albendazole was 689 (18%), with single-dose mebendazole 658 (17%), and with albendazole-based co-administrations 775 (23%). The overall mean ERR assessed by day 14-21 for albendazole and mebendazole was 94.5% and 87.4%, respectively, for Ascaris lumbricoides, 86.8% and 40.8% for hookworm, and 44.9% and 23.8% for Trichuris trichiura. The World Health Organization (WHO) recommended criteria for efficacy were met in 50%, 62%, and 33% of albendazole studies for A. lumbricoides, T. trichiura, and hookworm, respectively, and in 25% of mebendazole studies. iERR analyses showed similar results, with cure achieved in 92% of A. lumbricoides-infected subjects treated with albendazole and 93% with mebendazole; corresponding figures for hookworm were 70% and 17%, and for T. trichiura 22% and 20%. CONCLUSIONS/SIGNIFICANCE: Combining the traditional efficacy assessment using group averages with individual responses provides a more complete picture of how anthelmintic treatments perform. Most treatments analyzed fail to meet the WHO minimal criteria for efficacy based on group means. Drug combinations (i.e., albendazole-ivermectin and albendazole-oxantel pamoate) are promising treatments for STH infections.
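The two efficacy metrics named above can be written down directly: the group-level egg reduction rate compares arithmetic mean egg counts before and after treatment, while the individual-patient ERR is computed per subject. The sketch below shows only these formulas on invented counts; it does not reproduce the log-transformed linear mixed-effects analysis used in the study.

```python
# Hedged sketch of the efficacy metrics named in the abstract (invented egg counts):
# group-level egg reduction rate (ERR, from arithmetic means) and individual-patient
# ERR (iERR). The study's mixed-effects modelling is not reproduced here.
import pandas as pd

df = pd.DataFrame({
    "epg_baseline": [2400, 960, 4800, 120, 720],  # eggs per gram before treatment
    "epg_followup": [0, 48, 240, 0, 24],          # eggs per gram at day 14-21
})

# Group-level ERR based on mean egg counts
err_group = 100 * (1 - df["epg_followup"].mean() / df["epg_baseline"].mean())

# Individual-patient ERR, floored at 0 for patients whose counts increased
df["iERR"] = (100 * (1 - df["epg_followup"] / df["epg_baseline"])).clip(lower=0)

print(f"Group ERR: {err_group:.1f}%")
print(f"Mean iERR: {df['iERR'].mean():.1f}%")
print(f"Cure rate (follow-up count of 0): {(df['epg_followup'] == 0).mean():.0%}")
```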
Subject(s)
Anthelmintics , Helminthiasis , Helminths , Hookworm Infections , Trichuriasis , Albendazole/therapeutic use , Ancylostomatoidea , Animals , Anthelmintics/therapeutic use , Helminthiasis/drug therapy , Hookworm Infections/drug therapy , Humans , Mebendazole/therapeutic use , Soil/parasitology , Trichuriasis/drug therapy , Trichuris
ABSTRACT
The International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC) COVID-19 dataset is one of the largest international databases of prospectively collected clinical data on people hospitalized with COVID-19. This dataset was compiled during the COVID-19 pandemic by a network of hospitals that collect data using the ISARIC-World Health Organization Clinical Characterization Protocol and data tools. The database includes data from more than 705,000 patients, collected in more than 60 countries and 1,500 centres worldwide. Patient data are available from acute hospital admissions with COVID-19 and outpatient follow-ups. The data include signs and symptoms, pre-existing comorbidities, vital signs, chronic and acute treatments, complications, dates of hospitalization and discharge, mortality, viral strains, vaccination status, and other data. Here, we present the dataset characteristics, explain its architecture and how to gain access, and provide tools to facilitate its use.
Subject(s)
COVID-19 , Hospitalization , Humans , Pandemics , Prospective Studies , SARS-CoV-2
ABSTRACT
Ribavirin is currently the standard of care for treating Lassa fever. However, the human clinical trial data supporting its use suffer from several serious flaws that render the results and conclusions unreliable. We performed a systematic review of available pre-clinical data and human pharmacokinetic data on ribavirin in Lassa fever. In in-vitro studies, the EC50 of ribavirin ranged from 0.6 µg/ml to 21.72 µg/ml and the EC90 ranged from 1.5 µg/ml to 29 µg/ml. The mean EC50 was 7 µg/ml and the mean EC90 was 15 µg/ml. Human PK data in patients with Lassa fever were sparse and did not allow for estimation of concentration profiles or pharmacokinetic parameters. Pharmacokinetic modelling based on healthy human data suggests that the concentration profiles of current ribavirin regimens only exceed the mean EC50 for less than 20% of the time and the mean EC90 for less than 10% of the time, raising the possibility that the current ribavirin regimens in clinical use are unlikely to reliably achieve serum concentrations required to inhibit Lassa virus replication. The results of this review highlight serious issues with the evidence, which, by today's standards, would be unlikely to support the transition of ribavirin from pre-clinical studies to human clinical trials. Additional pre-clinical studies are needed before embarking on expensive and challenging clinical trials of ribavirin in Lassa fever.
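The percent-of-time-above-threshold comparison quoted above can be illustrated with a toy calculation. The concentration profile and parameters below are purely illustrative (they are not the review's PK model or real ribavirin pharmacokinetic values); the sketch only shows how the fraction of a dosing interval spent above the mean EC50 and EC90 is computed.

```python
# Toy calculation (illustrative peak, half-life, and dosing interval; not fitted
# ribavirin parameters): fraction of a dosing interval with concentrations above
# the mean in-vitro EC50 (7 ug/ml) and EC90 (15 ug/ml) quoted in the review.
import numpy as np

ec50, ec90 = 7.0, 15.0    # ug/ml, mean in-vitro values from the review
c_peak = 20.0             # ug/ml, illustrative post-dose peak concentration
half_life_h = 2.0         # illustrative effective decline half-life
tau_h = 6.0               # illustrative dosing interval

k = np.log(2) / half_life_h
t = np.linspace(0, tau_h, 1000)
conc = c_peak * np.exp(-k * t)    # simple mono-exponential decline

for label, threshold in [("EC50", ec50), ("EC90", ec90)]:
    frac = float(np.mean(conc > threshold))
    print(f"Time above {label}: {100 * frac:.0f}% of the dosing interval")
```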
Subject(s)
Lassa Fever , Ribavirin , Antiviral Agents/pharmacology , Humans , Lassa Fever/drug therapy , Lassa virus , Research Design , Virus Replication
ABSTRACT
BACKGROUND: Research is urgently needed to reduce the morbidity and mortality of Lassa fever (LF), including clinical trials to test new therapies and to verify the efficacy and safety of the only current treatment recommendation, ribavirin, which has a weak clinical evidence base. To help establish a basis for the development of an adaptable, standardised clinical trial methodology, we conducted a systematic review to identify the clinical characteristics and outcomes of LF and describe how LF has historically been defined and assessed in the scientific literature. METHODOLOGY: Primary clinical studies and reports of patients with suspected and confirmed diagnosis of LF published in the peer-reviewed literature before 15 April 2021 were included. Publications were selected following a two-stage screening of abstracts, then full-texts, by two independent reviewers at each stage. Data were extracted, verified, and summarised using descriptive statistics. RESULTS: 147 publications were included, primarily case reports (36%), case series (28%), and cohort studies (20%); only 2 quasi-randomised studies (1%) were found. Data are mostly from Nigeria (52% of individuals, 41% of publications) and Sierra Leone (42% of individuals, 31% of publications). The results corroborate the World Health Organisation characterisation of LF presentation. However, a broader spectrum of presenting symptoms is evident, such as gastrointestinal illness and other nervous system and musculoskeletal disorders that are not commonly included as indicators of LF. The overall case fatality ratio was 30% in laboratory-confirmed cases (1896/6373 reported in 109 publications). CONCLUSION: Systematic review is an important tool in the clinical characterisation of diseases with limited publications. The results herein provide a more complete understanding of the spectrum of disease which is relevant to clinical trial design. This review demonstrates the need for coordination across the LF research community to generate harmonised research methods that can contribute to building a strong evidence base for new treatments and foster confidence in their integration into clinical care.
Subject(s)
Clinical Trials as Topic , Lassa Fever/pathology , Research Design , Humans , Lassa virus
ABSTRACT
BACKGROUND: Among the many collateral effects of the COVID-19 pandemic is the disruption of health services and vital clinical research. COVID-19 has magnified the challenges faced in research and threatens to slow research for urgently needed therapeutics for Neglected Tropical Diseases (NTDs) and diseases affecting the most vulnerable populations. Here we explore the impact of the pandemic on a clinical trial for plague therapeutics and strategies that have been considered to ensure research efforts continue. METHODS: To understand the impact of the COVID-19 pandemic on the trial accrual rate, we documented changes in patterns of all-cause consultations that took place before and during the pandemic at health centres in two districts of the Amoron'i Mania region of Madagascar where the trial is underway. We also considered trends in plague reporting and other external factors that may have contributed to slow recruitment. RESULTS: During the pandemic, we found a 27% decrease in consultations at the referral hospital, compared to an 11% increase at peripheral health centres, as well as an overall drop during the months of lockdown. We also found a nationwide trend towards a reduced number of reported plague cases. DISCUSSION: COVID-19 outbreaks are unlikely to dissipate in the near future. Declining NTD case numbers recorded during the pandemic period should not be viewed in isolation or taken as a marker of things to come. It is vitally important that researchers are prepared for a rebound in cases and, most importantly, that research continues, to avoid NTDs becoming even more neglected.
Subject(s)
COVID-19 , Health Impact Assessment , Neglected Diseases/drug therapy , Plague/drug therapy , Randomized Controlled Trials as Topic , Research , Tropical Medicine/trends , Disease Notification , Epidemiological Monitoring , Humans , Madagascar/epidemiology , Pandemics , Patient Acceptance of Health Care , Patient Selection , Plague/epidemiology , Referral and Consultation/trends
ABSTRACT
PURPOSE: To prospectively validate two risk scores to predict mortality (4C Mortality) and in-hospital deterioration (4C Deterioration) among adults hospitalised with COVID-19. METHODS: Prospective observational cohort study of adults (age ≥18 years) with confirmed or highly suspected COVID-19 recruited into the International Severe Acute Respiratory and Emerging Infections Consortium (ISARIC) WHO Clinical Characterisation Protocol UK (CCP-UK) study in 306 hospitals across England, Scotland and Wales. Patients were recruited between 27 August 2020 and 17 February 2021, with at least 4 weeks' follow-up before final data extraction. The main outcome measures were discrimination and calibration of models for in-hospital deterioration (defined as any requirement of ventilatory support or critical care, or death) and mortality, incorporating predefined subgroups. RESULTS: 76 588 participants were included, of whom 27 352 (37.4%) deteriorated and 12 581 (17.4%) died. Both the 4C Mortality score (pooled C-statistic 0.78 (95% CI 0.77 to 0.78)) and the 4C Deterioration score (0.76 (0.75 to 0.77)) demonstrated consistent discrimination across all nine National Health Service regions, with similar performance metrics to the original validation cohorts. Calibration remained stable (4C Mortality: pooled slope 1.09, pooled calibration-in-the-large 0.12; 4C Deterioration: 1.00, -0.04), with no need for temporal recalibration during the second UK pandemic wave of hospital admissions. CONCLUSION: Both 4C risk stratification models demonstrated consistent performance in predicting clinical deterioration and mortality in a large prospective second-wave validation cohort of UK patients. Despite recent advances in the treatment and management of adults hospitalised with COVID-19, both scores can continue to inform clinical decision making. TRIAL REGISTRATION NUMBER: ISRCTN66726260.
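Discrimination and calibration as reported above correspond to the C-statistic, the calibration slope, and calibration-in-the-large. The sketch below computes these metrics for a synthetic linear predictor; it is illustrative only and uses none of the 4C model coefficients or CCP-UK data.

```python
# Schematic validation sketch (synthetic data): C-statistic (discrimination), and
# calibration slope / calibration-in-the-large for a stand-in linear predictor.
# None of the 4C model coefficients or study data are used here.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(0, 1, n)
lp = -2.0 + 1.0 * x                                   # stand-in linear predictor (log-odds)
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))  # synthetic deterioration/death

# Discrimination: C-statistic (area under the ROC curve)
print(f"C-statistic: {roc_auc_score(outcome, lp):.2f}")

# Calibration slope: logistic regression of the outcome on the linear predictor
slope_fit = sm.GLM(outcome, sm.add_constant(lp), family=sm.families.Binomial()).fit()
# Calibration-in-the-large: intercept-only model with the linear predictor as offset
citl_fit = sm.GLM(outcome, np.ones(n), family=sm.families.Binomial(), offset=lp).fit()
print(f"Calibration slope: {slope_fit.params[1]:.2f}, "
      f"calibration-in-the-large: {citl_fit.params[0]:.2f}")
```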
Subject(s)
COVID-19 , Adolescent , Adult , COVID-19/therapy , Hospital Mortality , Humans , Observational Studies as Topic , Prognosis , SARS-CoV-2 , State Medicine , World Health Organization
ABSTRACT
Background: There is potentially considerable variation in the nature and duration of the care provided to hospitalised patients during an infectious disease epidemic or pandemic. Improvements in care and clinician confidence may shorten the time spent as an inpatient or reduce the need for admission to an intensive care unit (ICU) or high dependency unit (HDU). On the other hand, limited resources at times of high demand may lead to rationing. Nevertheless, these variables may be used as static proxies for disease severity, as outcome measures for trials, and to inform planning and logistics. Methods: We investigate these time trends in an extremely large international cohort of 142,540 patients hospitalised with COVID-19. The variables investigated are: time from symptom onset to hospital admission, probability of ICU/HDU admission, time from hospital admission to ICU/HDU admission, hospital case fatality ratio (hCFR) and total length of hospital stay. Results: Time from onset to admission showed a rapid decline during the first months of the pandemic, followed by peaks during August/September and December 2020. ICU/HDU admission was more frequent from June to August. The hCFR was lowest from June to August. Raw numbers for overall hospital stay showed little variation, but there was a clear decline in time to discharge for ICU/HDU survivors. Conclusions: Our results establish that variables of these kinds have limitations when used as outcome measures in a rapidly evolving situation. Funding: This work was supported by the UK Foreign, Commonwealth and Development Office and Wellcome [215091/Z/18/Z] and the Bill & Melinda Gates Foundation [OPP1209135]. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Subject(s)
Hospitalization/statistics & numerical data , Outcome Assessment, Health Care/statistics & numerical data , SARS-CoV-2/pathogenicity , Adolescent , Adult , Aged , Aged, 80 and over , COVID-19/therapy , Child , Child, Preschool , Female , Humans , Infant , Intensive Care Units/statistics & numerical data , Length of Stay/statistics & numerical data , Male , Middle Aged , Retrospective Studies , Young Adult
ABSTRACT
In this pilot comparative study, we investigated and compared the effects of existing vector control tools on sandfly densities and mortality to inform and support the National Kala-azar Elimination Program (NKEP). The interventions included insecticidal wall painting (IWP), reduced-coverage insecticidal durable wall lining (DWL), insecticide-impregnated bednets (ITN), and indoor residual spraying with deltamethrin (IRS). The study area was Sakhua union (seven villages), the most highly endemic visceral leishmaniasis union in Trishal upazila, Bangladesh. Each intervention cluster included approximately 50 households. Study methods included random selection of clusters, collection of sandflies by CDC light traps and manual aspirators to determine sandfly density, and assessment of sandfly mortality by the WHO cone bioassay test. Trained field research assistants interviewed household heads using structured questionnaires on sociodemographic information, as well as safety and acceptability of the interventions. Descriptive and analytical statistical methods were used to measure the interventions' effects, and their duration, on sandfly density reduction and mortality. We measured the relative efficacy of IWP for sandfly control against DWL, ITN, and IRS using a difference-in-difference regression model. We found that the existing interventions were effective and safe for sandfly control, with differing durations of effect and acceptability. The relative efficacy of IWP for sandfly reduction ranged from -59% to -91%, -75% to -81%, and -30% to -104% compared with DWL, ITN, and IRS, respectively, at different time points during the 12-month follow-up. These study results will guide the NKEP in the selection of sandfly control tool(s) in its subsequent consolidation and maintenance phases.
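The difference-in-difference comparison mentioned above estimates the intervention effect as the coefficient on an arm-by-period interaction. The sketch below illustrates this on synthetic household-level sandfly densities; the arm labels, effect sizes, and column names are invented and are not the trial data.

```python
# Simplified difference-in-difference sketch (synthetic counts, hypothetical columns):
# the IWP effect relative to a comparator arm is the coefficient on the
# arm-by-period interaction in a regression of sandfly density.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for arm, post_drop in [("IWP", 0.8), ("IRS", 0.4)]:   # invented reduction fractions
    for period in (0, 1):                              # 0 = baseline, 1 = post-intervention
        for _ in range(50):                            # ~50 households per cluster
            base = rng.poisson(12)
            density = max(base * (1 - post_drop * period) + rng.normal(0, 1), 0)
            rows.append({"arm": arm, "post": period, "density": density})
df = pd.DataFrame(rows)

did = smf.ols("density ~ C(arm, Treatment(reference='IRS')) * post", data=df).fit()
print(did.params.filter(like=":post"))   # difference-in-difference estimate for IWP vs IRS
```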
Subject(s)
Housing , Insect Control/methods , Insect Vectors/parasitology , Insecticides/therapeutic use , Leishmaniasis, Visceral/prevention & control , Nitriles/therapeutic use , Psychodidae/parasitology , Pyrethrins/therapeutic use , Animals , Bangladesh , Construction Materials , Insecticide-Treated Bednets , Phlebotomus/parasitology , Pilot Projects
ABSTRACT
BACKGROUND: Reports on the occurrence and outcome of visceral leishmaniasis (VL) in pregnant women are rare in the published literature. The occurrence of VL in pregnancy is not systematically captured and cases are rarely followed up to detect the consequences of infection and treatment for the pregnant woman and foetus. METHODS: A review of all published literature was undertaken to identify cases of VL infections among pregnant women by searching the following databases: Ovid MEDLINE; Ovid Embase; Cochrane Database of Systematic Reviews; Cochrane Central Register of Controlled Trials; World Health Organization Global Index Medicus: LILACS (Americas); IMSEAR (South-East Asia); IMEMR (Eastern Mediterranean); WPRIM (Western Pacific); ClinicalTrials.gov; and the WHO International Clinical Trials Registry Platform. Selection criteria included any clinical reports describing the disease in pregnancy or vertical transmission of the disease in humans. In addition to articles meeting the pre-specified inclusion criteria, non-primary research articles such as textbook chapters, letters, retrospective case descriptions, or reports of accidental inclusion in trials were also considered. RESULTS: The systematic literature search identified 272 unique articles, of which 54 records were included in this review; a further 18 records were identified from an additional search of the references of the included studies or from personal communication, leading to a total of 72 records (71 case reports/case series; 1 retrospective cohort study; 1926-2020) describing 451 cases of VL in pregnant women. The disease was detected during pregnancy in 398 (88.2%), retrospectively confirmed after giving birth in 52 (11.5%), and the time of identification was not clear in 1 (0.2%). Of the 398 pregnant women whose infection was identified during pregnancy, 346 (86.9%) received treatment, 3 (0.8%) were untreated, and the treatment status was not clear in the remaining 49 (12.3%). Of these 346 pregnant women, liposomal amphotericin B (L-AmB) was administered in 202 (58.4%) and pentavalent antimony (PA) in 93 (26.9%). Outcomes were reported in 176 pregnant women treated with L-AmB, with 4 (2.3%) reports of maternal deaths, 5 (2.8%) miscarriages, and 2 (1.1%) foetal deaths/stillbirths. For PA, outcomes were reported in 88, of whom 4 (4.5%) died, 24 (27.3%) had spontaneous abortions, and 2 (2.3%) had miscarriages. A total of 26 confirmed, probable or suspected cases of vertical transmission were identified, with a median detection time of 6 months (range: 0-18 months). CONCLUSIONS: Outcomes of VL treatment during pregnancy are rarely reported and under-researched. The reported articles were mainly case reports and case series, and the reported information was often incomplete. From the studies identified, it is difficult to derive generalisable information on outcomes for pregnant women and babies, although the reported data favour the use of liposomal amphotericin B for the treatment of VL in pregnant women.
Subject(s)
Leishmaniasis, Visceral/complications , Leishmaniasis, Visceral/drug therapy , Pregnancy Complications, Parasitic/drug therapy , Pregnancy Outcome/epidemiology , Abortion, Spontaneous/epidemiology , Amphotericin B/therapeutic use , Antiprotozoal Agents/therapeutic use , Female , Humans , Infectious Disease Transmission, Vertical/prevention & control , Leishmaniasis, Visceral/mortality , Maternal Death , Pregnancy , Pregnancy Complications, Parasitic/mortality , Treatment Outcome
ABSTRACT
With increasing geographic spread, frequency, and magnitude of outbreaks, dengue continues to pose a major public health threat worldwide. Dengvaxia, a dengue live-attenuated tetravalent vaccine, was licensed in 2015, but post hoc analyses of long-term data showed serostatus-dependent vaccine performance with an excess risk of hospitalized and severe dengue in seronegative vaccine recipients. The World Health Organization (WHO) recommended that only persons with evidence of past dengue infection should receive the vaccine. A test for pre-vaccination screening for dengue serostatus is needed. To develop the target product profile (TPP) for a dengue pre-vaccination screening test, face-to-face consultative meetings were organized, with follow-up regional consultations. A technical working group was formed to develop consensus on a reference test against which candidate pre-vaccination screening tests could be compared. The group also reviewed the current diagnostic landscape and the need to accelerate the evaluation, regulatory approval, and policy development of tests that can identify seropositive individuals and maximize the public health impact of vaccination while avoiding the risk of hospitalization in dengue-naive individuals. Pre-vaccination screening strategies will benefit from rapid diagnostic tests (RDTs) that are affordable, sensitive, and specific and can be used at the point of care (POC). The TPP described the minimum and ideal characteristics of a dengue pre-vaccination screening RDT, with an emphasis on high specificity. The group also made suggestions for accelerating access to these RDTs through streamlining regulatory approval and policy development. The risks and benefits achievable with RDTs meeting the minimal and optimal characteristics in the TPP were defined across a range of seroprevalences. The final choice of RDTs in each country will depend on the performance of the RDT, dengue seroprevalence in the target population, tolerance of risk, and cost-effectiveness.
Subject(s)
Dengue Vaccines/immunology , Dengue/diagnosis , Dengue/prevention & control , Point-of-Care Testing , Serologic Tests/methods , Vaccination , Antibodies, Viral/blood , Antibodies, Viral/immunology , Dengue/immunology , Humans , Mass Screening/methods , Reference Standards , Sensitivity and Specificity , Vaccines, Attenuated
ABSTRACT
Background: Early identification of severe dengue patients is important for patient management and resource allocation. We investigated the association of 10 biomarkers (VCAM-1, SDC-1, Ang-2, IL-8, IP-10, IL-1RA, sCD163, sTREM-1, ferritin, CRP) with the development of severe/moderate dengue (S/MD). Methods: We performed a nested case-control study within a multi-country study. A total of 281 S/MD and 556 uncomplicated dengue cases were included. Results: On days 1-3 from symptom onset, higher levels of any biomarker increased the risk of developing S/MD. When assessed together, SDC-1 and IL-1RA remained stable, while IP-10 changed the association from positive to negative; the others showed weaker associations. The best combinations associated with S/MD comprised IL-1RA, Ang-2, IL-8, ferritin, IP-10, and SDC-1 for children, and SDC-1, IL-8, ferritin, sTREM-1, IL-1RA, IP-10, and sCD163 for adults. Conclusions: Our findings assist the development of biomarker panels for clinical use and could improve triage and risk prediction in dengue patients. Funding: This study was supported by the EU's Seventh Framework Programme (FP7-281803 IDAMS), the WHO, and the Bill and Melinda Gates Foundation.
Subject(s)
Dengue/blood , Dengue/metabolism , Inflammation/metabolism , Adolescent , Adult , Biomarkers/blood , Case-Control Studies , Child , Child, Preschool , Cytokines/blood , Cytokines/metabolism , Dengue/pathology , Female , Humans , Male , Young Adult
ABSTRACT
BACKGROUND: Despite a historical association with poor tolerability, a comprehensive review of the safety of antileishmanial chemotherapies is lacking. We carried out an update of a previous systematic review of all published clinical trials in visceral leishmaniasis (VL) from 1980 to 2019 to document any reported serious adverse events (SAEs). METHODS: For this updated systematic review, we searched the following databases from 1st January 2016 through 2nd May 2019: PUBMED, Embase, Scopus, Web of Science, Cochrane, clinicaltrials.gov, WHO ICTRP, and the Global Index Medicus. We included randomised and non-randomised interventional studies aimed at assessing therapeutic efficacy and extracted the number of SAEs reported within the first 30 days of treatment initiation. Incidence rates of death (IRD) from individual treatment arms were combined in a meta-analysis using random-effects Poisson regression. RESULTS: We identified 157 published studies enrolling 35,376 patients in 347 treatment arms. Pentavalent antimony was administered in 74 (21.3%), multiple-dose liposomal amphotericin B (L-AmB) in 52 (15.0%), amphotericin B deoxycholate in 51 (14.7%), miltefosine in 33 (9.5%), amphotericin B fat/lipid/colloid/cholesterol in 31 (8.9%), and single-dose L-AmB in 17 (4.9%) arms. A total of 804 SAEs were reported, of which 793 (including 428 deaths) were extracted at study-arm level (11 SAEs were reported at study level only). During the first 30 days, there were 285 (66.6%) deaths, with the overall IRD estimated at 0.068 [95% confidence interval (CI): 0.041-0.114; I2 = 81.4%; 95% prediction interval (PI): 0.001-2.779] per 1,000 person-days at risk; the rate was 0.628 [95% CI: 0.368-1.021; I2 = 82.5%] in Eastern Africa, and 0.041 [95% CI: 0.021-0.081; I2 = 68.1%] in the Indian Subcontinent. In the 21 study arms that clearly indicated allowing the inclusion of patients with HIV co-infections, the IRD was 0.575 [95% CI: 0.244-1.355; I2 = 91.9%], compared to 0.043 [95% CI: 0.020-0.090; I2 = 62.5%] in the 160 arms that excluded HIV co-infections. CONCLUSION: Mortality within the first 30 days of VL treatment initiation was a rarely reported event in clinical trials, with an overall estimated rate of 0.068 deaths per 1,000 person-days at risk, though it varied across regions and patient populations. These estimates may serve as a benchmark for future trials against which mortality data from prospective and pharmacovigilance studies can be compared. The methodological limitations exposed by our review support the need to assemble individual patient data (IPD) to conduct robust IPD meta-analyses and generate stronger evidence from existing trials to support treatment guidelines and guide future research.
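The incidence rate of death (IRD) reported above is simply the number of deaths divided by person-days at risk, scaled to 1,000 person-days. The sketch below shows that arithmetic on invented arm-level numbers and a crude combined rate; the review's pooled estimates instead came from random-effects Poisson regression, which is not reproduced here.

```python
# Back-of-the-envelope sketch (invented arm-level numbers): incidence rate of death
# (IRD) per 1,000 person-days at risk, plus a crude combined rate. The review pooled
# arm-level rates with random-effects Poisson regression, not shown here.
arms = [
    # (arm label, deaths in first 30 days, person-days at risk)
    ("treatment arm A", 1, 18000),
    ("treatment arm B", 4, 26000),
    ("treatment arm C", 0, 15000),
]

for label, deaths, person_days in arms:
    ird = 1000 * deaths / person_days
    print(f"{label}: {ird:.3f} deaths per 1,000 person-days")

total_deaths = sum(d for _, d, _ in arms)
total_days = sum(days for *_, days in arms)
print(f"Crude combined IRD: {1000 * total_deaths / total_days:.3f} per 1,000 person-days")
```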
Subject(s)
Antiprotozoal Agents/adverse effects , Antiprotozoal Agents/therapeutic use , Leishmaniasis, Visceral/drug therapy , Leishmaniasis, Visceral/mortality , Amphotericin B/adverse effects , Amphotericin B/therapeutic use , Antimony/adverse effects , Antimony/therapeutic use , Deoxycholic Acid/adverse effects , Deoxycholic Acid/therapeutic use , Drug Combinations , Humans , Phosphorylcholine/adverse effects , Phosphorylcholine/analogs & derivatives , Phosphorylcholine/therapeutic use
ABSTRACT
BACKGROUND: A higher caseload of visceral leishmaniasis (VL) has been observed among males in community-based surveys. We carried out this review to investigate how the observed disparity in gender distribution is reflected in clinical trials of antileishmanial therapies. METHODS: We identified relevant studies by searching a database of all published clinical trials in VL from 1980 through 2019 indexed in the Infectious Diseases Data Observatory (IDDO) VL clinical trials library. The proportions of male participants enrolled in studies eligible for inclusion in this review were extracted and combined using random-effects meta-analysis of proportions. Results were expressed as percentages and presented with respective 95% confidence intervals (95% CIs). Heterogeneity was quantified using I2 statistics and sub-group meta-analyses were carried out to explore the sources of heterogeneity. RESULTS: We identified 135 published studies (1980-2019; 32,177 patients) with 68.0% [95% CI: 65.9%-70.0%; I2 = 92.6%] of the enrolled participants being males. The corresponding estimates were 67.6% [95% CI: 65.5%-69.7%; n = 91 trials; I2 = 90.5%; 24,218 patients] in studies conducted in the Indian sub-continent and 74.1% [95% CI: 68.4%-79.1%; n = 24 trials; I2 = 94.4%; 6,716 patients] in studies from Eastern Africa. The proportion of male participants was 57.9% [95% CI: 54.2%-61.5%] in studies enrolling children aged <15 years, 78.2% [95% CI: 66.0%-86.9%] in studies that enrolled adults (≥15 years), and 68.1% [95% CI: 65.9%-70.0%] in studies that enrolled patients of all ages. There was a trend towards decreasing proportions of males enrolled over time: 77.1% [95% CI: 70.2%-82.8%; 1,356 patients] in studies published prior to the 1990s compared with 64.3% [95% CI: 60.3%-68.2%; 15,611 patients] in studies published in or after 2010. In studies that allowed the inclusion of patients with HIV co-infections, 76.5% [95% CI: 63.8%-85.9%; 5,123 patients] were males, and the corresponding estimate was 64.0% [95% CI: 61.4%-66.5%; 17,500 patients] in studies which excluded patients with HIV co-infections. CONCLUSIONS: Two-thirds of the participants enrolled in clinical studies in VL conducted in the past 40 years were males, though the imbalance was less in children and in more recent trials. VL treatment guidelines are informed by the knowledge of treatment outcomes from a population that is heavily skewed towards adult males. Investigators planning future studies should consider this fact and ensure approaches for more gender-balanced inclusion.
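The pooled percentages above come from a random-effects meta-analysis of proportions. The sketch below illustrates one common way to do this (logit-transformed proportions pooled with a DerSimonian-Laird between-study variance, plus I² for heterogeneity) on invented trial counts; it is not the review's analysis code.

```python
# Illustrative random-effects pooling of the proportion of male participants across
# trials (invented counts): logit transform, DerSimonian-Laird tau^2, I^2, and a
# back-transformed pooled proportion with 95% CI. Not the review's analysis code.
import numpy as np

males = np.array([140, 60, 220, 35, 400])    # male participants per trial (invented)
totals = np.array([200, 100, 300, 50, 650])  # total participants per trial (invented)

p = males / totals
logit = np.log(p / (1 - p))
var = 1 / males + 1 / (totals - males)       # variance of the logit-transformed proportion

w = 1 / var                                   # fixed-effect weights
q = np.sum(w * (logit - np.sum(w * logit) / w.sum()) ** 2)
dof = len(p) - 1
tau2 = max(0.0, (q - dof) / (w.sum() - np.sum(w ** 2) / w.sum()))  # DerSimonian-Laird
i2 = max(0.0, (q - dof) / q) * 100

w_re = 1 / (var + tau2)                       # random-effects weights
pooled = np.sum(w_re * logit) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
back = lambda x: 1 / (1 + np.exp(-x))         # inverse logit
print(f"Pooled male proportion: {back(pooled):.1%} "
      f"(95% CI {back(lo):.1%} to {back(hi):.1%}); I2 = {i2:.0f}%")
```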
Subject(s)
Antiprotozoal Agents/therapeutic use , Leishmaniasis, Visceral/drug therapy , Clinical Trials as Topic , Female , Humans , Male , Research Design , Sex Factors
ABSTRACT
WHO recommends a minimum of 80% sensitivity and 97% specificity for antigen-detection rapid diagnostic tests (Ag-RDTs), which can be used for patients with symptoms consistent with COVID-19. However, after the acute phase when viral load decreases, use of Ag-RDTs might lead to high rates of false negatives, suggesting that the tests should be replaced by a combination of molecular and serological tests. When the likelihood of having COVID-19 is low, such as for asymptomatic individuals in low prevalence settings, for travel, return to schools, workplaces, and mass gatherings, Ag-RDTs with high negative predictive values can be used with confidence to rule out infection. For those who test positive in low prevalence settings, the high false positive rate means that mitigation strategies, such as molecular testing to confirm positive results, are needed. Ag-RDTs, when used appropriately, are promising tools for scaling up testing and ensuring that patient management and public health measures can be implemented without delay.