Results 1 - 20 of 56
1.
Prev Vet Med ; 228: 106213, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38744092

ABSTRACT

The common liver fluke, Fasciola hepatica, is a trematode parasite found worldwide, typically with a focal distribution due to its requirement for suitable climatic and environmental conditions to complete its lifecycle. Bovine fasciolosis causes suboptimal production and economic losses, including liver condemnation at slaughter. The lack of reliable diagnostic methods is an obstacle to meeting the increasing demand for surveillance and control. The aim of this study was to evaluate the diagnostic accuracy of bulk tank milk (BTM) antibody testing and aggregated abattoir registrations (AAR) of liver fluke as herd-level tests for F. hepatica infection using Bayesian latent class models. Abattoir data from 2019-2021 and BTM samples from the winter of 2020/2021 were collected from 437 herds on the southwest coast of Norway. The BTM samples were analysed with the SVANOVIR® F. hepatica-Ab ELISA test, with results given as an optical density ratio (ODR) and later dichotomized using the cut-off value recommended by the test manufacturer (ODR ≥0.3). Based on the BTM ELISA test, 47.8% of the herds tested positive. The AAR test was defined as the herd-level proportion of slaughtered female animals registered with liver fluke infection during the study period. For this test, three cut-offs were used (proportions of 0.05, 0.1 and 0.2). The herds were split into two subpopulations ("Coastal" and "Inland"), which were expected to differ in the true prevalence of F. hepatica infection based on climate-related and geographical factors. The diagnostic accuracies of both tests were estimated using Bayesian latent class models with minimally informative priors. Post-hoc analysis revealed that the maximum sum of sensitivity (Se) and specificity (Sp) was achieved with a herd-level proportion of ≥0.1 registered with liver fluke as the AAR test.
Using this cut-off, the median estimate of the diagnostic accuracy of the BTM ELISA was 90.4% (84.0-96.2% 95% Posterior Credible Interval (PCI)) for Se and 95.3% (90.6-100% PCI) for Sp, while the median estimate of Se for AAR was 87.5% (81.4-93.1% PCI) and the median estimate of Sp for AAR was 91.0% (85.2-96.5% PCI). The cut-off evaluation of the SVANOVIR® F. hepatica-Ab ELISA test for BTM confirmed the manufacturer's recommended cut-off of ODR ≥0.3 to distinguish positive from negative herds. This study suggests that the AAR and BTM ELISA tests can be used as herd-level tools to monitor liver fluke infection, so that appropriate interventions against infection can be implemented as necessary.
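Both herd-level tests reduce to simple threshold rules on the reported cut-offs. A minimal sketch in Python (function names are my own, and the example inputs are hypothetical; estimating Se and Sp of the dichotomized tests requires the latent class model itself, which is not reproduced here):

```python
def btm_positive(odr, cutoff=0.3):
    """BTM ELISA: herd is positive if the optical density ratio (ODR)
    meets the manufacturer's recommended cut-off (ODR >= 0.3)."""
    return odr >= cutoff

def aar_positive(fluke_registrations, slaughtered_females, cutoff=0.1):
    """AAR: herd is positive if the proportion of slaughtered females
    registered with liver fluke meets the post-hoc optimum of >= 0.1."""
    return fluke_registrations / slaughtered_females >= cutoff

btm_result = btm_positive(0.45)   # ODR above 0.3 -> positive herd
aar_result = aar_positive(3, 20)  # 15% of carcasses registered -> positive
```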

2.
Clin Infect Dis ; 78(Supplement_2): S153-S159, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662699

ABSTRACT

BACKGROUND: Control of schistosomiasis (SCH) relies on the regular distribution of preventive chemotherapy (PC) over many years. For the sake of sustainable SCH control, a decision must be made at some stage to scale down or stop PC. These "stopping decisions" are based on population surveys that assess whether infection levels are sufficiently low. However, the limited sensitivity of the currently used diagnostic (Kato-Katz [KK]) to detect low-intensity infections is a concern. Therefore, the use of new, more sensitive, molecular diagnostics has been proposed. METHODS: Through statistical analysis of Schistosoma mansoni egg counts collected from Burundi and a simulation study using an established transmission model for schistosomiasis, we investigated the extent to which more sensitive diagnostics can improve decision making regarding stopping or continuing PC for the control of S. mansoni. RESULTS: We found that KK-based strategies perform reasonably well for determining when to stop PC at a local scale. Use of more sensitive diagnostics leads to a marginally improved health impact (person-years lived with heavy infection) and comes at a cost of continuing PC for longer (up to around 3 years), unless the decision threshold for stopping PC is adapted upward. However, if this threshold is set too high, PC may be stopped prematurely, resulting in a rebound of infection levels and disease burden (+45% person-years of heavy infection). CONCLUSIONS: We conclude that the potential value of more sensitive diagnostics lies more in the reduction of survey-related costs than in the direct health impact of improved parasite control.


Subject(s)
Cost-Benefit Analysis , Parasite Egg Count , Schistosoma mansoni , Schistosomiasis mansoni , Humans , Animals , Schistosoma mansoni/isolation & purification , Schistosomiasis mansoni/diagnosis , Schistosomiasis mansoni/prevention & control , Schistosomiasis mansoni/drug therapy , Schistosomiasis mansoni/epidemiology , Anthelmintics/therapeutic use , Anthelmintics/economics , Female , Male , Schistosomiasis/diagnosis , Schistosomiasis/prevention & control , Schistosomiasis/drug therapy , Schistosomiasis/epidemiology , Adult , Adolescent , Child , Chemoprevention/economics , Chemoprevention/methods , Young Adult , Sensitivity and Specificity
3.
Parasitology ; : 1-8, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629125

ABSTRACT

Equine strongylid parasites are ubiquitous around the world and are the main targets of parasite control programmes. In recent years, automated fecal egg counting systems based on image analysis have become available, allowing for the collection and analysis of large-scale egg count data. This study aimed to evaluate equine strongylid fecal egg count (FEC) data generated with an automated system over three years in the US, with specific attention to seasonal and regional trends in egg count magnitude and sampling activity. Five US regions were defined: North East, South East, North Central, South Central and West. The data set included state, region and zip code for each FEC. The number of FECs falling in each of the following categories was recorded: (1) 0 eggs per gram (EPG), (2) 1-200 EPG, (3) 201-500 EPG and (4) >500 EPG. The data included 58,329 FECs. A fixed effects model was constructed fitting the number of samples analysed per month, year and region, and a mixed effects model was constructed to fit the number of FECs falling in each of the 4 egg count categories defined above. The overall proportion of horses responsible for 80% of the total FEC output was 18.1%, and this was consistent across years, months and all regions except West, where the proportion was closer to 12%. Statistical analyses showed significant seasonal trends and regional differences in sampling frequency and FEC category. The data demonstrated that veterinarians tended to follow a biphasic pattern when monitoring strongylid FECs in horses, regardless of location.
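The "proportion of horses responsible for 80% of total FEC output" can be computed by ranking counts and accumulating from the highest shedder downwards until 80% of the egg output is reached. A sketch under that assumption (the study's exact aggregation over months and regions is not specified here, and the counts below are hypothetical):

```python
def proportion_for_80pct(fecs):
    """Fraction of animals that together account for 80% of the total
    egg output, accumulating from the highest shedder downwards."""
    total = sum(fecs)
    if total == 0:
        return 0.0
    running, n = 0, 0
    for fec in sorted(fecs, reverse=True):
        running += fec
        n += 1
        if running >= 0.8 * total:
            break
    return n / len(fecs)

counts = [1200, 800, 600, 150, 100, 50, 25, 0, 0, 0]
share = proportion_for_80pct(counts)  # 3 of 10 horses -> 0.3
```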

4.
J Dairy Sci ; 106(8): 5696-5714, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37331876

ABSTRACT

Bovine mastitis is one of the most important diseases in modern dairy farming, as it leads to reduced welfare and milk production and increased need for antibiotic use. Clinical mastitis in Denmark is most often treated with a combination of local and systemic treatment with penicillin. The objective of this randomized clinical trial was to assess whether worse results could be expected with local intramammary treatment with penicillin compared with a combination of local and systemic treatment with penicillin in terms of the bacteriological cure of mild and moderate clinical mastitis cases caused by gram-positive bacteria. We carried out a noninferiority trial with a noninferiority margin set to a relative reduction in bacteriological cure of 15% between these 2 treatment groups to assess the effect of reducing the total antibiotic use by a factor of 16 for each treated case. Clinical mastitis cases from 12 Danish dairy farms were considered for enrollment. On-farm selection of gram-positive cases was carried out by the farm personnel within the first 24 h after a clinical mastitis case was detected. A single farm used bacterial culture results from the on-farm veterinarian, whereas the other 11 farms were provided with an on-farm test to distinguish gram-positive bacteria from gram-negative or samples without bacterial growth. Cases with suspected gram-positive bacteria were allocated to a treatment group: either local or combination. Bacteriological cure was assessed based on the bacterial species identified in the milk sample from the clinical mastitis case and 2 follow-up samples collected approximately 2 and 3 wk after ended treatment. Identification of bacteria was carried out using MALDI-TOF on bacterial culture growth. Noninferiority was assessed using unadjusted cure rates and adjusted cure rates from a multivariable mixed logistic regression model. Of the 1,972 clinical mastitis cases registered, 345 (18%) met all criteria for inclusion (full data). 
The data set was further reduced to 265 cases for the multivariable analysis to include only complete registrations. Streptococcus uberis was the most commonly isolated pathogen. Noninferiority was demonstrated for both unadjusted and adjusted cure rates. The unadjusted cure rates were 76.8% and 83.1% for the local and combined treatments, respectively (full data). The pathogen and the somatic cell count before the clinical case affected the efficacy of treatment; thus, efficient treatment protocols should be herd- and case-specific. The effect of pathogen and somatic cell count on treatment efficacy was similar irrespective of the treatment protocol. We conclude that local penicillin treatment of mild and moderate clinical mastitis cases was noninferior to the combination of local and systemic treatment in terms of bacteriological cure, using a 15% noninferiority margin. This suggests that a potential 16-fold reduction in antimicrobial use per mastitis treatment can be achieved with no adverse effect on the cure rate.
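The noninferiority decision on unadjusted cure rates can be illustrated with a one-sided test on the cure-rate ratio. This is a sketch using a log-ratio normal approximation, not the paper's mixed logistic regression, and the group sizes below are hypothetical:

```python
import math

def noninferior(cured_new, n_new, cured_ref, n_ref, rel_margin=0.15, z=1.645):
    """Declare noninferiority if the one-sided lower confidence bound
    of the cure-rate ratio p_new / p_ref exceeds 1 - rel_margin."""
    p_new, p_ref = cured_new / n_new, cured_ref / n_ref
    log_ratio = math.log(p_new / p_ref)
    # Delta-method standard error of the log risk ratio
    se = math.sqrt((1 - p_new) / cured_new + (1 - p_ref) / cured_ref)
    return math.exp(log_ratio - z * se) > 1 - rel_margin

# Hypothetical groups with cure rates of 83% vs 90%:
ok = noninferior(cured_new=166, n_new=200, cured_ref=180, n_ref=200)
```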


Subject(s)
Cattle Diseases , Mastitis, Bovine , Cattle , Female , Animals , Mastitis, Bovine/drug therapy , Mastitis, Bovine/microbiology , Anti-Bacterial Agents/therapeutic use , Anti-Bacterial Agents/pharmacology , Gram-Positive Bacteria , Bacteria , Penicillins/therapeutic use , Milk , Cattle Diseases/drug therapy
5.
Infect Dis Model ; 8(2): 484-490, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37234097

ABSTRACT

This manuscript introduces the convergence Epidemic Volatility Index (cEVI), a modification of the recently introduced Epidemic Volatility Index (EVI), as an early warning tool for emerging epidemic waves. cEVI has a similar architectural structure to EVI, but with an optimization process inspired by a Geweke diagnostic-type test. Our approach triggers an early warning based on a comparison of the most recently available window of data samples and a window from the preceding time frame. Application of cEVI to COVID-19 pandemic data revealed steady performance in predicting early and intermediate epidemic waves and in retaining a warning during an epidemic wave. Furthermore, we present two basic combinations of EVI and cEVI: (1) their disjunction (cEVI+), which identifies waves earlier than the original index, and (2) their conjunction (cEVI-), which results in higher accuracy. Combining multiple warning systems could potentially create a surveillance umbrella that would result in the early implementation of optimal outbreak interventions.
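The core of the window comparison can be sketched as a Geweke-style two-sample check on recent versus preceding case counts. This is an illustration of the idea only; the window length, z threshold and the published optimization procedure are all assumptions here:

```python
import math
import statistics

def cevi_warning(cases, w=7, z=1.645):
    """Warn if the mean of the most recent window of case counts
    exceeds the mean of the preceding window by more than z
    standard errors of the difference."""
    recent, previous = cases[-w:], cases[-2 * w:-w]
    diff = statistics.mean(recent) - statistics.mean(previous)
    se = math.sqrt(statistics.variance(recent) / w +
                   statistics.variance(previous) / w)
    return se > 0 and diff / se > z

rising = [10, 11, 9, 12, 10, 11, 10, 25, 30, 34, 41, 47, 52, 60]
flat = [10, 11, 9, 12, 10, 11, 10, 11, 10, 9, 12, 10, 11, 10]
# cevi_warning(rising) triggers a warning; cevi_warning(flat) does not.
```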

6.
PLoS Negl Trop Dis ; 17(5): e0011071, 2023 05.
Article in English | MEDLINE | ID: mdl-37196017

ABSTRACT

BACKGROUND: Soil-transmitted helminth (STH) control programs currently lack evidence-based recommendations for cost-efficient survey designs for monitoring and evaluation. Here, we present a framework to provide evidence-based recommendations, using a case study of therapeutic drug efficacy monitoring based on the examination of helminth eggs in stool. METHODS: We performed an in-depth analysis of the operational costs to process one stool sample for three diagnostic methods (Kato-Katz, Mini-FLOTAC and FECPAKG2). Next, we performed simulations to determine the probability of detecting a truly reduced therapeutic efficacy for different scenarios of STH species (Ascaris lumbricoides, Trichuris trichiura and hookworms), pre-treatment infection levels, survey design (screen and select (SS); screen, select and retest (SSR) and no selection (NS)) and number of subjects enrolled (100-5,000). Finally, we integrated the outcome of the cost assessment into the simulation study to estimate the total survey costs and determined the most cost-efficient survey design. PRINCIPAL FINDINGS: Kato-Katz allowed for both the highest sample throughput and the lowest cost per test, while FECPAKG2 required both the most laboratory time and was the most expensive. Counting of eggs accounted for 23% (FECPAKG2) or ≥80% (Kato-Katz and Mini-FLOTAC) of the total time-to-result. NS survey designs in combination with Kato-Katz were the most cost-efficient to assess therapeutic drug efficacy in all scenarios of STH species and endemicity. CONCLUSIONS/SIGNIFICANCE: We confirm that Kato-Katz is the fecal egg counting method of choice for monitoring therapeutic drug efficacy, but that the survey design currently recommended by WHO (SS) should be updated. Our generic framework, which captures laboratory time and material costs, can be used to further support cost-efficient choices for other important surveys informing STH control programs. 
In addition, it can be used to explore the value of alternative diagnostic techniques, like automated egg counting, which may further reduce operational costs. TRIAL REGISTRATION: ClinicalTrials.gov NCT03465488.


Subject(s)
Helminthiasis , Helminths , Animals , Humans , Ascaris lumbricoides , Feces , Helminthiasis/drug therapy , Helminthiasis/diagnosis , Sensitivity and Specificity , Soil , Trichuris
7.
Vet Parasitol ; 318: 109936, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37121092

ABSTRACT

The faecal egg count reduction test (FECRT) remains the method of choice for establishing the efficacy of anthelmintic compounds in the field, including the diagnosis of anthelmintic resistance. We present a guideline for improving the standardization and performance of the FECRT that has four sections. In the first section, we address the major issues relevant to experimental design, choice of faecal egg count (FEC) method, statistical analysis, and interpretation of the FECRT results. In the second section, we make a series of general recommendations that are applicable across all animals addressed in this guideline. In the third section, we provide separate guidance details for cattle, small ruminants (sheep and goats), horses and pigs to address the issues that are specific to the different animal types. Finally, we provide overviews of the specific details required to conduct an FECRT for each of the different host species. To address the issues of statistical power vs. practicality, we also provide two separate options for each animal species; (i) a version designed to detect small changes in efficacy that is intended for use in scientific studies, and (ii) a less resource-intensive version intended for routine use by veterinarians and livestock owners to detect larger changes in efficacy. Compared to the previous FECRT recommendations, four important differences are noted. First, it is now generally recommended to perform the FECRT based on pre- and post-treatment FEC of the same animals (paired study design), rather than on post-treatment FEC of both treated and untreated (control) animals (unpaired study design). Second, instead of requiring a minimum mean FEC (expressed in eggs per gram (EPG)) of the group to be tested, the new requirement is for a minimum total number of eggs to be counted under the microscope (cumulative number of eggs counted before the application of a conversion factor). 
Third, we provide flexibility in the required size of the treatment group by presenting three separate options that depend on the (expected) number of eggs counted. Finally, these guidelines address all major livestock species, and the thresholds for defining reduced efficacy are adapted and aligned to host species, anthelmintic drug and parasite species. In conclusion, these new guidelines provide improved methodology and standardization of the FECRT for all major livestock species.
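The FECRT point estimate itself is simple arithmetic on the paired counts; a minimal sketch (the EPG values are hypothetical, and the guideline's confidence-interval methods and minimum egg-count requirements are not reproduced here):

```python
def fecrt_reduction(pre, post):
    """Percent efficacy from paired pre- and post-treatment faecal
    egg counts: 100 * (1 - mean(post) / mean(pre))."""
    mean_pre = sum(pre) / len(pre)
    mean_post = sum(post) / len(post)
    return 100 * (1 - mean_post / mean_pre)

pre = [400, 250, 800, 150, 600]  # EPG before treatment (same animals)
post = [0, 10, 40, 0, 30]        # EPG after treatment
efficacy = fecrt_reduction(pre, post)  # ~96.4% reduction
```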


Subject(s)
Anthelmintics , Ovum , Animals , Horses , Cattle , Sheep , Swine , Parasite Egg Count/veterinary , Parasite Egg Count/methods , Anthelmintics/pharmacology , Anthelmintics/therapeutic use , Feces/parasitology , Goats , Drug Resistance
8.
Lancet Glob Health ; 11(5): e740-e748, 2023 05.
Article in English | MEDLINE | ID: mdl-36972722

ABSTRACT

BACKGROUND: WHO recommends the implementation of control programmes for strongyloidiasis, a neglected tropical disease caused by Strongyloides stercoralis. Specific recommendations on the diagnostic test or tests to be used for such programmes have yet to be defined. The primary objective of this study was to estimate the accuracy of five tests for strongyloidiasis. Secondary objectives were to evaluate acceptability and feasibility of use in an endemic area. METHODS: The ESTRELLA study was a cross-sectional study for which we enrolled school-age children living in remote villages of Ecuador. Recruitment took place in two periods (Sept 9-19, 2021, and April 18-June 11, 2022). Children supplied one fresh stool sample and underwent blood collection via finger prick. Faecal tests were a modified Baermann method and an in-house real-time PCR test. Antibody assays were a recombinant antigen rapid diagnostic test; a crude antigen-based ELISA (Bordier ELISA); and an ELISA based on two recombinant antigens (Strongy Detect ELISA). A Bayesian latent class model was used to analyse the data. FINDINGS: 778 children were enrolled in the study and provided the required samples. Strongy Detect ELISA had the highest sensitivity at 83·5% (95% credible interval 73·8-91·8), while Bordier ELISA had the highest specificity (100%, 99·8-100). Bordier ELISA plus either PCR or Baermann had the best performance in terms of positive and negative predictive values. The procedures were well accepted by the target population. However, study staff found the Baermann method cumbersome and time-consuming and were concerned about the amount of plastic waste produced. INTERPRETATION: The combination of Bordier ELISA with either faecal test performed best in this study. Practical aspects (including costs, logistics, and local expertise) should, however, also be taken into consideration when selecting tests in different contexts. Acceptability might differ in other settings. 
FUNDING: Italian Ministry of Health. TRANSLATION: For the Spanish translation of the abstract see Supplementary Materials section.


Subject(s)
Strongyloides stercoralis , Strongyloidiasis , Child , Animals , Humans , Strongyloides stercoralis/genetics , Strongyloidiasis/diagnosis , Strongyloidiasis/epidemiology , Cross-Sectional Studies , Ecuador , Bayes Theorem , Feasibility Studies , Real-Time Polymerase Chain Reaction , Feces , Diagnostic Tests, Routine , Sensitivity and Specificity
9.
BMC Med Res Methodol ; 23(1): 55, 2023 02 27.
Article in English | MEDLINE | ID: mdl-36849911

ABSTRACT

Safe and effective vaccines are crucial for the control of Covid-19 and to protect individuals at higher risk of severe disease. The test-negative design is a popular option for evaluating the effectiveness of Covid-19 vaccines. However, the findings can be biased by several factors, including imperfect sensitivity and/or specificity of the test used for diagnosing SARS-CoV-2 infection. We propose a simple Bayesian modeling approach for estimating vaccine effectiveness that is robust even when the diagnostic test is imperfect. We use simulation studies to demonstrate the robustness of our method to misclassification bias and illustrate the utility of our approach using real-world examples.
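For context, the crude test-negative estimator is vaccine effectiveness = 1 minus the odds ratio of vaccination among test-positives versus test-negatives. The sketch below assumes a perfect diagnostic test, which is exactly the assumption the paper's Bayesian model relaxes; the counts are hypothetical:

```python
def ve_test_negative(vax_pos, vax_neg, unvax_pos, unvax_neg):
    """Crude VE from a test-negative design: 1 - OR, where OR compares
    the odds of vaccination in test-positive vs test-negative subjects."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1 - odds_ratio

ve = ve_test_negative(vax_pos=50, vax_neg=400, unvax_pos=200, unvax_neg=350)
# ve = 0.78125, i.e. ~78% effectiveness under a perfect test
```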


Subject(s)
COVID-19 , Humans , COVID-19/prevention & control , COVID-19 Vaccines , Bayes Theorem , Vaccine Efficacy , SARS-CoV-2
10.
Vet Parasitol ; 314: 109867, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36621042

ABSTRACT

The faecal egg count reduction test (FECRT) is the primary diagnostic tool used for detecting anthelmintic resistance at the farm level. It is therefore extremely important that the experimental design of a FECRT and the susceptibility classification of the result use standardised and statistically rigorous methods. Several different approaches for improving the analysis of FECRT data have been proposed, but little work has been published on how to address the issue of prospective sample size calculations. Here, we provide a complete and detailed overview of the quantitative issues relevant to a FECRT, starting from basic statistical principles. We then present a new approach for determining sample size requirements for the FECRT that is built on a solid statistical framework, and provide a rigorous anthelmintic drug efficacy classification system for use with the FECRT in livestock. Our approach uses two separate statistical tests, a one-sided inferiority test for resistance and a one-sided non-inferiority test for susceptibility, and determines a classification of resistant, susceptible or inconclusive based on the combined result. Since this approach is based on two independent one-sided tests, we recommend that a 90% CI be used in place of the historically used 95% CI. This maintains the desired Type I error rate of 5% and simultaneously reduces the required sample size. We demonstrate the use of this framework to provide sample size calculations that are rooted in the well-understood concept of statistical power. Tailoring to specific host/parasite systems is possible using typical values for expected pre-treatment and post-treatment variability in egg counts, as well as within-animal correlation in egg counts. We provide estimates for these parameters for ruminants, horses and swine based on a re-examination of datasets that were available to us from a combination of published data and other sources.
An illustrative example is provided to demonstrate the use of the framework, and parameter estimates are presented to estimate the required sample size for a hypothetical FECRT using ivermectin in cattle. The sample size calculation method and classification framework presented here underpin the sample size recommendations provided in the upcoming FECRT WAAVP guidelines for detection of anthelmintic resistance in ruminants, horses, and swine, and have also been made freely available as open-source software via our website (https://www.fecrt.com).
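The combined classification logic can be sketched directly from the two one-sided tests, taking the bounds of the 90% CI for the percent reduction as input. The target efficacy and non-inferiority margin below are placeholders, not the guideline's species- or drug-specific values:

```python
def classify_efficacy(ci_lower, ci_upper, target=99.0, margin=4.0):
    """Combine a one-sided inferiority test against the target efficacy
    (evidence of resistance) with a one-sided non-inferiority test
    within `margin` of the target (evidence of susceptibility)."""
    resistant_evidence = ci_upper < target
    susceptible_evidence = ci_lower > target - margin
    if resistant_evidence and not susceptible_evidence:
        return "resistant"
    if susceptible_evidence and not resistant_evidence:
        return "susceptible"
    return "inconclusive"  # neither test, or both, reached significance

# classify_efficacy(80.0, 95.0) -> "resistant"
# classify_efficacy(97.0, 99.9) -> "susceptible"
# classify_efficacy(90.0, 99.5) -> "inconclusive"
```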


Subject(s)
Anthelmintics , Ovum , Animals , Cattle , Horses , Swine , Sample Size , Prospective Studies , Feces/parasitology , Anthelmintics/pharmacology , Anthelmintics/therapeutic use , Parasite Egg Count/veterinary , Parasite Egg Count/methods , Ruminants , Drug Resistance
12.
Anim Welf ; 32: e47, 2023.
Article in English | MEDLINE | ID: mdl-38487445

ABSTRACT

Animal welfare is of increasing public interest, and the pig industry in particular is subject to much attention. The aim of this study was to identify and compare areas of animal welfare concern for commercial pigs in four different production stages: (1) gestating sows and gilts; (2) lactating sows; (3) piglets; and (4) weaner-to-finisher pigs. One welfare assessment protocol was developed for each stage, comprising between 20 and 29 animal welfare measures, including resource-, management- and animal-based ones. Twenty-one Danish farms were visited once between January 2015 and February 2016 in a cross-sectional design. Experts (n = 26; advisors, scientists and animal welfare controllers) assessed the severity of the outcome measures. This was combined with the on-farm prevalence of each measure, and the outcome was used to calculate areas of concern, defined as measures where the median of all farms fell below the value defined as 'acceptable welfare.' Between five and seven areas of concern were identified for each production stage. With the exception of carpal lesions in piglets, all areas of concern were resource- and management-based and mainly related to housing, with inadequate available space and the floor type in the resting area being overall concerns across all production stages. This means that animal-based measures were largely unaffected by perceived deficits in resource-based measures. Great variation existed for the majority of measures identified as areas of concern, demonstrating that achieving a high welfare score is possible in the Danish system.

13.
Nat Commun ; 13(1): 5760, 2022 09 30.
Article in English | MEDLINE | ID: mdl-36180438

ABSTRACT

SARS coronavirus 2 (SARS-CoV-2) continues to evolve and new variants emerge. Using nationwide Danish data, we estimate the transmission dynamics of SARS-CoV-2 Omicron subvariants BA.1 and BA.2 within households. Among 22,678 primary cases, we identified 17,319 secondary infections among 50,588 household contacts during a 1-7 day follow-up. The secondary attack rate (SAR) was 29% and 39% in households infected with Omicron BA.1 and BA.2, respectively. BA.2 was associated with increased susceptibility to infection for unvaccinated household contacts (Odds Ratio (OR) 1.99; 95%-CI 1.72-2.31), fully vaccinated contacts (OR 2.26; 95%-CI 1.95-2.62) and booster-vaccinated contacts (OR 2.65; 95%-CI 2.29-3.08), compared to BA.1. We also found increased infectiousness from unvaccinated primary cases infected with BA.2 compared to BA.1 (OR 2.47; 95%-CI 2.15-2.84), but not for fully vaccinated (OR 0.66; 95%-CI 0.57-0.78) or booster-vaccinated primary cases (OR 0.69; 95%-CI 0.59-0.82). Omicron BA.2 is inherently more transmissible than BA.1. Its immune-evasive properties also reduce the protective effect of vaccination against infection, but do not increase the infectiousness of breakthrough infections from vaccinated individuals.
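The reported odds ratios come from adjusted regression models, but the crude quantities are simple to compute. A sketch with hypothetical contact counts chosen to match the reported SARs of 39% (BA.2) and 29% (BA.1):

```python
def secondary_attack_rate(infected_contacts, total_contacts):
    """Fraction of household contacts infected within follow-up."""
    return infected_contacts / total_contacts

def odds_ratio(a_pos, a_tot, b_pos, b_tot):
    """Crude (unadjusted) odds ratio of infection in group A vs group B."""
    return (a_pos / (a_tot - a_pos)) / (b_pos / (b_tot - b_pos))

sar_ba2 = secondary_attack_rate(390, 1000)  # 0.39
sar_ba1 = secondary_attack_rate(290, 1000)  # 0.29
or_ba2_vs_ba1 = odds_ratio(390, 1000, 290, 1000)  # ~1.57 (crude)
```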


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , COVID-19/prevention & control , Denmark/epidemiology , Family Characteristics , Humans , SARS-CoV-2/genetics
14.
Nat Commun ; 13(1): 5573, 2022 09 23.
Article in English | MEDLINE | ID: mdl-36151099

ABSTRACT

In late 2021, the Omicron SARS-CoV-2 variant overtook the previously dominant Delta variant, but the extent to which this transition was driven by immune evasion or a change in the inherent transmissibility is currently unclear. We estimate SARS-CoV-2 transmission within Danish households during December 2021. Among 26,675 households (8,568 with the Omicron VOC), we identified 14,140 secondary infections within a 1-7-day follow-up period. The secondary attack rate was 29% and 21% in households infected with Omicron and Delta, respectively. For Omicron, the odds of infection were 1.10 (95%-CI: 1.00-1.21) times higher for unvaccinated, 2.38 (95%-CI: 2.23-2.54) times higher for fully vaccinated and 3.20 (95%-CI: 2.67-3.83) times higher for booster-vaccinated contacts compared to Delta. We conclude that the transition from Delta to Omicron VOC was primarily driven by immune evasiveness and to a lesser extent an inherent increase in the basic transmissibility of the Omicron variant.


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , Denmark/epidemiology , Family Characteristics , Humans
15.
PLoS Negl Trop Dis ; 16(8): e0010709, 2022 08.
Article in English | MEDLINE | ID: mdl-35984809

ABSTRACT

BACKGROUND: Infections with Ascaris lumbricoides and Trichuris trichiura remain significant contributors to the global burden of neglected tropical diseases. Infection may in particular affect child development as they are more likely to be infected with T. trichiura and/or A. lumbricoides and to carry higher worm burdens than adults. Whilst the impact of heavy infections are clear, the effects of moderate infection intensities on the growth and development of children remain elusive. Field studies are confounded by a lack of knowledge of infection history, nutritional status, presence of co-infections and levels of exposure to infective eggs. Therefore, animal models are required. Given the physiological similarities between humans and pigs but also between the helminths that infect them; A. suum and T. suis, growing pigs provide an excellent model to investigate the direct effects of Ascaris spp. and Trichuris spp. on weight gain. METHODS AND RESULTS: We employed a trickle infection protocol to mimic natural co-infection to assess the effect of infection intensity, determined by worm count (A. suum) or eggs per gram of faeces (A. suum and T. suis), on weight gain in a large pig population (n = 195) with variable genetic susceptibility. Pig body weights were assessed over 14 weeks. Using a post-hoc statistical approach, we found a negative association between weight gain and T. suis infection. For A. suum, this association was not significant after adjusting for other covariates in a multivariable analysis. Estimates from generalized linear mixed effects models indicated that a 1 kg increase in weight gain was associated with 4.4% (p = 0.00217) decrease in T. suis EPG and a 2.8% (p = 0.02297) or 2.2% (p = 0.0488) decrease in A. suum EPG or burden, respectively. CONCLUSIONS: Overall this study has demonstrated a negative association between STH and weight gain in growing pigs but also that T. suis infection may be more detrimental that A. suum on growth.


Subject(s)
Ascariasis , Swine Diseases , Trichuriasis , Animals , Ascariasis/complications , Ascariasis/epidemiology , Ascariasis/veterinary , Child , Feces/parasitology , Humans , Swine , Swine Diseases/epidemiology , Swine Diseases/parasitology , Trichuriasis/complications , Trichuriasis/epidemiology , Trichuriasis/veterinary , Trichuris/physiology , Weight Gain
16.
J Med Virol ; 94(10): 4754-4761, 2022 10.
Article in English | MEDLINE | ID: mdl-35713189

ABSTRACT

Polymerase chain reaction (PCR) and antigen tests have been used extensively for screening during the severe acute respiratory syndrome coronavirus 2 pandemic. However, the real-world sensitivity and specificity of the two testing procedures in the field have not yet been estimated without assuming that the PCR constitutes a gold standard test. We use latent class models to estimate the in situ performance of both tests using data from the Danish national registries. We find that the specificity of both tests is very high (>99.7%), while the sensitivities are 95.7% (95% confidence interval [CI]: 92.8%-98.4%) and 53.8% (95% CI: 49.8%-57.9%) for the PCR and antigen tests, respectively. These findings have implications for the use of confirmatory PCR tests following a positive antigen test result: we estimate that serial testing is counterproductive at higher prevalence levels.
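The point about confirmatory testing follows from Bayes' theorem: at higher prevalence, a positive antigen result is already very likely a true positive, so a confirmatory PCR mainly adds cost and delay, and its occasional false negatives wrongly clear truly infected people. A sketch using the point estimates above (specificity fixed at 99.7% for illustration; the abstract reports >99.7%):

```python
def ppv(prevalence, se, sp):
    """Positive predictive value of a test via Bayes' theorem."""
    true_pos = prevalence * se
    false_pos = (1 - prevalence) * (1 - sp)
    return true_pos / (true_pos + false_pos)

# Antigen test: Se = 53.8%, Sp assumed to be 99.7%
ppv_low = ppv(0.005, 0.538, 0.997)  # ~0.47 at 0.5% prevalence
ppv_high = ppv(0.20, 0.538, 0.997)  # ~0.98 at 20% prevalence
```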


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/diagnosis , COVID-19 Testing , Diagnostic Tests, Routine , Humans , Latent Class Analysis , Pandemics , SARS-CoV-2/genetics , Sensitivity and Specificity
17.
Prev Vet Med ; 201: 105606, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35286870

ABSTRACT

Toxoplasma gondii infection in pigs is commonly diagnosed using serological tests that detect IgG antibodies targeted against the parasite. Such tests include enzyme-linked immunosorbent assay (ELISA), modified agglutination test (MAT), and western blot (WB), which are commercially available as rapid test kits. In this study, we evaluated the manufacturer-recommended cut-off of the ELISA-PrioCHECK test kit and determined a new optimal cut-off for identifying T. gondii infections in pigs. Assessment of the commercial ELISA kit was done by including data from two additional serological tests, MAT and WB, applied to seven pig population categories with varying prevalences. A total of 233 plasma samples that were previously used in other studies investigating T. gondii seroprevalence in pigs in Denmark were randomly selected for inclusion, including 95 samples that had previously been analysed with all three tests and an additional 138 samples that were analysed using the three serological tests for this study. In the absence of a gold standard test, a latent class model was fit to the data to obtain estimates of sensitivity and specificity for each of the tests along with prevalence in each of the populations. A cut-off that maximized the sensitivity and specificity of the ELISA test was then selected. The optimal cut-off value for percent of positive control (PP) in ELISA-PrioCHECK was estimated to be 27.7 PP, which is higher than the cut-off value of 20 PP recommended by the manufacturer. At this cut-off, the estimated sensitivities of ELISA, MAT and WB were 99.2% (96.3-100.0%), 96.3% (88.0-100.0%), and 89.8% (80.0-98.0%), respectively. The estimated specificities of ELISA, MAT and WB were 95.2% (92.5-97.6%), 99.6% (97.5-100.0%), and 98.2% (95.9-100.0%), respectively. Our findings have broad relevance to the use of the ELISA-PrioCHECK test kit for detecting Toxoplasma gondii infection in pigs.
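The cut-off selection step maximizes the sum of sensitivity and specificity (equivalently, Youden's J = Se + Sp - 1) across candidate cut-offs. A sketch with a hypothetical grid of latent-class estimates around the reported optimum of 27.7 PP:

```python
def best_cutoff(candidates):
    """Return the cut-off with the largest Se + Sp from a list of
    (cutoff, sensitivity, specificity) tuples."""
    return max(candidates, key=lambda c: c[1] + c[2])[0]

grid = [
    (20.0, 0.999, 0.905),  # manufacturer's cut-off (hypothetical Se/Sp)
    (25.0, 0.995, 0.940),
    (27.7, 0.992, 0.952),  # reported optimum
    (30.0, 0.960, 0.958),
]
optimal = best_cutoff(grid)  # 27.7
```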


Subject(s)
Toxoplasma , Toxoplasmosis, Animal , Agglutination Tests/veterinary , Animals , Antibodies, Protozoan , Bayes Theorem , Diagnostic Tests, Routine , Enzyme-Linked Immunosorbent Assay/veterinary , Seroepidemiologic Studies , Swine , Toxoplasmosis, Animal/diagnosis , Toxoplasmosis, Animal/epidemiology
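The cut-off selection step described in this abstract, maximizing sensitivity plus specificity, can be sketched as a scan over candidate cut-offs using Youden's J. This is a simplified illustration, not the authors' latent class implementation: it assumes the latent infection status per sample has already been estimated, whereas the paper estimates status and accuracy jointly in a Bayesian latent class model.

```python
def youden_cutoff(pp_values, infected):
    """Return the PP cut-off maximizing Youden's J = Se + Sp - 1.

    pp_values : continuous ELISA percent-positive (PP) readings
    infected  : 0/1 latent infection states, same order (assumed known
                here; in the paper these are latent and estimated)
    """
    best_j, best_cut = -1.0, None
    for cut in sorted(set(pp_values)):  # candidate cut-offs: observed values
        tp = sum(1 for p, d in zip(pp_values, infected) if d and p >= cut)
        fn = sum(1 for p, d in zip(pp_values, infected) if d and p < cut)
        tn = sum(1 for p, d in zip(pp_values, infected) if not d and p < cut)
        fp = sum(1 for p, d in zip(pp_values, infected) if not d and p >= cut)
        se = tp / (tp + fn)
        sp = tn / (tn + fp)
        j = se + sp - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

With perfectly separated toy data, the scan recovers the lowest cut-off that classifies every sample correctly.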
18.
Sci Rep ; 11(1): 23775, 2021 12 10.
Article in English | MEDLINE | ID: mdl-34893634

ABSTRACT

Early warning tools are crucial for the timely application of intervention strategies and the mitigation of the adverse health, social and economic effects associated with outbreaks of epidemic potential such as COVID-19. This paper introduces the Epidemic Volatility Index (EVI), a new, conceptually simple early warning tool for oncoming epidemic waves. EVI is based on the volatility of newly reported cases per unit of time, ideally per day, and issues an early warning when the volatility change rate exceeds a threshold. Data on the daily confirmed cases of COVID-19 are used to demonstrate the use of EVI. Results from the COVID-19 epidemic in Italy and New York State are presented here, based on the number of confirmed cases of COVID-19 from January 22, 2020, until April 13, 2021. Live daily updated predictions for all world countries and each state of the United States of America are publicly available online. For Italy, the overall sensitivity for EVI was 0.82 (95% Confidence Intervals: 0.75; 0.89) and the specificity was 0.91 (0.88; 0.94). For New York, the corresponding values were 0.55 (0.47; 0.64) and 0.88 (0.84; 0.91). Consecutive issuance of early warnings is a strong indicator of main epidemic waves in any country or state. EVI's application to data from the current COVID-19 pandemic revealed a consistent and stable performance in detecting new waves. The application of EVI to other epidemics and syndromic surveillance tasks in combination with existing early warning systems will enhance our ability to act swiftly and thereby enhance containment of outbreaks.


Subject(s)
COVID-19/epidemiology , Pandemics , Humans , Italy/epidemiology , New York/epidemiology , Predictive Value of Tests , Time Factors
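The core idea of the abstract above, issuing a warning when the volatility of daily case counts rises faster than a threshold, can be sketched as follows. This is a minimal illustration of that idea, not the published EVI algorithm: the window length, the use of a rolling standard deviation, and the relative-change threshold are all assumptions for this sketch.

```python
import statistics

def evi_warnings(new_cases, window=7, threshold=0.2):
    """Flag days where rolling case-count volatility jumps sharply.

    Simplified EVI-style rule (illustrative only): volatility is the
    population std. dev. of the last `window` daily counts; a warning
    is issued when volatility grows by more than `threshold` (relative
    change) compared with the previous day's volatility.
    Returns the 0-based indices of warning days.
    """
    warnings = []
    prev_vol = None
    for t in range(window, len(new_cases) + 1):
        vol = statistics.pstdev(new_cases[t - window:t])
        if prev_vol and prev_vol > 0 and (vol - prev_vol) / prev_vol > threshold:
            warnings.append(t - 1)  # day whose arrival triggered the warning
        prev_vol = vol
    return warnings
```

On a series that sits flat and then jumps, the warning fires as the jump enters the rolling window.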
19.
Animals (Basel) ; 11(11)2021 Oct 20.
Article in English | MEDLINE | ID: mdl-34827750

ABSTRACT

Control of infectious diseases in livestock has often been motivated by food safety concerns and the economic impact on livestock production. However, diseases may also affect animal welfare. We present an approach to quantify the effect of five infectious diseases on animal welfare in cattle (three diseases) and pigs (two diseases). We grouped clinical manifestations that often occur together into lists of clinical entities for each disease based on literature reviews, and subsequently estimated "suffering scores" based on an aggregation of duration, frequency, and severity. The duration and severity were based on literature reviews and expert knowledge elicitation, while frequency was based mainly on estimates from the literature. The resulting suffering scores were compared to scores from common welfare hazards found under Danish conditions. Most notably, the suffering scores for cattle diseases were ranked as: bovine viral diarrhoea and infection with Mycobacterium avium subsp. paratuberculosis > infectious bovine rhinotracheitis, and for pigs as: porcine reproductive and respiratory syndrome > Aujeszky's disease. The approach has limitations due to the limited data available in the literature and uncertainties associated with expert knowledge, but it can provide decision makers with a tool to quantify the impact of infections on animal welfare given these uncertainties.
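The aggregation step described above can be sketched in a few lines. The abstract does not state the exact aggregation rule, so the product-then-sum form below is purely an assumption for illustration, as are the value ranges of the inputs.

```python
def suffering_score(entities):
    """Aggregate a disease's clinical entities into one suffering score.

    Hypothetical aggregation (the paper's exact rule is not given in the
    abstract): per clinical entity, multiply severity, duration and
    frequency, then sum over entities.
    Each entity is a tuple: (severity 0-1, duration in days, frequency 0-1).
    """
    return sum(sev * dur * freq for sev, dur, freq in entities)
```

For example, two entities with (severity 0.5, duration 10 d, frequency 0.2) and (severity 1.0, duration 2 d, frequency 0.1) would score 1.0 + 0.2 under this assumed rule.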

20.
Front Vet Sci ; 8: 674771, 2021.
Article in English | MEDLINE | ID: mdl-34113678

ABSTRACT

Bovine respiratory disease (BRD) results from interactions between pathogens, environmental stressors, and host factors. Obtaining a diagnosis of the causal pathogens is challenging, but the use of high-throughput real-time PCR (rtPCR) may help target preventive and therapeutic interventions. The aim of this study was to improve the interpretation of rtPCR results by analysing their associations with clinical observations. The objective was to develop and illustrate a field-data-driven statistical method to guide the selection of relevant quantification cycle cut-off values for pathogens associated with BRD for the high-throughput rtPCR system "Fluidigm BioMark HD" based on nasal swabs from calves. We used data from 36 herds enrolled in a Danish field study where 340 calves within pre-determined age-groups were subject to clinical examination and nasal swabs up to four times. The samples were analysed with the rtPCR system. Each of the 1,025 observation units was classified as sick with BRD or healthy, based on clinical scores. The optimal rtPCR results to predict BRD were investigated for Pasteurella multocida, Mycoplasma bovis, Histophilus somni, Mannheimia haemolytica, and Trueperella pyogenes by interpreting scatterplots and results of mixed effects logistic regression models. The clinically relevant rtPCR cut-off suggested for P. multocida and M. bovis was ≤ 21.3. For H. somni it was ≤ 17.4, while no cut-off could be determined for M. haemolytica and T. pyogenes. The demonstrated approach can provide objective support in the choice of clinically relevant cut-offs. However, for robust performance of the regression model, sufficient amounts of suitable data are required.
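The cut-off selection idea above, dichotomizing the rtPCR quantification cycle (Cq) at candidate thresholds and checking which threshold associates most strongly with clinical BRD, can be sketched as follows. This is a deliberately simplified stand-in for the paper's mixed effects logistic regression: herd-level random effects are ignored, and the strength of association is summarized by a 2x2 odds ratio with a continuity correction rather than a fitted model.

```python
def best_cq_cutoff(cq_values, brd_status, candidates):
    """Pick the Cq cut-off whose dichotomisation is most strongly
    associated with clinical BRD (largest odds ratio).

    cq_values  : Cq per sample; None means pathogen not detected
    brd_status : 1 = sick with BRD, 0 = healthy (clinical scoring)
    candidates : Cq thresholds to try; sample is "positive" if Cq <= cut
    """
    best_or, best_cut = 0.0, None
    for cut in candidates:
        a = b = c = d = 0  # 2x2 table: (positive/negative) x (sick/healthy)
        for cq, sick in zip(cq_values, brd_status):
            pos = cq is not None and cq <= cut
            if pos and sick:
                a += 1
            elif pos and not sick:
                b += 1
            elif not pos and sick:
                c += 1
            else:
                d += 1
        # Haldane-Anscombe 0.5 continuity correction avoids division by zero
        odds_ratio = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
        if odds_ratio > best_or:
            best_or, best_cut = odds_ratio, cut
    return best_cut, best_or
```

On toy data where low Cq values (strong detection) line up with sick calves, the scan settles on the threshold that separates the two groups.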
