Results 1 - 20 of 99
1.
Glob Epidemiol ; 7: 100130, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38188038

ABSTRACT

Drawing sound causal inferences from observational data is often challenging for both authors and reviewers. This paper discusses the design and application of an Artificial Intelligence Causal Research Assistant (AIA) that seeks to help authors improve causal inferences and conclusions drawn from epidemiological data in health risk assessments. The AIA-assisted review process provides structured reviews and recommendations for improving the causal reasoning, analyses, and interpretations made in scientific papers based on epidemiological data. Causal analysis methodologies range from earlier Bradford-Hill considerations to current causal directed acyclic graph (DAG) and related models. AIA seeks to make these methods more accessible and useful to researchers. AIA uses an external script, a "Causal AI Booster" (CAB) program based on the classical AI concept of slot-filling in frames organized into task hierarchies to complete goals, to guide Large Language Models (LLMs), such as OpenAI's ChatGPT or Google's LaMDA (Bard), to systematically review manuscripts and create both (a) recommendations for improving analyses and reporting and (b) explanations and support for those recommendations. The LLM completes review tables and summaries systematically and in order; for example, recommendations for how to state and caveat causal conclusions in the Abstract and Discussion sections reflect previous analyses of the Study Design and Data Analysis sections. This work illustrates how current AI can contribute to reviewing and providing constructive feedback on research documents. We believe that such AI-assisted review shows promise for enhancing the quality of causal reasoning and exposition in epidemiological studies, and it suggests the potential for effective human-AI collaboration in scientific authoring and review processes.
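The slot-filling architecture described above can be sketched in a few lines. This is an illustration of the general frames-and-task-hierarchy idea, not the authors' CAB script; `ask_llm` is a hypothetical placeholder for whatever LLM API is used.

```python
# Sketch only: frames with named slots, organized into a task hierarchy,
# filled depth-first so later tasks can see earlier answers.
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A review task whose named slots the LLM fills, in order."""
    name: str
    slots: dict = field(default_factory=dict)       # slot name -> filled text
    subframes: list = field(default_factory=list)   # child tasks in the hierarchy

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call (OpenAI, Google, etc.).
    return f"[LLM answer to: {prompt[:40]}...]"

def fill(frame: Frame, manuscript: str, context: dict) -> dict:
    """Depth-first slot filling; each prompt sees earlier answers, so
    recommendations for the Abstract can reflect the Study Design review."""
    for slot in list(frame.slots):
        prompt = (f"Task: {frame.name}. Fill slot '{slot}'.\n"
                  f"Earlier findings: {context}\nManuscript:\n{manuscript}")
        answer = ask_llm(prompt)
        frame.slots[slot] = answer
        context[f"{frame.name}/{slot}"] = answer
    for sub in frame.subframes:
        fill(sub, manuscript, context)
    return context

review = Frame("Review", {"overall summary": ""}, [
    Frame("Study Design", {"design type": "", "supports causal inference?": ""}),
    Frame("Data Analysis", {"methods": "", "confounding addressed?": ""}),
    Frame("Abstract & Discussion", {"recommended causal caveats": ""}),
])
notes = fill(review, "manuscript text here", {})
```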

2.
Glob Epidemiol ; 6: 100114, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37637716

ABSTRACT

Exposure-response curves are among the most widely used tools of quantitative health risk assessment. However, we argue that exactly what they mean is usually left ambiguous, making it impossible to answer such fundamental questions as whether and by how much reducing exposure by a stated amount would change average population risks and distributions of individual risks. Recent concepts and computational methods from causal artificial intelligence (CAI) and machine learning (ML) can be applied to clarify what an exposure-response curve means; what other variables are held fixed (and at what levels) in estimating it; and how much inter-individual variability there is around population average exposure-response curves. These advances in conceptual clarity and practical computational methods not only enable epidemiologists and risk analysis practitioners to better quantify population and individual exposure-response curves but also challenge them to specify exactly what exposure-response relationships they seek to quantify and communicate to risk managers, and how to use the resulting information to improve risk management decisions.
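A toy simulation (our construction, with arbitrary parameters) of the inter-individual variability point: individual curves with hard personal thresholds average out to a smooth population curve, so a single fitted exposure-response curve is ambiguous about what it describes.

```python
# Individual curves have thresholds; the population average does not.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 101)                 # exposure levels
thresholds = rng.uniform(2, 8, size=1000)   # inter-individual variability

# Each individual's risk is 0 below a personal threshold, then rises linearly.
individual_risk = np.clip(x[None, :] - thresholds[:, None], 0, None) * 0.01
population_avg = individual_risk.mean(axis=0)

# The average curve looks smooth and threshold-free even though every
# individual curve has a hard threshold - one reason it is ambiguous what
# a single fitted exposure-response curve "means" for any given person.
print(population_avg[::20].round(4))
```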

3.
Glob Epidemiol ; 5: 100104, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37638367

ABSTRACT

Several recent news stories have alarmed many politicians and members of the public by reporting that indoor air pollution from gas stoves causes about 13% of childhood asthma in the United States. Research on the reproducibility and trustworthiness of epidemiological risk assessments has identified a number of common questionable research practices (QRPs) that should be avoided to draw sound causal conclusions from epidemiological data. Examples of such QRPs include claiming causation without using study designs or data analyses that allow valid causal inferences; generalizing or transporting risk estimates based on data for specific populations, time periods, and locations to different ones without accounting for differences in the study and target populations; claiming causation without discussing or quantitatively correcting for confounding, external validity bias, or other biases; and not mentioning or resolving contradictory evidence. We examine the recently estimated gas stove-childhood asthma associations from the perspective of these QRPs and conclude that they exemplify all of them. The quantitative claim that about 13% of childhood asthma in the United States could be prevented by reducing exposure to gas stove pollution is not supported by the data collected or by the measures of association (Population Attributable Fractions) used to analyze the data. The qualitative finding that reducing exposure to gas stove pollution would reduce the burden of childhood asthma in the United States has no demonstrated validity. Systematically checking how and whether QRPs have been addressed before reporting or responding to claims that everyday exposures cause substantial harm to health might reduce social amplification of perceived risks based on QRPs and help to improve the credibility and trustworthiness of published epidemiological risk assessments.
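For reference, the population attributable fraction behind headline figures like the 13% estimate is typically computed from Levin's formula, which is a pure function of association measures (notation ours):

```latex
% Levin's formula: p = exposure prevalence, RR = relative risk
% estimated from observational data.
\mathrm{PAF} \;=\; \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}
```

Reading PAF as the fraction of cases preventable by removing exposure requires exactly the causal assumptions the QRPs above leave unverified: a causal RR, corrected confounding, and transportability to the target population. None of these follows from the association itself.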

4.
Glob Epidemiol ; 5: 100102, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37638368

ABSTRACT

We present a Socratic dialogue with ChatGPT, a large language model (LLM), on the causal interpretation of epidemiological associations between fine particulate matter (PM2.5) and human mortality risks. ChatGPT, reflecting probable patterns of human reasoning and argumentation in the sources on which it has been trained, initially holds that "It is well-established that exposure to ambient levels of PM2.5 does increase mortality risk" and adds the unsolicited remark that "Reducing exposure to PM2.5 is an important public health priority." After patient questioning, however, it concludes that "It is not known with certainty that current ambient levels of PM2.5 increase mortality risk. While there is strong evidence of an association between PM2.5 and mortality risk, the causal nature of this association remains uncertain due to the possibility of omitted confounders." This revised evaluation of the evidence suggests the potential value of sustained questioning in refining and improving both the types of human reasoning and argumentation imitated by current LLMs and the reliability of the initial conclusions expressed by current LLMs.

5.
Crit Rev Toxicol ; 53(5): 311-325, 2023 05.
Article in English | MEDLINE | ID: mdl-37489873

ABSTRACT

In 2022, the US EPA published an important risk assessment concluding that "Compared to the current annual standard, meeting a revised annual standard with a lower level is estimated to reduce PM2.5-associated health risks in the 30 annually-controlled study areas by about 7-9% for a level of 11.0 µg/m3… and 30-37% for a level of 8.0 µg/m3." These are interventional causal predictions: they predict percentage reductions in mortality risks caused by different counterfactual reductions in fine particulate (PM2.5) levels. Valid causal predictions are possible if: (1) Study designs are used that can support valid causal inferences about the effects of interventions (e.g., quasi-experiments with appropriate control groups); (2) Appropriate causal models and methods are used to analyze the data; (3) Model assumptions are satisfied (at least approximately); and (4) Non-causal sources of exposure-response associations such as confounding, measurement error, and model misspecification are appropriately modeled and adjusted for. This paper examines two long-term mortality studies selected by the EPA to predict reductions in PM2.5-associated risk. Both papers use Cox proportional hazards (PH) models. For these models, none of these four conditions is satisfied, making it difficult to interpret or validate their causal predictions. Scientists, reviewers, regulators, and members of the public can benefit from more trustworthy and credible risk assessments and causal predictions by insisting that risk assessments supporting interventional causal conclusions be based on study designs, methods, and models that are appropriate for predicting effects caused by interventions.


Subject(s)
Air Pollutants , Air Pollution , Particulate Matter , Causality , Risk Assessment , Environmental Exposure
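A minimal sketch, on synthetic data, of the kind of Cox proportional hazards fit the reviewed studies rely on; it also shows how an omitted confounder alone can produce a significantly positive PM2.5 coefficient. It assumes the lifelines package, and the variable names are ours.

```python
# Synthetic illustration, not EPA's data: a Cox PH fit in which the "effect"
# of pm25 arises entirely from an omitted socioeconomic confounder.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(1)
n = 5000
ses = rng.normal(size=n)                    # socioeconomic confounder
pm25 = 10 + 2 * ses + rng.normal(size=n)    # exposure correlated with ses
hazard = np.exp(0.3 * ses)                  # ses, not pm25, drives risk
T = rng.exponential(1 / hazard)             # survival times
E = np.ones(n, dtype=int)                   # all events observed

df = pd.DataFrame({"T": T, "E": E, "pm25": pm25})   # ses omitted: confounding
CoxPHFitter().fit(df, duration_col="T", event_col="E").print_summary()
# pm25 gets a significantly positive coefficient purely through the omitted
# confounder, illustrating why conditions (1)-(4) above matter for causal claims.
```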
6.
Environ Res ; 230: 115607, 2023 08 01.
Article in English | MEDLINE | ID: mdl-36965793

ABSTRACT

This paper summarizes recent insights into causal biological mechanisms underlying the carcinogenicity of asbestos. It addresses their implications for the shapes of exposure-response curves and considers recent epidemiologic trends in malignant mesotheliomas (MMs) and lung fiber burden studies. Since the commercial amphiboles crocidolite and amosite pose the highest risk of MMs and contain high levels of iron, endogenous and exogenous pathways of iron injury and repair are discussed. Some practical implications of recent developments are that: (1) Asbestos-cancer exposure-response relationships should be expected to have non-zero background rates; (2) Evidence from inflammation biology and other sources suggests that there are exposure concentration thresholds below which exposures do not increase inflammasome-mediated inflammation or resulting inflammation-mediated cancer risks above background risk rates; and (3) The size of the suggested exposure concentration threshold depends both on the detailed time patterns of exposure on a time scale of hours to days and on the composition of asbestos fibers in terms of their physicochemical properties. These conclusions are supported by complementary strands of evidence including biomathematical modeling, cell biology and biochemistry of asbestos-cell interactions in vitro and in vivo, lung fiber burden analyses and epidemiology showing trends in human exposures and MM rates.


Subject(s)
Asbestos , Lung Neoplasms , Mesothelioma , Humans , Asbestos/toxicity , Mesothelioma/chemically induced , Mesothelioma/epidemiology , Lung Neoplasms/chemically induced , Lung Neoplasms/epidemiology , Lung/pathology , Asbestos, Amphibole/toxicity , Inflammation/metabolism
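A schematic concentration-response function consistent with points (1) and (2) above: a non-zero background rate plus a response that rises only above an exposure-concentration threshold. All parameter values are arbitrary placeholders, not estimates from the asbestos literature.

```python
# Hockey-stick concentration-response sketch with a background rate.
import numpy as np

def cr_risk(conc, background=1e-4, threshold=0.1, slope=5e-4):
    """Annual risk: background below threshold, linear increase above it."""
    return background + slope * np.maximum(conc - threshold, 0.0)

for c in [0.0, 0.05, 0.1, 0.5, 1.0]:   # fibers/cc, hypothetical units
    print(f"conc={c:4.2f}  risk={cr_risk(c):.6f}")
```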
7.
Environ Res ; 223: 115311, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36731597

ABSTRACT

How can and should epidemiologists and risk assessors assemble and present evidence for causation of mortality or morbidities by identified agents such as fine particulate matter or other air pollutants? As a motivating example, some scientists have warned recently that ammonia from the production of meat significantly increases human mortality rates in exposed populations by increasing the ambient concentration of fine particulate matter (PM2.5) in air. We reexamine the support for such conclusions, including quantitative calculations that attribute deaths to PM2.5 air pollution by applying associational results such as relative risks, odds ratios, or slope coefficients from regression models to predict the effects on mortality or morbidity of reducing PM2.5 exposures. Taking an outside perspective from the field of causal artificial intelligence (CAI), we conclude that these attribution calculations are methodologically unsound. They produce unreliable conclusions because they ignore an essential distinction between differences in outcomes observed at different levels of exposure and changes in outcomes caused by changing exposure. We find that multiple studies that have examined associations between changes over time in particulate exposure and mortality risk instead of differences in exposures and corresponding mortality risks have found no clear evidence that observed changes in exposure help to predict or explain subsequent changes in mortality risks. We conclude that there is no sound theoretical or empirical reason to believe that reducing ammonia emissions from farms has reduced or would reduce human mortality risks. More generally, applying CAI principles and methods can potentially improve current widespread practices of unsound causal inferences and policy-relevant causal claims that are made without the benefit of formal causal analysis in air pollution health effects research and in other areas of applied epidemiology and public health risk assessment.


Subject(s)
Air Pollutants , Air Pollution , Humans , Ammonia/toxicity , Artificial Intelligence , Air Pollutants/toxicity , Air Pollutants/analysis , Air Pollution/adverse effects , Air Pollution/analysis , Particulate Matter/toxicity , Particulate Matter/analysis , Environmental Exposure/analysis , Mortality
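A toy example (ours, not the paper's data) of the distinction the abstract stresses between differences and changes: cross-sectional exposure differences can correlate strongly with mortality through a fixed confounder even when changes in exposure produce no change in mortality.

```python
# Levels correlate; changes do not - because a fixed area-level confounder
# (here "poverty") drives both exposure and mortality.
import numpy as np

rng = np.random.default_rng(2)
n_areas = 500
poverty = rng.normal(size=n_areas)                      # fixed area confounder
pm_t1 = 10 + 2 * poverty + rng.normal(size=n_areas)
pm_t2 = pm_t1 - 2 + rng.normal(size=n_areas)            # exposures fall over time
mort_t1 = 8 + 1.5 * poverty + rng.normal(size=n_areas)  # poverty drives mortality
mort_t2 = 8 + 1.5 * poverty + rng.normal(size=n_areas)

print("levels:  corr(pm, mortality) =",
      round(np.corrcoef(pm_t1, mort_t1)[0, 1], 2))      # strong association
print("changes: corr(d_pm, d_mortality) =",
      round(np.corrcoef(pm_t2 - pm_t1, mort_t2 - mort_t1)[0, 1], 2))  # ~0
```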
8.
Entropy (Basel) ; 23(5)2021 May 13.
Article in English | MEDLINE | ID: mdl-34068183

ABSTRACT

For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs: the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user's plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive "System 1" decision-making in human psychology) and slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative "System 2" decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
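One of the named ingredients, value of information (VoI), can be illustrated with a hand-specified probability table and a two-action decision; the numbers are arbitrary.

```python
# Expected value of perfect information for a two-action decision.
P = {"ok": 0.8, "fault": 0.2}                      # P(state)
U = {("act", "ok"): 10, ("act", "fault"): -50,     # utility(action, state)
     ("wait", "ok"): 0,  ("wait", "fault"): 0}

def eu(action):                                    # expected utility
    return sum(P[s] * U[(action, s)] for s in P)

best_now = max(eu("act"), eu("wait"))              # decide before observing
best_informed = sum(P[s] * max(U[("act", s)], U[("wait", s)]) for s in P)
print("EU without info:", best_now)                # 0.0 (choose "wait")
print("VoI:", best_informed - best_now)            # 8.0: worth paying up to 8
```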

9.
Risk Anal ; 41(12): 2186-2195, 2021 12.
Article in English | MEDLINE | ID: mdl-33864291

ABSTRACT

Applying risk assessment and management tools to plutonium disposition is a long-standing challenge for the U.S. government. The science is complicated, which has helped push risk assessment and management tools in creative new directions. Yet communicating effectively about increasingly complicated risk-science issues like plutonium disposition requires careful planning and speakers who can address why specific tools are selected, the past record of applying these tools, why assumptions sometimes are applied instead of reliable data, and how uncertainty is characterized. Speakers addressing risk issues must also overcome obstacles in communication arising from expert-audience differences in knowledge and legal restrictions on disclosing information. This perspective seeks to highlight and illustrate five key risk questions about probabilistic risk assessment (PRA) and performance assessment (PA) in the context of managing plutonium defense nuclear waste: objectives, experience, gaps, transparency, and the difficulty of applying and communicating with each tool. While the general public needs to be involved, some issues require a level of expertise that is typically beyond local communities, and an expert panel should therefore support community access.

10.
Poult Sci ; 100(2): 635-642, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33518117

ABSTRACT

Do faster slaughter line speeds for young chickens increase risk of Salmonella contamination? We analyze data collected in 2018-2019 from 97 slaughter establishments processing young chickens to examine the extent to which differences in slaughter line speeds across establishments operating under the same inspection system explain observed differences in their microbial quality, specifically frequencies of positive Salmonella samples. A variety of off-the-shelf statistical and machine learning techniques applied to the data to identify and visualize correlations and potential causal relationships among variables showed that the presence of Salmonella and other indicators of process control, such as noncompliance records for regulations associated with process control and food safety, is not significantly increased in establishments with higher line speeds (e.g., above 140 birds per min) compared with establishments with lower line speeds when establishments are operating under the conditions present in this study. This included some establishments operating under specific criteria to obtain a waiver for line speed. A null hypothesis advanced over 30 years ago by the National Research Council, that increased line speeds result in a product that is not contaminated more often than before line speeds were increased, appears fully consistent with these recent data.


Subject(s)
Abattoirs , Chickens , Food Contamination , Food Safety , Salmonella Food Poisoning/etiology , Salmonella/growth & development , Abattoirs/standards , Abattoirs/trends , Animals , Food Microbiology , Risk Factors , Time Factors
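The establishment-level comparison described above amounts to contrasting positive-sample fractions between higher- and lower-line-speed groups, for example with a Fisher exact test. The counts below are made up, not the FSIS data.

```python
# Do higher-line-speed establishments have a significantly higher fraction
# of positive Salmonella samples? Synthetic counts for illustration.
from scipy.stats import fisher_exact

# rows: [positive, negative] Salmonella samples
high_speed = [18, 482]   # establishments above 140 birds/min (made-up counts)
low_speed  = [21, 479]

odds_ratio, p_value = fisher_exact([high_speed, low_speed])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# A non-significant result here is consistent with - but does not prove -
# the "no increased contamination" null hypothesis discussed above.
```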
12.
Glob Epidemiol ; 3: 100064, 2021 Nov.
Article in English | MEDLINE | ID: mdl-37635719

ABSTRACT

We argue that population attributable fractions, probabilities of causation, burdens of disease, and similar association-based measures often do not provide valid estimates or surrogates for the fraction or number of disease cases that would be prevented by eliminating or reducing an exposure, because their calculations do not include crucial mechanistic information. We use a thought experiment with a cascade of dominoes to illustrate the need for mechanistic information when answering questions about how changing exposures changes risk. We suggest that modern methods of causal artificial intelligence (CAI) can fill this gap: they can complement and extend traditional epidemiological attribution calculations to provide information useful for risk management decisions.
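Here is one way (ours) to code the domino thought experiment: association-based attribution assigns 100% of cases to the push and 100% to a middle domino simultaneously, while the mechanistic fact that removing the middle domino prevents every case is invisible to those measures.

```python
# Domino cascade: push -> A -> B -> outcome.
import numpy as np

rng = np.random.default_rng(3)
push = rng.random(100_000) < 0.1               # initiating exposure

def outcome(push, remove_B=False):
    A = push                                   # domino A falls iff pushed
    B = np.zeros_like(A) if remove_B else A    # domino B falls iff A does
    return B                                   # outcome occurs iff B falls

y = outcome(push)
print("P(outcome | push) =", y[push].mean())   # 1.0: 100% "attributable" to push
# The same calculation attributes 100% of cases to domino B as well; the
# attributions sum to more than 100% because they ignore the mechanism.
print("cases after removing domino B:", int(outcome(push, remove_B=True).sum()))
```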

13.
Glob Epidemiol ; 3: 100052, 2021 Nov.
Article in English | MEDLINE | ID: mdl-37635718

ABSTRACT

Causal inference regarding exposures to ambient fine particulate matter (PM2.5) and mortality estimated from observational studies is limited by confounding, among other factors. In light of a variety of causal inference frameworks and methods that have been developed over the past century to specifically quantify causal effects, three research teams were selected in 2016 to evaluate the causality of PM2.5-mortality association among Medicare beneficiaries, using their own selections of causal inference methods and study designs but the same data sources. With a particular focus on controlling for unmeasured confounding, two research teams adopted an instrumental variables approach under a quasi-experiment or natural experiment study design, whereas one team adopted a structural nested mean model under the traditional cohort study design. All three research teams reported results supporting an estimated counterfactual causal relationship between ambient PM2.5 and all-cause mortality, and their estimated causal relationships are largely of similar magnitudes to recent epidemiological studies based on regression analyses with omitted potential confounders. The causal methods used by all three research teams were built upon the potential outcomes framework. This framework has marked conceptual advantages over regression-based methods in addressing confounding and yielding unbiased estimates of average treatment effect in observational epidemiological studies. However, potential violations of the unverifiable assumptions underlying each causal method leave the results from all three studies subject to biases. We also note that the studies are not immune to some other common sources of bias, including exposure measurement errors, ecological study design, model uncertainty and specification errors, and irrelevant exposure windows, that can undermine the validity of causal inferences in observational studies. As a result, despite some apparent consistency of study results from the three research teams with the wider epidemiological literature on PM2.5-mortality statistical associations, caution seems warranted in drawing causal conclusions from the results. A possible way forward is to improve study design and reduce dependence of conclusions on untested assumptions by complementing potential outcomes methods with structural causal modeling and information-theoretic methods that emphasize empirically tested and validated relationships.
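A schematic two-stage least squares (2SLS) calculation, the core of the instrumental-variables designs two of the teams used; the data and instrument here are simulated, and real analyses are far richer.

```python
# 2SLS in miniature: the instrument z shifts exposure x but affects the
# outcome y only through x, so stage-2 recovers the causal slope (0.5)
# despite the unmeasured confounder u - if the IV assumptions hold.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unmeasured confounder
x = 0.8 * z + u + rng.normal(size=n)         # exposure (PM2.5-like)
y = 0.5 * x + 2.0 * u + rng.normal(size=n)   # true causal effect = 0.5

naive = np.polyfit(x, y, 1)[0]               # biased by confounding (~1.3)
x_hat = np.polyfit(z, x, 1)[0] * z           # stage 1: exposure explained by z
iv = np.polyfit(x_hat, y, 1)[0]              # stage 2: ~0.5 if assumptions hold
print(f"naive OLS slope = {naive:.2f}, 2SLS slope = {iv:.2f}")
```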

14.
Glob Epidemiol ; 3: 100065, 2021 Nov.
Article in English | MEDLINE | ID: mdl-37635727

ABSTRACT

Population attributable fraction (PAF), probability of causation, burden of disease, and related quantities derived from relative risk ratios are widely used in applied epidemiology and health risk analysis to quantify the extent to which reducing or eliminating exposures would reduce disease risks. This causal interpretation conflates association with causation. It has sometimes led to demonstrably mistaken predictions and ineffective risk management recommendations. Causal artificial intelligence (CAI) methods developed at the intersection of many scientific disciplines over the past century instead use quantitative high-level descriptions of networks of causal mechanisms (typically represented by conditional probability tables or structural equations) to predict the effects caused by interventions. We summarize these developments and discuss how CAI methods can be applied to realistically imperfect data and knowledge, e.g., with unobserved (latent) variables, missing data, measurement errors, interindividual heterogeneity in exposure-response functions, and model uncertainty. We conclude that CAI methods can help to improve the conceptual foundations and practical value of epidemiological calculations by replacing association-based attributions of risk to exposures or other risk factors with causal predictions of the changes in health effects caused by interventions.
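What "predicting the effects of interventions from conditional probability tables" means can be shown with back-door adjustment over a single confounder; the CPT numbers are invented for illustration.

```python
# Back-door adjustment: P(Y=1 | do(X=x)) = sum_c P(Y=1 | x, c) * P(c).
P_C = {0: 0.7, 1: 0.3}                       # P(C): confounder distribution
P_Y = {(0, 0): 0.05, (0, 1): 0.20,           # P(Y=1 | X=x, C=c)
       (1, 0): 0.10, (1, 1): 0.30}

def p_do(x):
    """Interventional risk under do(X=x), averaging over the confounder."""
    return sum(P_Y[(x, c)] * P_C[c] for c in P_C)

print("P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)) =", round(p_do(1) - p_do(0), 3))
# Contrast: PAF-style calculations use P(Y | X) from the joint distribution,
# which mixes this causal effect with the X-C association.
```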

15.
Risk Anal ; 40(S1): 2144-2177, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33000494

ABSTRACT

Decision analysis and risk analysis have grown up around a set of organizing questions: what might go wrong, how likely is it to do so, how bad might the consequences be, what should be done to maximize expected utility and minimize expected loss or regret, and how large are the remaining risks? In probabilistic causal models capable of representing unpredictable and novel events, probabilities for what will happen, and even what is possible, cannot necessarily be determined in advance. Standard decision and risk analysis questions become inherently unanswerable ("undecidable") for realistically complex causal systems with "open-world" uncertainties about what exists, what can happen, what other agents know, and how they will act. Recent artificial intelligence (AI) techniques enable agents (e.g., robots, drone swarms, and automatic controllers) to learn, plan, and act effectively despite open-world uncertainties in a host of practical applications, from robotics and autonomous vehicles to industrial engineering, transportation and logistics automation, and industrial process control. This article offers an AI/machine learning perspective on recent ideas for making decision and risk analysis (even) more useful. It reviews undecidability results and recent principles and methods for enabling intelligent agents to learn what works and how to complete useful tasks, adjust plans as needed, and achieve multiple goals safely and reasonably efficiently when possible, despite open-world uncertainties and unpredictable events. In the near future, these principles could contribute to the formulation and effective implementation of more effective plans and policies in business, regulation, and public policy, as well as in engineering, disaster management, and military and civil defense operations. They can extend traditional decision and risk analysis to deal more successfully with open-world novelty and unpredictable events in large-scale real-world planning, policymaking, and risk management.

16.
Crit Rev Toxicol ; 50(7): 539-550, 2020 08.
Article in English | MEDLINE | ID: mdl-32903110

ABSTRACT

We examine how Bayesian network (BN) learning and analysis methods can help to meet several methodological challenges that arise in interpreting significant regression coefficients in exposure-response regression modeling. As a motivating example, we consider the challenge of interpreting positive regression coefficients for blood lead level (BLL) as a predictor of mortality risk for nonsmoking men. We first note that practices such as dichotomizing or categorizing continuous confounders (e.g. income), omitting potentially important socioeconomic confounders (e.g. education), and assuming specific parametric regression model forms leave unclear to what extent a positive regression coefficient reflects these modeling choices, rather than a direct dependence of mortality risk on exposure. Therefore, significant exposure-response coefficients in parametric regression models do not necessarily reveal the extent to which reducing exposure-related variables (e.g. BLL) alone, while leaving fixed other correlates of exposure and mortality risks (e.g. education, income, etc.) would reduce adverse outcome risks (e.g. mortality risks). We then consider how BN structure-learning and inference algorithms and nonparametric estimation methods (partial dependence plots) can be used to clarify dependencies between variables, variable selection, confounding, and quantification of joint effects of multiple factors on risk, including possible high-order interactions and nonlinearities. We conclude that these details must be carefully modeled to determine whether a data set provides evidence that exposure itself directly affects risks; and that BN and nonparametric effect estimation and uncertainty quantification methods can complement regression modeling and help to improve the scientific basis for risk management decisions and policy-making by addressing these issues.


Subject(s)
Environmental Exposure/statistics & numerical data , Environmental Pollution/statistics & numerical data , Lead , Bayes Theorem , Humans
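A sketch of the advocated workflow on synthetic data: fit a flexible nonparametric model, then examine the partial dependence of risk on the exposure variable with correlated socioeconomic variables held at their observed values. It assumes scikit-learn, and the variable names (bll, income, education) are placeholders.

```python
# Partial dependence separates the exposure's own contribution from what it
# borrows from correlated socioeconomic variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(5)
n = 5000
education = rng.normal(size=n)
income = 0.6 * education + rng.normal(size=n)
bll = -0.5 * income + rng.normal(size=n)         # exposure tied to SES
risk = 1 / (1 + np.exp(2.0 * income))            # income drives mortality here
y = rng.random(n) < risk

X = np.column_stack([bll, income, education])
model = GradientBoostingClassifier().fit(X, y)
pd_result = partial_dependence(model, X, features=[0])   # effect of bll alone
print(pd_result["average"][0].round(3))
# Much flatter than the marginal bll-risk association, because income is
# held fixed rather than allowed to vary with bll.
```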
17.
Glob Epidemiol ; 2: 100033, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32905083

ABSTRACT

In the first half of 2020, much excitement in news media and some peer-reviewed scientific articles was generated by the discovery that fine particulate matter (PM2.5) concentrations and COVID-19 mortality rates are statistically significantly positively associated in some regression models. This article points out that they are non-significantly negatively associated in other regression models, once omitted confounders (such as latitude and longitude) are included. More importantly, positive regression coefficients can and do arise when (generalized) linear regression models are applied to data with strong nonlinearities, including data on PM2.5, population density, and COVID-19 mortality rates, due to model specification errors. In general, statistical modeling accompanied by judgments about causal interpretations of statistical associations and regression coefficients (the weight-of-evidence (WoE) approach currently favored in much regulatory risk analysis for air pollutants) is not a valid basis for determining whether or to what extent risk of harm to human health would be reduced by reducing exposure. The traditional scientific method based on testing predictive generalizations against data remains a more reliable paradigm for risk analysis and risk management.
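A toy version (ours) of the first point: a significantly positive PM2.5 coefficient that vanishes once an omitted spatial confounder enters the model. It assumes the statsmodels package.

```python
# Omitted-confounder demonstration with synthetic data.
import numpy as np
import statsmodels.api as sm   # pip install statsmodels

rng = np.random.default_rng(6)
n = 3000
latitude = rng.uniform(25, 49, size=n)               # spatial confounder
pm25 = 20 - 0.3 * latitude + rng.normal(size=n)      # dirtier air at low latitude
mort = 5 - 0.08 * latitude + rng.normal(size=n)      # latitude drives mortality

m1 = sm.OLS(mort, sm.add_constant(pm25)).fit()
m2 = sm.OLS(mort, sm.add_constant(np.column_stack([pm25, latitude]))).fit()
print("pm25 only:     coef=%.3f p=%.2g" % (m1.params[1], m1.pvalues[1]))
print("with latitude: coef=%.3f p=%.2g" % (m2.params[1], m2.pvalues[1]))
# The "significant" pm25 coefficient in m1 reflects geography, not exposure.
```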

18.
Environ Res ; 187: 109638, 2020 08.
Article in English | MEDLINE | ID: mdl-32450424

ABSTRACT

Recent advances in understanding of biological mechanisms and adverse outcome pathways for many exposure-related diseases show that certain common mechanisms involve thresholds and nonlinearities in biological exposure concentration-response (C-R) functions. These range from ultrasensitive molecular switches in signaling pathways, to assembly and activation of inflammasomes, to rupture of lysosomes and pyroptosis of cells. Realistic dose-response modeling and risk analysis must confront the reality of nonlinear C-R functions. This paper reviews several challenges for traditional statistical regression modeling of C-R functions with thresholds and nonlinearities, together with methods for overcoming them. Statistically significantly positive exposure-response regression coefficients can arise from many non-causal sources such as model specification errors, incompletely controlled confounding, exposure estimation errors, attribution of interactions to factors, associations among explanatory variables, or coincident historical trends. If so, the unadjusted regression coefficients do not necessarily predict how or whether reducing exposure would reduce risk. We discuss statistical options for controlling for such threats, and advocate causal Bayesian networks and dynamic simulation models as potentially valuable complements to nonparametric regression modeling for assessing causally interpretable nonlinear C-R functions and understanding how time patterns of exposures affect risk. We conclude that these approaches are promising for extending the great advances made in statistical C-R modeling methods in recent decades to clarify how to design regulations that are more causally effective in protecting human health.


Subject(s)
Air Pollution , Bayes Theorem , Environmental Exposure/analysis , Humans , Regression Analysis , Risk
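One of the reviewed challenges in miniature: a global linear fit to data generated with a threshold reports a positive slope everywhere, while crude binned means, a simple nonparametric check, recover the flat region below the threshold. Synthetic data and arbitrary units.

```python
# Linear fit vs. binned means on threshold data.
import numpy as np

rng = np.random.default_rng(7)
c = rng.uniform(0, 10, 4000)                        # exposure concentration
risk = 0.02 + 0.01 * np.maximum(c - 6, 0) + rng.normal(0, 0.01, c.size)

slope = np.polyfit(c, risk, 1)[0]
print(f"global linear slope: {slope:.4f} (misleadingly nonzero below 6)")
bins = np.linspace(0, 10, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (c >= lo) & (c < hi)
    print(f"[{lo:4.1f},{hi:4.1f}) mean risk = {risk[sel].mean():.4f}")
```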
19.
Regul Toxicol Pharmacol ; 114: 104663, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32330641

ABSTRACT

Inflammasomes are a family of pro-inflammatory signaling complexes that orchestrate inflammatory responses in many tissues. The NLRP3 inflammasome has been implicated in several diseases associated with chronic inflammation. In this paper, we present an Adverse Outcome Pathway (AOP) for NLRP3-induced chronic inflammatory diseases that demonstrates how NLRP3 can cause a transition from acute to chronic inflammation, and ultimately the onset of disease. We present a simple graphical description of the main features of internal dose time courses that are important when pharmacodynamics are governed by an activation threshold. Similar considerations hold for other AOPs that are rate-limited by processes with activation thresholds. The risk analysis implications of AOPs with threshold or threshold-like pharmacodynamic responses include the need to consider how cumulative dose per unit time is distributed over time and the possibility that safe, or virtually safe, exposure concentrations can be defined for such processes.


Subject(s)
Inflammation/metabolism , Chronic Disease , Humans , Inflammasomes/metabolism , NLR Family, Pyrin Domain-Containing 3 Protein/metabolism , Risk Assessment
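A schematic, uncalibrated simulation of why the time pattern of dosing matters under an activation threshold: the same cumulative weekly dose, delivered as a single spike versus spread evenly, yields very different time-above-threshold for an internal dose with first-order clearance. All parameters are arbitrary, not a calibrated NLRP3 model.

```python
# Same cumulative dose, different time patterns, different threshold exceedance.
import numpy as np

def internal_dose(doses, half_life=8.0, dt=1.0):
    """First-order clearance: C[t+1] = C[t] * decay + dose[t]."""
    decay = 0.5 ** (dt / half_life)
    c, out = 0.0, []
    for d in doses:
        c = c * decay + d
        out.append(c)
    return np.array(out)

hours = 168                                  # one week, hourly steps
spike = np.zeros(hours); spike[0] = 168.0    # whole weekly dose at once
even = np.ones(hours)                        # same total, spread evenly
threshold = 20.0                             # activation threshold (arbitrary)

for name, doses in [("spike", spike), ("even", even)]:
    c = internal_dose(doses)
    print(name, "hours above threshold:", int((c > threshold).sum()))
# The spike spends ~a day above threshold; the even pattern never crosses it.
```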
20.
Risk Anal ; 40(6): 1244-1257, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32315459

ABSTRACT

Virginiamycin (VM), a streptogramin antibiotic, has been used to promote healthy growth and treat illnesses in farm animals in the United States and other countries. The combination streptogramin Quinupristin-Dalfopristin (QD) was approved in the United States in 1999 for treating patients with vancomycin-resistant Enterococcus faecium (VREF) infections. Many chickens and swine test positive for QD-resistant E. faecium, raising concerns that using VM in food animals might select for streptogramin-resistant strains of E. faecium that could compromise QD effectiveness in treating human VREF infections. Such concerns have prompted bans and phase-outs of VM as growth promoters in the United States and Europe. This study quantitatively estimates potential human health risks from QD-resistant VREF infections due to VM use in food animals in China. Plausible conservative (risk-maximizing) quantitative risk estimates are derived for future uses, assuming 100% resistance to linezolid and daptomycin and 100% prescription rate of QD to high-level (VanA) VREF-infected patients. VM use in animals might cause at most roughly one shortened life in China every few decades to every few thousand years, although the most likely risk is zero (e.g., if resistance is not transferred from bacteria in food animals to bacteria infecting human patients). Sensitivity and probabilistic uncertainty analyses suggest that this conclusion is robust to several data gaps and uncertainties. Potential future human health risks from VM use in animals in China appear to be small or zero, even if QD is eventually approved for use in human patients.


Subject(s)
Anti-Bacterial Agents/toxicity , Vancomycin-Resistant Enterococci/drug effects , Virginiamycin/toxicity , Animals , Anti-Bacterial Agents/administration & dosage , Anti-Bacterial Agents/pharmacology , Chickens , China , Humans , Meat Products/microbiology , Microbial Sensitivity Tests , Virginiamycin/administration & dosage
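The conservative chain-of-events style of bound described above multiplies worst-case factors along the causal pathway; every number below is a placeholder, not a value from the study.

```python
# Multiplying upper-bound factors yields an upper-bound (risk-maximizing) risk.
exposed_servings_per_year = 1e10     # hypothetical national consumption
p_resistant_contamination = 1e-4     # upper bound per serving
p_transfer_to_human_strain = 1e-5    # upper bound on resistance transfer
p_vref_patient_given_qd = 1e-3       # upper bound patient-level factor
p_treatment_failure_harm = 1e-2      # upper bound on added harm if resistant

upper_bound_cases_per_year = (exposed_servings_per_year
                              * p_resistant_contamination
                              * p_transfer_to_human_strain
                              * p_vref_patient_given_qd
                              * p_treatment_failure_harm)
print(f"upper-bound harmed patients/year <= {upper_bound_cases_per_year:.1e}")
# With these placeholders, ~1e-4/year, i.e. about one case every 10,000 years;
# the paper's actual parameters and bounds differ.
```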