Results 1 - 20 of 30
1.
Water Res ; 259: 121877, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38870891

ABSTRACT

When assessing risk posed by waterborne pathogens in drinking water, it is common to use Monte Carlo simulations in Quantitative Microbial Risk Assessment (QMRA). This method accounts for the variables that affect risk and their different values in a given system. A common underlying assumption in such analyses is that all random variables are independent (i.e., one is not associated in any way with another). Although the independence assumption simplifies the analysis, it is not always correct. For example, treatment efficiency can depend on microbial concentrations if changes in microbial concentrations either affect treatment themselves or are associated with water quality changes that affect treatment (e.g., during/after climate shocks like extreme precipitation events or wildfires). Notably, the effects of erroneous assumptions of independence in QMRA have not been widely discussed. Due to the implications of drinking water safety decisions on public health protection, it is critical that risk models accurately reflect the context being studied to meaningfully support decision-making. This work illustrates how dependence between pathogen concentration and either treatment efficiency or water consumption can impact risk estimates using hypothetical scenarios of relevance to drinking water QMRA. It is shown that the mean and variance of risk estimates can change substantially with different degrees of correlation. Data from a water supply system in Calgary, Canada are also used to illustrate the effect of dependence on risk. Recognizing the difficulty of obtaining data to empirically assess dependence, a framework to guide evaluation of the effect of dependence is presented to enhance support for decision making. This work emphasizes the importance of acknowledging and discussing assumptions implicit to models.
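
To make the dependence issue concrete, the sketch below (not the paper's model; the distributions, parameter values, and Gaussian-copula construction are all illustrative assumptions) shows one way to impose correlation between pathogen concentration and treatment log-reduction in a Monte Carlo QMRA. Setting rho to zero recovers the conventional independence assumption for comparison.

```python
# Illustrative sketch: correlated Monte Carlo sampling for QMRA via a
# Gaussian copula. All distributions and parameters are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000
rho = -0.6  # assumed: treatment performs worse when concentrations spike

# Correlated uniforms from a Gaussian copula, then map to chosen marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)
conc = stats.lognorm(s=1.5, scale=1.0).ppf(u[:, 0])   # pathogens per litre
log_red = stats.norm(4.0, 0.5).ppf(u[:, 1])           # treatment log-reduction

dose = conc * 10.0 ** (-log_red) * 1.0                # 1 L/day consumption
risk = 1.0 - np.exp(-0.09 * dose)                     # exponential dose-response
print(f"mean daily infection risk (rho = {rho}): {risk.mean():.2e}")
```

Re-running with rho = 0 isolates how much of the estimated mean risk is attributable to the assumed dependence rather than to the marginal distributions themselves.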


Subject(s)
Decision Making , Drinking Water , Monte Carlo Method , Drinking Water/microbiology , Risk Assessment , Water Microbiology , Water Supply , Models, Theoretical , Water Purification
2.
Anal Chem ; 96(16): 6245-6254, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38593420

ABSTRACT

Wastewater treatment plants (WWTPs) serve a pivotal role in transferring microplastics (MPs) from wastewater to sludge streams, thereby exerting a significant influence on their release into the environment and establishing wastewater and biosolids as vectors for MP transport and delivery. Hence, an accurate understanding of the fate and transport of MPs in WWTPs is vital. Enumeration is commonly used to estimate concentrations of MPs in performance evaluations of treatment processes, and risk assessment also typically involves MP enumeration. However, achieving high accuracy in concentration estimates is challenging due to inherent uncertainty in the analytical workflow to collect and process samples and count MPs. Here, sources of random error in MP enumeration in wastewater and other matrices were investigated using a modeling approach that addresses the sources of error associated with each step of the analysis. In particular, losses are reflected in data analysis rather than merely being measured as a validation step for MP extraction methods. A model for addressing uncertainty in the enumeration of microorganisms in water was adapted to include key assumptions relevant to the enumeration of MPs in wastewater. Critically, analytical recovery, the capacity to successfully enumerate particles considering losses and counting error, may be variable among MPs due to differences in size, shape, and type (differential analytical recovery) in addition to random variability between samples (nonconstant analytical recovery). Accordingly, differential analytical recovery among the categories of MPs was added to the existing model. This model was illustratively applied to estimate MP concentrations from simulated data and quantify uncertainty in the resulting estimates. Increasing the number of replicates, counting categories of MPs separately, and accounting for both differential and nonconstant analytical recovery improved the accuracy of MP enumeration. This work contributes to developing guidelines for analytical procedures quantifying MPs in diverse types of samples and provides a framework for enhanced interpretation of enumeration data, thereby facilitating the collection of more accurate and reliable MP data in environmental studies.
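
As an illustration of the kind of observation process the abstract describes, the sketch below simulates microplastic counts subject to both nonconstant (sample-to-sample) and differential (category-specific) analytical recovery; the concentrations, volumes, and recovery distributions are hypothetical, not the paper's fitted values.

```python
# Illustrative simulation of MP counting with nonconstant and
# category-differential analytical recovery (all values hypothetical).
import numpy as np

rng = np.random.default_rng(7)
true_conc = {"fibre": 12.0, "fragment": 30.0}            # particles per litre
volume = 0.5                                             # litres per replicate
recovery_beta = {"fibre": (8, 12), "fragment": (18, 2)}  # recovery distributions

for category, c in true_conc.items():
    counts = []
    for _ in range(10):                                  # 10 replicates
        n_particles = rng.poisson(c * volume)            # particles in the sample
        p_recovery = rng.beta(*recovery_beta[category])  # sample-specific recovery
        counts.append(rng.binomial(n_particles, p_recovery))
    a, b = recovery_beta[category]
    mean_recovery = a / (a + b)
    naive = np.mean(counts) / volume                     # ignores losses
    corrected = naive / mean_recovery                    # accounts for mean recovery
    print(f"{category}: naive {naive:.1f}/L, recovery-corrected {corrected:.1f}/L")
```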

3.
J Clin Med ; 12(15)2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37568301

ABSTRACT

BACKGROUND: The demand for transvenous lead extraction (TLE) has increased. In line with this, the safety of such procedures has also increased. Traditionally, TLE is performed under resource-intensive general anaesthesia. This study aims to evaluate the safety and outcomes of cardiologist-led deep sedation for TLE. METHODS: We retrospectively analysed 328 TLE procedures performed under deep sedation from 2016 to 2019. TLE procedures were performed by experienced electrophysiologists. Sedation was administered by a specifically trained cardiologist (bolus midazolam/fentanyl and propofol infusion). Procedural sedation data including blood pressure, medication administration and sedation time were collected. Complications related to sedation and the operative component of the procedure were analysed retrospectively. RESULTS: The sedation-associated complication rate during TLE was 22.0%. The most common complication (75% of complications) was hypotension requiring noradrenaline, followed by bradycardia requiring atropine (13% of complications). Additionally, the unplanned involvement of an anaesthesiologist was required in one case (0.3%). Deep sedation was achieved with midazolam (mean dose 42.9 ± 26.5 µg/kg), fentanyl (mean dose 0.4 ± 0.6 µg/kg) and propofol (mean dose 3.5 ± 1.2 mg/kg/h). There was no difference in medication dosage between those with a sedation-associated complication and those without. Sedation-associated complications occurred significantly more frequently in patients with reduced LVEF (p = 0.01), renal impairment (p = 0.01) and a higher American Society of Anaesthesiologists (ASA) class (p = 0.01). CONCLUSION: Deep sedation for TLE can be safely performed by a specifically trained cardiologist, with a transition to general anaesthesia required in only 0.3% of cases. We continue to recommend the on-call availability of an anaesthesiologist and cardiac surgeon in case of major complications.

4.
Front Microbiol ; 14: 1048661, 2023.
Article in English | MEDLINE | ID: mdl-36937263

ABSTRACT

The real-time polymerase chain reaction (PCR), commonly known as quantitative PCR (qPCR), is increasingly common in environmental microbiology applications. During the COVID-19 pandemic, qPCR combined with reverse transcription (RT-qPCR) has been used to detect and quantify SARS-CoV-2 in clinical diagnoses and wastewater monitoring of local trends. Estimation of concentrations using qPCR often features a log-linear standard curve model calibrating quantification cycle (Cq) values obtained from underlying fluorescence measurements to standard concentrations. This process works well at high concentrations within a linear dynamic range but has diminishing reliability at low concentrations because it cannot explain "non-standard" data such as Cq values reflecting increasing variability at low concentrations or non-detects that do not yield Cq values at all. Here, fundamental probabilistic modeling concepts from classical quantitative microbiology were integrated into standard curve modeling approaches by reflecting well-understood mechanisms for random error in microbial data. This work showed that data diverging from the log-linear regression model at low concentrations as well as non-detects can be seamlessly integrated into enhanced standard curve analysis. The newly developed model provides improved representation of standard curve data at low concentrations while converging asymptotically upon conventional log-linear regression at high concentrations and adding no fitting parameters. Such modeling facilitates exploration of the effects of various random error mechanisms in experiments generating standard curve data, enables quantification of uncertainty in standard curve parameters, and is an important step toward quantifying uncertainty in qPCR-based concentration estimates. Improving understanding of the random error in qPCR data and standard curve modeling is especially important when low concentrations are of particular interest and inappropriate analysis can unduly affect interpretation, conclusions regarding lab performance, reported concentration estimates, and associated decision-making.
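
A minimal sketch of the idea, with assumed parameter values rather than the paper's model: copies delivered to each reaction are Poisson-distributed around the nominal standard concentration, reactions that receive zero copies are non-detects, and Cq is log-linear (with noise) in the copies actually present, so variability and non-detects grow as concentrations fall.

```python
# Illustrative Poisson-augmented standard curve (assumed parameters).
import numpy as np

rng = np.random.default_rng(3)
slope, intercept, sigma = -3.4, 38.0, 0.25       # assumed log-linear parameters

def simulate_cq(mean_copies, n_reps=8):
    copies = rng.poisson(mean_copies, size=n_reps)
    cq = np.where(copies > 0,
                  intercept + slope * np.log10(np.clip(copies, 1, None))
                  + rng.normal(0, sigma, n_reps),
                  np.nan)                          # NaN marks a non-detect
    return cq

for mean_copies in (10_000, 100, 5, 1):
    cq = simulate_cq(mean_copies)
    detected = cq[~np.isnan(cq)]
    print(mean_copies, "copies:", f"{len(detected)}/8 detected,",
          f"mean Cq {detected.mean():.1f}" if len(detected) else "all non-detects")
```

At high concentrations the simulated Cq values collapse onto the conventional log-linear curve; at low concentrations the Poisson mechanism alone generates the extra scatter and the non-detects that the standard regression cannot explain.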

5.
Am J Cardiol ; 176: 51-57, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35613955

ABSTRACT

The rate of transvenous lead extraction (TLE) is increasing, with an increasing rate of complex devices being implanted. TLE is now a routine part of cardiac device management, and up-to-date data on the safety and efficacy of TLE with modern tools and techniques are essential to management decisions regarding noninfectious indications for lead extraction. We present a contemporary, prospective review of TLE at our high-volume cardiac center. All patients who underwent TLE from June 2016 to June 2019 were enrolled in our local database, and baseline clinical data, procedural information, and outcome data were collected. In total, 561 leads were explanted (n = 153) or extracted (n = 408) from 341 patients over the study period. Patients were predominantly male (71%), with a mean age of 65 ± 17 years. The most common indication for lead removal was lead failure (45.2%, n = 154), followed by infection of the pocket or device (29.3%, n = 100). In total, complete success was achieved in 96.4% (n = 541) of leads, clinical success in a further 2.1% (n = 12), and failure in only 1.4% (n = 8). There was an overall complication rate of 0.9% (3/341) for major complications and 1.5% (5/341) for minor complications. There were no deaths. In conclusion, our data suggest that there are ongoing improvements in the safety profile and success rates of lead extraction undertaken by experienced operators. The major complication rate is now <1%.


Subject(s)
Defibrillators, Implantable , Pacemaker, Artificial , Aged , Aged, 80 and over , Device Removal/methods , Equipment Failure , Female , Humans , Longitudinal Studies , Male , Middle Aged , Prospective Studies , Retrospective Studies , Treatment Outcome
6.
Front Microbiol ; 13: 728146, 2022.
Article in English | MEDLINE | ID: mdl-35300475

ABSTRACT

Diversity analysis of amplicon sequencing data has mainly been limited to plug-in estimates calculated using normalized data to obtain a single value of an alpha diversity metric or a single point on a beta diversity ordination plot for each sample. As recognized for count data generated using classical microbiological methods, amplicon sequence read counts obtained from a sample are random data linked to source properties (e.g., proportional composition) by a probabilistic process. Thus, diversity analysis has focused on diversity exhibited in (normalized) samples rather than probabilistic inference about source diversity. This study applies fundamentals of statistical analysis for quantitative microbiology (e.g., microscopy, plating, and most probable number methods) to sample collection and processing procedures of amplicon sequencing methods to facilitate inference reflecting the probabilistic nature of such data and evaluation of uncertainty in diversity metrics. Following description of types of random error, mechanisms such as clustering of microorganisms in the source, differential analytical recovery during sample processing, and amplification are found to invalidate a multinomial relative abundance model. The zeros often abounding in amplicon sequencing data and their implications are addressed, and Bayesian analysis is applied to estimate the source Shannon index given unnormalized data (both simulated and experimental). Inference about source diversity is found to require knowledge of the exact number of unique variants in the source, which is practically unknowable due to library size limitations and the inability to differentiate zeros corresponding to variants that are actually absent in the source from zeros corresponding to variants that were merely not detected. Given these problems with estimation of diversity in the source even when the basic multinomial model is valid, diversity analysis at the level of samples with normalized library sizes is discussed.
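
The gap between source diversity and sample-level plug-in estimates can be illustrated with a simple multinomial simulation (the community composition below is hypothetical, and this is a sketch of the basic multinomial sampling model rather than the paper's Bayesian analysis):

```python
# Illustrative comparison of a source Shannon index with plug-in estimates
# computed from multinomial read counts at different library sizes.
import numpy as np

rng = np.random.default_rng(11)
source_props = np.array([0.5, 0.2, 0.1, 0.05] + [0.15 / 30] * 30)  # sums to 1
source_shannon = -np.sum(source_props * np.log(source_props))

for library_size in (100, 1_000, 50_000):
    counts = rng.multinomial(library_size, source_props)
    props = counts[counts > 0] / library_size   # undetected variants drop out
    plug_in = -np.sum(props * np.log(props))
    print(f"library {library_size}: plug-in Shannon {plug_in:.3f} "
          f"(source {source_shannon:.3f})")
```

Even under this idealized multinomial model, small libraries miss rare variants entirely, so the plug-in estimate describes the sample rather than the source.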

7.
Sci Rep ; 12(1): 1608, 2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35102197

ABSTRACT

Nano-electromechanical systems implement the optomechanical interaction by combining electromagnetic circuits and mechanical elements. We investigate an inductively coupled nano-electromechanical system, where a superconducting quantum interference device (SQUID) realizes the coupling. We show that the resonance frequency of the mechanically compliant string embedded in the SQUID loop can be controlled in two different ways: (1) via the bias magnetic flux applied perpendicular to the SQUID loop, and (2) via the magnitude of the in-plane bias magnetic field contributing to the nano-electromechanical coupling. These findings are quantitatively explained by the inductive interaction contributing to the effective spring constant of the mechanical resonator. In addition, we observe a residual field-dependent shift of the mechanical resonance frequency, which we attribute to the finite flux pinning of vortices trapped in the magnetic-field-biased nanostring.

8.
Sci Rep ; 11(1): 22302, 2021 11 16.
Article in English | MEDLINE | ID: mdl-34785722

ABSTRACT

Amplicon sequencing has revolutionized our ability to study DNA collected from environmental samples by providing a rapid and sensitive technique for microbial community analysis that eliminates the challenges associated with lab cultivation and taxonomic identification through microscopy. In water resources management, it can be especially useful to evaluate ecosystem shifts in response to natural and anthropogenic landscape disturbances to signal potential water quality concerns, such as the detection of toxic cyanobacteria or pathogenic bacteria. Amplicon sequencing data consist of discrete counts of sequence reads, the sum of which is the library size. Groups of samples typically have different library sizes that are not representative of biological variation; library size normalization is required to meaningfully compare diversity between them. Rarefaction is a widely used normalization technique that involves the random subsampling of sequences from the initial sample library to a selected normalized library size. This process is often dismissed as statistically invalid because subsampling effectively discards a portion of the observed sequences, yet it remains prevalent in practice and the suitability of rarefying, relative to many other normalization approaches, for diversity analysis has been argued. Here, repeated rarefying is proposed as a tool to normalize library sizes for diversity analyses. This enables (i) proportionate representation of all observed sequences and (ii) characterization of the random variation introduced to diversity analyses by rarefying to a smaller library size shared by all samples. While many deterministic data transformations are not tailored to produce equal library sizes, repeatedly rarefying reflects the probabilistic process by which amplicon sequencing data are obtained as a representation of the amplified source microbial community. Specifically, it evaluates which data might have been obtained if a particular sample's library size had been smaller and allows graphical representation of the effects of this library size normalization process upon diversity analysis results.
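
A minimal sketch of repeated rarefying for a single hypothetical sample: subsample the read counts to a shared library size many times without replacement and report the spread that this normalization step introduces into a diversity metric. The count vector and target library size below are illustrative.

```python
# Illustrative repeated rarefying of one sample's read counts.
import numpy as np

rng = np.random.default_rng(5)
counts = np.array([4000, 2500, 900, 400, 150, 40, 8, 2])   # reads per variant
target = 2000                                              # shared library size

def rarefy_once(counts, target):
    pool = np.repeat(np.arange(counts.size), counts)       # expand to read labels
    keep = rng.choice(pool, size=target, replace=False)    # subsample w/o replacement
    return np.bincount(keep, minlength=counts.size)

shannons = []
for _ in range(200):                                       # repeat the rarefying
    sub = rarefy_once(counts, target)
    p = sub[sub > 0] / target
    shannons.append(-np.sum(p * np.log(p)))
print(f"Shannon after rarefying to {target}: "
      f"{np.mean(shannons):.3f} ± {np.std(shannons):.3f}")
```

Reporting the distribution across repetitions, rather than a single rarefied value, is what allows the random variation introduced by library size normalization to be shown alongside the diversity estimate itself.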

9.
Viruses ; 12(9)2020 08 28.
Article in English | MEDLINE | ID: mdl-32872283

ABSTRACT

Human noroviruses (HuNoVs) are the leading causative agents of epidemic and sporadic acute gastroenteritis affecting people of all ages worldwide. However, very few dose-response studies have been carried out to determine the median infectious dose of HuNoVs. In this study, we evaluated the median infectious dose (ID50) and diarrhea dose (DD50) of the GII.4/2003 variant of HuNoV (Cin-2) in the gnotobiotic pig model of HuNoV infection and disease. Using various mathematical approaches (Reed-Muench, Dragstedt-Behrens, Spearman-Karber, exponential, approximate beta-Poisson dose-response models, and area under the curve methods), we estimated the ID50 and DD50 to be 2,400-3,400 RNA copies and 21,000-38,000 RNA copies, respectively. Contemporary dose-response models offer greater flexibility and accuracy in estimating ID50. In contrast to classical methods of endpoint estimation, dose-response modelling allows seamless analyses of data that may include inconsistent dilution factors between doses or numbers of subjects per dose group, or small numbers of subjects. Although this investigation is consistent with state-of-the-art ID50 determinations and offers an advancement in clinical data analysis, it is important to underscore that such analyses remain confounded by pathogen aggregation. Regardless, determination of the challenge virus strain's ID50 is crucial for identifying the true infectiousness of HuNoVs and for the accurate evaluation of protective efficacies in pre-clinical studies of therapeutics, vaccines and other prophylactics using this reliable animal model.
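
For orientation, the sketch below evaluates the two parametric dose-response forms named in the abstract (exponential and approximate beta-Poisson). The parameter values are placeholders chosen only to fall near the reported ID50 range; they are not the study's fitted estimates.

```python
# Illustrative dose-response calculations (placeholder parameters).
import numpy as np

def p_inf_exponential(dose, r):
    return 1.0 - np.exp(-r * dose)

def p_inf_beta_poisson(dose, alpha, n50):
    # approximate beta-Poisson parameterized by its own median infectious dose N50
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

r = 2.6e-4                                   # assumed per-copy infection probability
id50_exponential = np.log(2) / r
print(f"exponential-model ID50 ~ {id50_exponential:.0f} RNA copies")
print(f"exponential P(infection) at that dose: "
      f"{p_inf_exponential(id50_exponential, r):.2f}")        # 0.50 by construction
print(f"beta-Poisson P(infection) at 3,000 copies: "
      f"{p_inf_beta_poisson(3000, alpha=0.5, n50=2800):.2f}")
```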


Subject(s)
Caliciviridae Infections/virology , Norovirus/physiology , Virology/methods , Animals , Disease Models, Animal , Female , Gastroenteritis/virology , Germ-Free Life , Humans , Male , Norovirus/genetics , Norovirus/pathogenicity , Swine , Virulence
11.
Water Res ; 176: 115702, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32247998

ABSTRACT

The degree to which a technology used for drinking water treatment physically removes or inactivates pathogenic microorganisms is commonly expressed as a log-reduction (or log-removal) and is of central importance to the provision of microbiologically safe drinking water. Many evaluations of water treatment process performance generate or compile multiple values of microorganism log-reduction, and it is common to report the average of these log-reduction values as a summary statistic. This work provides a cautionary note against misinterpretation and misuse of averaged log-reduction values by mathematically proving that the average of a set of log-reduction values characteristically overstates the average performance of which the set of log-reduction values is believed to be representative. This has two important consequences for drinking water and food safety as well as other applications of log-reduction: 1) a technology with higher average log-reduction does not necessarily have higher average performance, and 2) risk analyses using averaged log-reduction values as point estimates of treatment efficiency will underestimate average risk-sometimes by well over an order of magnitude. When analyzing a set of log-reduction values, a summary statistic called the effective log-reduction (which averages reduction or passage rates and expresses this as a log-reduction) provides a better representation of average performance of a treatment technology.
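
The central point can be reproduced with made-up log-reduction values: the arithmetic mean of log-reductions overstates performance relative to the effective log-reduction, which averages the passage fractions and re-expresses that average on the log scale.

```python
# Illustrative comparison of averaged log-reduction vs. effective log-reduction.
import numpy as np

log_reductions = np.array([2.0, 3.0, 6.0])         # hypothetical LR observations
passage = 10.0 ** (-log_reductions)                # fraction of organisms passing
arithmetic_mean_lr = log_reductions.mean()
effective_lr = -np.log10(passage.mean())           # averages passage, not logs

print(f"arithmetic mean LR : {arithmetic_mean_lr:.2f}")   # ~3.67
print(f"effective LR       : {effective_lr:.2f}")         # ~2.44, i.e. less removal
```

Here a risk analysis using the 3.67-log average as a point estimate would understate the average dose passing treatment by more than an order of magnitude relative to the 2.44-log effective value.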


Subject(s)
Drinking Water , Water Purification
12.
Risk Anal ; 40(2): 352-369, 2020 02.
Article in English | MEDLINE | ID: mdl-31441953

ABSTRACT

In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose-response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
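
A generic sketch of the profile-likelihood procedure mentioned above, applied to a hypothetical single-hit dose-response data set with an immune fraction (the model and data are illustrative, not the paper's case studies): fix the parameter being profiled at a grid of values, re-optimize the remaining parameter at each value, and inspect whether the resulting likelihood profile is flat (weakly identifiable) or sharply peaked.

```python
# Illustrative profile likelihood for a hypothetical single-hit model
# with an immune fraction; all data and bounds are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

doses = np.array([10.0, 100.0, 1000.0])
infected = np.array([1, 3, 5])
n_subjects = np.array([6, 6, 6])

def neg_log_lik(r, immune_frac):
    p = (1.0 - immune_frac) * (1.0 - np.exp(-r * doses))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -binom.logpmf(infected, n_subjects, p).sum()

# Profile over the immune fraction: optimize r at each fixed value
for immune_frac in np.linspace(0.0, 0.5, 6):
    fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0),
                          args=(immune_frac,), method="bounded")
    print(f"immune fraction {immune_frac:.1f}: "
          f"r-hat {fit.x:.4f}, -logL {fit.fun:.2f}")
```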

13.
Sensors (Basel) ; 19(19)2019 Sep 20.
Article in English | MEDLINE | ID: mdl-31547220

ABSTRACT

Affect recognition is an interdisciplinary research field bringing together researchers from natural and social sciences. Affect recognition research aims to detect the affective state of a person based on observables, with the goal of, for example, providing insight into the person's decision making or supporting mental wellbeing (e.g., stress monitoring). Recently, besides approaches based on audio, visual, or text information, solutions relying on wearable sensors as observables (recording mainly physiological and inertial parameters) have received increasing attention. Wearable systems provide an ideal platform for long-term affect recognition applications due to their rich functionality and form factor, yielding valuable insights during everyday life through integrated sensors. However, existing literature surveys lack a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods and best practices of wearable affect and stress recognition. Following a summary of different psychological models, we detail the influence of affective states on the human physiology and the sensors commonly employed to measure physiological changes. Then, we outline lab protocols eliciting affective states and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain and review common approaches related to the preprocessing, feature extraction and classification steps. By providing a comprehensive summary of the state-of-the-art and guidelines on various aspects, we aim to enable other researchers in the field to conduct and evaluate user studies and develop wearable systems.


Subject(s)
Wearable Electronic Devices , Humans , Machine Learning , Mental Health
14.
Sensors (Basel) ; 19(14)2019 Jul 12.
Article in English | MEDLINE | ID: mdl-31336894

ABSTRACT

Photoplethysmography (PPG)-based continuous heart rate monitoring is essential in a number of domains, e.g., for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios of small, public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close to real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain, and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised PPG and accelerometer signals as input, and provides the estimated heart rate as output. Finally, we compare the novel deep learning approach to classical methods, performing evaluation on four public datasets. We show that on large datasets the deep learning model significantly outperforms other methods: the mean absolute error could be reduced by 31% on the new PPG-DaLiA dataset, and by 21% on the WESAD dataset.
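
A minimal sketch of the input/output interface such an end-to-end model implies; the layer sizes, channel counts, and spectrogram dimensions below are assumptions for illustration, not the architecture evaluated in the paper.

```python
# Illustrative CNN mapping PPG/accelerometer spectrograms to a heart-rate
# estimate; shapes and layers are assumed, not the published architecture.
import torch
import torch.nn as nn

class HREstimator(nn.Module):
    def __init__(self, n_channels=4):               # 1 PPG + 3 accelerometer axes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, 1)         # regression to beats per minute

    def forward(self, spectra):                      # spectra: (batch, 4, freq, time)
        x = self.features(spectra).flatten(1)
        return self.head(x).squeeze(-1)

model = HREstimator()
dummy = torch.randn(8, 4, 64, 32)                    # assumed spectrogram window
print(model(dummy).shape)                            # torch.Size([8])
```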


Subject(s)
Algorithms , Datasets as Topic , Heart Rate/physiology , Neural Networks, Computer , Photoplethysmography/methods , Adolescent , Adult , Artifacts , Databases, Factual , Deep Learning , Exercise/physiology , Female , Humans , Male , Middle Aged , Young Adult
15.
Nat Commun ; 10(1): 2144, 2019 05 13.
Article in English | MEDLINE | ID: mdl-31086185

ABSTRACT

Pathogens face varying microenvironments in vivo, but suitable experimental systems and analysis tools to dissect how three-dimensional (3D) tissue environments impact pathogen spread are lacking. Here we develop an Integrative method to Study Pathogen spread by Experiment and Computation within Tissue-like 3D cultures (INSPECT-3D), combining quantification of pathogen replication with imaging to study single-cell and cell population dynamics. We apply INSPECT-3D to analyze HIV-1 spread between primary human CD4 T-lymphocytes using collagen as tissue-like 3D-scaffold. Measurements of virus replication, infectivity, diffusion, cellular motility and interactions are combined by mathematical analyses into an integrated spatial infection model to estimate parameters governing HIV-1 spread. This reveals that environmental restrictions limit infection by cell-free virions but promote cell-associated HIV-1 transmission. Experimental validation identifies cell motility and density as essential determinants of efficacy and mode of HIV-1 spread in 3D. INSPECT-3D represents an adaptable method for quantitative time-resolved analyses of 3D pathogen spread.


Subject(s)
CD4-Positive T-Lymphocytes/virology , HIV-1/pathogenicity , Models, Biological , Primary Cell Culture/methods , Virus Physiological Phenomena , CD4-Positive T-Lymphocytes/physiology , Cell Movement , Cells, Cultured , Computer Simulation , HEK293 Cells , HIV-1/physiology , Healthy Volunteers , Humans
16.
Front Microbiol ; 9: 2304, 2018.
Article in English | MEDLINE | ID: mdl-30344512

ABSTRACT

Accurate estimation of microbial concentrations is necessary to inform many important environmental science and public health decisions and regulations. Critically, widespread misconceptions about laboratory-reported microbial non-detects have led to their erroneous description and handling as "censored" values. This ultimately compromises their interpretation and undermines efforts to describe and model microbial concentrations accurately. Herein, these misconceptions are dispelled by (1) discussing the critical differences between discrete microbial observations and continuous data acquired using analytical chemistry methodologies and (2) demonstrating the bias introduced by statistical approaches tailored for chemistry data and misapplied to discrete microbial data. Notably, these approaches especially preclude the accurate representation of low concentrations and those estimated using microbial methods with low or variable analytical recovery, which can be expected to result in non-detects. Techniques that account for the probabilistic relationship between observed data and underlying microbial concentrations have been widely demonstrated, and their necessity for handling non-detects (in a way which is consistent with the handling of positive observations) is underscored herein. Habitual reporting of raw microbial observations and sample sizes is proposed to facilitate accurate estimation and analysis of microbial concentrations.
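
A minimal sketch of the probabilistic (rather than censored-data) treatment the abstract argues for, using hypothetical plate counts and an assumed mean analytical recovery: zero counts enter the same Poisson likelihood as positive counts when estimating the underlying concentration, so non-detects need no substitution or censoring adjustment.

```python
# Illustrative maximum-likelihood concentration estimate from discrete counts,
# with non-detects handled as zero-count observations (all data hypothetical).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

counts = np.array([0, 0, 1, 3, 0, 2])       # observed colonies (zeros = non-detects)
volumes = np.full(counts.size, 0.1)         # litres analyzed per sample
recovery = 0.6                              # assumed mean analytical recovery

def neg_log_lik(conc):
    lam = conc * volumes * recovery         # expected count in each sample
    return -poisson.logpmf(counts, lam).sum()

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1e4), method="bounded")
print(f"MLE concentration: {fit.x:.1f} per litre")   # ~16.7/L for these data
```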

18.
Mayo Clin Proc ; 91(8): 1056-65, 2016 08.
Article in English | MEDLINE | ID: mdl-27492912

ABSTRACT

OBJECTIVE: To identify factors underlying heart failure hospitalization. METHODS: Between January 1, 2012, and May 31, 2012, we combined medical record reviews and cross-sectional qualitative interviews of multiple patients with heart failure, their clinicians, and their caregivers from a large academic medical center in the Midwestern United States. The interview data were analyzed using a 3-step grounded theory-informed process and constant comparative methods. Qualitative data were compared and contrasted with results from the medical record review. RESULTS: Patient nonadherence to the care plan was the most important contributor to hospital admission; however, reasons for nonadherence were complex and multifactorial. The data highlight the importance of patient education for the purposes of condition management, timeliness of care, and effective communication between providers and patients. CONCLUSION: To improve the consistency and quality of care for patients with heart failure, more effective relationships among patients, providers, and caregivers are needed. Providers must be pragmatic when educating patients and their caregivers about heart failure, its treatment, and its prognosis.


Subject(s)
Caregivers/psychology , Heart Failure/psychology , Inpatients/psychology , Insurance, Health/standards , Patient Compliance/psychology , Physicians/psychology , Attitude of Health Personnel , Cross-Sectional Studies , Female , Heart Failure/therapy , Humans , Inpatients/education , Insurance, Health/economics , Interviews as Topic , Male , Medical Records , Middle Aged , Midwestern United States , Patient Compliance/statistics & numerical data , Patient Education as Topic/methods , Patient Education as Topic/standards , Patient Readmission/economics , Patient Readmission/standards , Patient Readmission/statistics & numerical data , Physician-Patient Relations , Qualitative Research , Risk Factors , Self Care/psychology , Self Care/statistics & numerical data
20.
Risk Anal ; 35(7): 1364-83, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25522208

ABSTRACT

Two forms of single-hit infection dose-response models have previously been developed to assess available data from human feeding trials and estimate the norovirus dose-response relationship. The mechanistic interpretations of these models include strong assumptions that warrant reconsideration: the first study includes an implicit assumption that there is no immunity to Norwalk virus among the specific study population, while the recent second study includes assumptions that such immunity could exist and that the nonimmune have no defensive barriers to prevent infection from exposure to just one virus. Both models addressed unmeasured virus aggregation in administered doses. In this work, the available data are reanalyzed using a generalization of the first model to explore these previous assumptions. It was hypothesized that concurrent estimation of an unmeasured degree of virus aggregation and important dose-response parameters could lead to structural nonidentifiability of the model (i.e., that a diverse range of alternative mechanistic interpretations yield the same optimal fit), and this is demonstrated using the profile likelihood approach and by algebraic proof. It is also demonstrated that omission of an immunity parameter can artificially inflate the estimated degree of aggregation and falsely suggest high susceptibility among the nonimmune. The currently available data support the assumption of immunity within the specific study population, but provide only weak information about the degree of aggregation and susceptibility among the nonimmune. The probability of infection at low and moderate doses may be much lower than previously asserted, but more data from strategically designed dose-response experiments are needed to provide adequate information.


Subject(s)
Norovirus/pathogenicity , Humans