Results 1 - 15 of 15
1.
Biophys J ; 122(8): 1517-1525, 2023 04 18.
Article in English | MEDLINE | ID: mdl-36926695

ABSTRACT

The stress-free state (SFS) of red blood cells (RBCs) is a fundamental reference configuration for the calibration of computational models, yet it remains unknown. Current experimental methods cannot measure the SFS of cells without affecting their mechanical properties, whereas computational postulates are the subject of controversial discussions. Here, we introduce data-driven estimates of the SFS shape and the visco-elastic properties of RBCs. We employ data from single-cell experiments that include measurements of the equilibrium shape of stretched cells and relaxation times of initially stretched RBCs. A hierarchical Bayesian model accounts for these experimental and data heterogeneities. We quantify, for the first time, the SFS of RBCs and use it to introduce a transferable RBC (t-RBC) model. The effectiveness of the proposed model is demonstrated on predictions of experimental conditions unseen during inference, including the critical stress of the transition between tumbling and tank-treading cells in shear flow. Our findings demonstrate that the proposed t-RBC model provides predictions of blood flows with unprecedented accuracy and quantified uncertainties.


Subject(s)
Erythrocytes , Humans , Bayes Theorem , Computer Simulation , Erythrocytes/physiology , Viscosity
2.
J Med Ethics ; 48(3): 175-183, 2022 03.
Article in English | MEDLINE | ID: mdl-33687916

ABSTRACT

Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient's Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations and (2) there is considerable openness among clinicians to consider the use of AI-based decision support. We suggest a model for how AI can contribute to improve decision-making around resuscitation and propose a set of ethically relevant preconditions-conceptual, methodological and procedural-that need to be considered in further development and implementation efforts.


Subject(s)
Artificial Intelligence , COVID-19 , Humans , Pandemics , Resuscitation Orders , SARS-CoV-2
3.
R Soc Open Sci ; 8(1): 200531, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33614060

ABSTRACT

Effective intervention strategies for epidemics rely on the identification of their origin and on the robustness of the predictions made by network disease models. We introduce a Bayesian uncertainty quantification framework to infer model parameters for a disease spreading on a network of communities from limited, noisy observations; the state-of-the-art computational framework compensates for the model complexity by exploiting massively parallel computing architectures. Using noisy, synthetic data, we show the potential of the approach to perform robust model fitting and additionally demonstrate that we can effectively identify the disease origin via Bayesian model selection. As disease-related data are increasingly available, the proposed framework has broad practical relevance for the prediction and management of epidemics.
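The network model and parallel framework in the abstract are beyond a short sketch, but the core idea, Bayesian selection of the disease origin from noisy case counts, can be illustrated with a deliberately tiny toy: two candidate seedings of an exponential-growth model compared by grid-approximated marginal likelihood. All numbers below are synthetic and illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" daily infections: early exponential growth from 5 seed cases
t = np.arange(10)
data = 5.0 * np.exp(0.3 * t) + rng.normal(0.0, 2.0, size=t.size)

def log_evidence(i0, betas, sigma=2.0):
    """Grid-approximate log marginal likelihood of an exponential-growth
    model seeded with i0 cases, under a uniform prior on the rate beta."""
    lls = np.array([-0.5 * np.sum((data - i0 * np.exp(b * t)) ** 2) / sigma**2
                    for b in betas])
    # log-mean-exp over the grid = log evidence under the uniform prior
    return np.logaddexp.reduce(lls) - np.log(len(betas))

betas = np.linspace(0.05, 0.6, 200)
logZ_A = log_evidence(5.0, betas)   # candidate origin A: 5 seed cases (matches truth)
logZ_B = log_evidence(1.0, betas)   # candidate origin B: 1 seed case
print("log Bayes factor A vs B:", logZ_A - logZ_B)
```

The candidate whose seeding matches the synthetic data attains the larger evidence; in a network model the same comparison would run over candidate origin communities.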

4.
Environ Sci Pollut Res Int ; 28(6): 7043-7067, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33025432

ABSTRACT

A novel index-based method (RIVA) for the assessment of intrinsic groundwater vulnerability is proposed, based on the successful concept of the European approach (Zwahlen 2003) and incorporating additional elements that provide more realistic and representative results. Its concept includes four main factors: the recharge to the system (R), the infiltration conditions (I), the protection offered by the vadose zone (V), and the aquifer characteristics (A). Several sub-factors and parameters are involved in the calculation of the final intrinsic vulnerability index. Even though RIVA is a comprehensive method that produces reliable results, it is not data intensive, does not require advanced skills in data preparation and processing, and may safely be applied regardless of aquifer type, prevalent porosity, geometric and geo-tectonic setup, and site-specific conditions. Its development incorporated careful consideration of all key existing groundwater vulnerability methods and their critical aspects (factors, parameters, rating, etc.), endorsing their virtues while avoiding or modifying factors and approaches that are difficult to quantify, ambiguous to assess, or not uniformly applicable to every hydrogeological setup. RIVA has been successfully demonstrated in the intensively cultivated Kopaida plain, Central Greece, which is characterized by a complex and heterogeneous geological background. Validation was performed against ground-truth monitoring data, combined with in-depth knowledge of the geological structure, hydrogeological setup, regional hydrodynamic evolution mechanisms, and the dominant driving pressures. The results clearly demonstrated the ability of the proposed methodology to capture accurately and reliably the spatially distributed zones of different vulnerability classes, as shaped by the considered factors. RIVA thus offers a fair trade-off between the accuracy achieved and the data intensity and investment required to reach highly accurate results. As such, it is envisaged to become an efficient method for performing reliable groundwater vulnerability assessments of complex environments when neither the resources nor the time to generate intensive data are available, and ultimately to be valorized in further risk assessment and decision-making processes related to groundwater resource management.


Subject(s)
Environmental Monitoring , Groundwater , Greece , Hydrodynamics , Porosity
5.
Swiss Med Wkly ; 150: w20445, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33327002

ABSTRACT

The systematic identification of infected individuals is critical for the containment of the COVID-19 pandemic. Currently, the spread of the disease is mostly quantified by the reported numbers of infections, hospitalisations, recoveries and deaths; these quantities inform epidemiology models that provide forecasts for the spread of the epidemic and guide policy making. The veracity of these forecasts depends on the discrepancy between the numbers of reported, and unreported yet infectious, individuals. We combine Bayesian experimental design with an epidemiology model and propose a methodology for the optimal allocation of limited testing resources in space and time, which maximises the information gain for such unreported infections. The proposed approach is applicable at the onset and spread of the epidemic and can forewarn of a possible recurrence of the disease after relaxation of interventions. We examine its application in Switzerland; the open source software is, however, readily adaptable to countries around the world. We find that following the proposed methodology can lead to vastly less uncertain predictions for the spread of the disease, thus improving estimates of the effective reproduction number and the future number of unreported infections. This information can provide timely and systematic guidance for the effective identification of infectious individuals and for decision-making regarding lockdown measures and the distribution of vaccines.
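A minimal sketch of the Bayesian experimental design ingredient: the expected information gain (EIG) of allocating n tests, for a toy model where prevalence has a Beta prior and the outcome is binomial. The prior parameters and grid quadrature are illustrative assumptions, not the paper's epidemiology model or its spatial allocation problem.

```python
import numpy as np
from scipy.stats import beta as beta_dist, binom

def expected_information_gain(n, a=2.0, b=20.0, grid=400):
    """Mutual information between prevalence theta ~ Beta(a, b) and the
    outcome k ~ Binomial(n, theta) of allocating n tests (grid quadrature)."""
    theta = np.linspace(1e-4, 1.0 - 1e-4, grid)
    dth = theta[1] - theta[0]
    prior = beta_dist.pdf(theta, a, b)
    prior /= prior.sum() * dth                       # renormalize on the grid
    k = np.arange(n + 1)
    like = binom.pmf(k[:, None], n, theta[None, :])  # p(k | theta)
    marg = (like * prior).sum(axis=1) * dth          # p(k)
    post = like * prior / marg[:, None]              # p(theta | k)
    kl = (post * np.log(np.maximum(post, 1e-300) / np.maximum(prior, 1e-300))
          ).sum(axis=1) * dth                        # KL(post || prior) per outcome
    return float((marg * kl).sum())                  # EIG = E_k[ KL ]

eigs = {n: expected_information_gain(n) for n in (10, 50, 200)}
print(eigs)
```

Maximizing this quantity over candidate allocations, subject to a budget, is the design step; more tests always buy more information, but with diminishing returns.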


Subject(s)
COVID-19 Testing/methods , COVID-19/epidemiology , Communicable Disease Control/methods , Epidemiological Monitoring , Health Policy , Resource Allocation/methods , Bayes Theorem , COVID-19/diagnosis , COVID-19/prevention & control , COVID-19/transmission , Diagnostic Services/supply & distribution , Forecasting , Humans , Random Allocation , SARS-CoV-2 , Switzerland/epidemiology
6.
Swiss Med Wkly ; 150: w20313, 2020 07 13.
Article in English | MEDLINE | ID: mdl-32677705

ABSTRACT

The reproduction number is broadly considered a key indicator for the spreading of the COVID-19 pandemic. Its estimated value is a measure of the necessity and, ultimately, the effectiveness of interventions imposed in various countries. Here we present an online tool for the data-driven inference and quantification of uncertainties for the reproduction number, as well as the time points of interventions, for 51 European countries. The study relied on the Bayesian calibration of the SIR model with data from reported daily infections from these countries. The model fitted the data, for most countries, without individual tuning of parameters. We also compared the results of the SIR and SEIR models, which give different estimates of the reproduction number, and provided an analytical relationship between the respective numbers. We deployed a Bayesian inference framework with efficient sampling algorithms to present a publicly available graphical user interface (https://cse-lab.ethz.ch/coronavirus) that allows the user to assess and compare predictions for pairs of European countries. The results quantified the rate of the disease's spread before and after interventions, and provided a metric for the effectiveness of non-pharmaceutical interventions in different countries. They also indicated how geographic proximity and the timing of interventions affected the progression of the epidemic.
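The SIR model underlying the calibration can be sketched in a few lines; the rates below are illustrative placeholders, not values inferred for any country.

```python
import numpy as np

def simulate_sir(beta, gamma, i0=1e-3, days=160, dt=0.1):
    """Forward-Euler integration of the SIR model in population fractions.
    The basic reproduction number is R0 = beta / gamma."""
    s, i, r = 1.0 - i0, i0, 0.0
    daily = []
    for step in range(int(days / dt)):
        if step % int(round(1 / dt)) == 0:
            daily.append((s, i, r))                  # record once per day
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i, r = s + dt * ds, i + dt * di, r - dt * (ds + di)
    return np.array(daily)

beta, gamma = 0.35, 0.14            # illustrative rates, not fitted values
traj = simulate_sir(beta, gamma)
r0 = beta / gamma
peak_infected = traj[:, 1].max()
print(f"R0 = {r0:.2f}, peak infected fraction = {peak_infected:.3f}")
```

Calibration then amounts to inferring beta and gamma (and hence R0, piecewise around intervention dates) from reported daily infections.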


Subject(s)
Basic Reproduction Number/statistics & numerical data , Coronavirus Infections , Disease Transmission, Infectious/statistics & numerical data , Epidemiological Monitoring , Pandemics , Pneumonia, Viral , Bayes Theorem , Betacoronavirus/isolation & purification , COVID-19 , Communicable Disease Control/methods , Communicable Disease Control/statistics & numerical data , Coronavirus Infections/epidemiology , Coronavirus Infections/prevention & control , Coronavirus Infections/transmission , Disease Transmission, Infectious/prevention & control , Epidemiologic Measurements , Europe/epidemiology , Humans , Pandemics/prevention & control , Pandemics/statistics & numerical data , Pneumonia, Viral/epidemiology , Pneumonia, Viral/prevention & control , Pneumonia, Viral/transmission , SARS-CoV-2 , Uncertainty
7.
Biomimetics (Basel) ; 5(1)2020 Mar 09.
Article in English | MEDLINE | ID: mdl-32182929

ABSTRACT

Fish schooling implies an awareness of the swimmers for their companions. In flow-mediated environments, in addition to visual cues, pressure and shear sensors on the fish body are critical for providing quantitative information about the proximity of other fish. Here we examine the distribution of sensors on the surface of an artificial swimmer so that it can optimally identify a leading group of swimmers. We employ Bayesian experimental design coupled with numerical simulations of the two-dimensional Navier-Stokes equations for multiple self-propelled swimmers. The follower tracks the school using information from its own surface pressure and shear stress. We demonstrate that the optimal sensor distribution of the follower is qualitatively similar to the distribution of neuromasts on fish. Our results show that it is possible to accurately identify the center of mass and the number of the leading swimmers using surface information alone.

8.
Sci Rep ; 9(1): 99, 2019 01 14.
Article in English | MEDLINE | ID: mdl-30643172

ABSTRACT

The need for accurate and computationally efficient representations of water in atomistic simulations that can span biologically relevant timescales has driven the development of coarse-grained (CG) models. Despite numerous advances, CG water models rely mostly on a priori specified assumptions. How these assumptions affect the model accuracy, efficiency, and in particular transferability has not been systematically investigated. Here we propose a data-driven comparison and selection of CG water models through a hierarchical Bayesian framework. We examine CG water models that differ in their level of coarse-graining, structure, and number of interaction sites. We find that the importance of electrostatic interactions for the physical system under consideration is a dominant criterion for the model selection. Multi-site models are favored, unless the effects of water in electrostatic screening are not relevant, in which case the single-site model is preferred for its computational savings. The charge distribution is found to play an important role in the multi-site models' accuracy, while the flexibility of the bonds and angles may only slightly improve the models. Furthermore, we find significant variations in the computational cost of these models. We present a data-informed rationale for the selection of CG water models and provide guidance for future water model designs.

9.
Bull Math Biol ; 81(8): 3074-3096, 2019 08.
Article in English | MEDLINE | ID: mdl-29992453

ABSTRACT

We propose the S-leaping algorithm for the acceleration of Gillespie's stochastic simulation algorithm, which combines the advantages of the two main accelerated methods: the τ-leaping and R-leaping algorithms. These algorithms are known to be efficient under different conditions: τ-leaping is efficient for non-stiff systems or systems with partial equilibrium, while R-leaping performs better in stiff systems thanks to an efficient sampling procedure. However, even a small change in a system's setup can critically affect the nature of the simulated system and thus reduce the efficiency of an accelerated algorithm. The proposed algorithm combines the efficient time-step selection of τ-leaping with the effective sampling procedure of R-leaping. S-leaping is shown to maintain its efficiency under different conditions, and for large and stiff systems or systems with fast dynamics, S-leaping outperforms both methods. We demonstrate the performance and accuracy of S-leaping in comparison with τ-leaping and R-leaping on a number of benchmark systems involving biological reaction networks.
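S-leaping itself is not reproduced here, but its ingredients are easy to sketch on the simplest possible network, a single decay reaction: Gillespie's exact SSA and a fixed-step τ-leaping approximation (a minimal stand-in; real τ-leaping selects the step adaptively).

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_decay(x0, c, t_end):
    """Gillespie's direct method (exact SSA) for the decay reaction A -> 0."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))   # waiting time ~ Exp(propensity)
        if t > t_end:
            break
        x -= 1
    return x

def tau_leap_decay(x0, c, t_end, tau=0.05):
    """Fixed-step tau-leaping: fire a Poisson number of reactions per leap."""
    x, t = x0, 0.0
    while t < t_end and x > 0:
        x = max(x - rng.poisson(c * x * tau), 0)
        t += tau
    return x

x0, c, t_end, runs = 1000, 1.0, 1.0, 400
ssa_mean = np.mean([ssa_decay(x0, c, t_end) for _ in range(runs)])
leap_mean = np.mean([tau_leap_decay(x0, c, t_end) for _ in range(runs)])
exact_mean = x0 * np.exp(-c * t_end)          # mean of the exact process
print(ssa_mean, leap_mean, round(exact_mean, 1))
```

The leap variant trades a small, step-size-dependent bias for far fewer random draws; S-leaping additionally borrows R-leaping's batched sampling of reaction channels.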


Subject(s)
Algorithms , Models, Biological , Bacillus subtilis/genetics , Bacillus subtilis/metabolism , Biochemical Phenomena , Computer Simulation , Dimerization , Escherichia coli/genetics , Escherichia coli/metabolism , Escherichia coli Proteins/genetics , Escherichia coli Proteins/metabolism , Kinetics , Lac Operon , Markov Chains , Mathematical Concepts , Monosaccharide Transport Proteins/genetics , Monosaccharide Transport Proteins/metabolism , Stochastic Processes , Symporters/genetics , Symporters/metabolism , Systems Biology
10.
J Mech Behav Biomed Mater ; 90: 256-263, 2019 02.
Article in English | MEDLINE | ID: mdl-30388509

ABSTRACT

We investigate the capacity of tendons to bear substantial loads by exploiting their hierarchical structure and the viscous nature of their subunits. We model and analyze two successive tendon scales: the fibril and fiber subunits. We present a novel method for bridging intra-scale experimental observations by combining a homogenization analysis technique with a Bayesian inference method. This allows us to infer elastic and viscoelastic moduli at the embedded fibril scale that are mechanically compatible with the experimental data observed at the fiber scale. We identify the rather narrow range of moduli values at the fibrillar scale that can reproduce the mechanical behavior of the fiber, while we quantify the viscoelastic contribution of the embedding, non-collagenous matrix substance. The computed viscoelastic moduli suggest that a great part of the stress relaxation capacity of tendons needs to be attributed to the embedding matrix substance of its inner components, classifying it as a primal load relaxation constituent.
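As a rough stand-in for the paper's homogenization-plus-Bayesian-inference pipeline, the snippet below fits a one-branch standard-linear-solid relaxation curve to synthetic fiber-scale data by nonlinear least squares; all moduli and the noise level are made-up illustrative values, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def relaxation(t, g_inf, g1, tau):
    """One-branch standard-linear-solid relaxation: G(t) = G_inf + G1*exp(-t/tau)."""
    return g_inf + g1 * np.exp(-t / tau)

# Synthetic fiber-scale relaxation data (illustrative values, not the paper's)
t = np.linspace(0.0, 50.0, 200)
g_obs = relaxation(t, 0.4, 0.6, 8.0) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(relaxation, t, g_obs, p0=(0.5, 0.5, 5.0))
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties, a cheap stand-in for
                                # the paper's full Bayesian posterior
print("G_inf, G1, tau =", popt, "+/-", perr)
```

The paper's contribution is that the inferred quantities live at the embedded fibril scale, connected to the observed fiber scale through a homogenization map rather than fitted directly as here.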


Subject(s)
Elasticity , Models, Biological , Tendons/physiology , Bayes Theorem , Biomechanical Phenomena , Uncertainty , Viscosity , Weight-Bearing
11.
Fetal Diagn Ther ; 44(3): 228-235, 2018.
Article in English | MEDLINE | ID: mdl-29045943

ABSTRACT

BACKGROUND: The diagnostic assessment of fetal arrhythmias relies on the measurements of atrioventricular (AV) and ventriculoatrial (VA) time intervals. Pulsed Doppler over in- and outflow of the left ventricle and tissue Doppler imaging are well-described methods, while Doppler measurements between the left brachiocephalic vein and the aortic arch are less investigated. The aim of this study was to compare these methods of measurement, to find influencing factors on AV and VA times and their ratio, and to create reference ranges. METHODS: Echocardiography was performed between 16 and 40 weeks of gestation in normal singleton pregnancies. Nomograms for the individual measurements were created using quantile regression with Matlab Data Analytics. Statistical analyses were performed with GraphPad version 5.0 for Windows. RESULTS: A total of 329 pregnant women were enrolled. A significant correlation exists between AV and VA times and gestational age (GA) (p = 0.0104 to <0.0001, σ = 0.1412 to 0.3632). No correlation was found between the AV:VA ratio and GA (p = 0.08 to 0.60). All measurements differed significantly amongst the studied methods (p < 0.0001). CONCLUSIONS: AV and VA intervals increase proportionally with GA; no other independent influencing factors could be identified. As significant differences exist between the three methods of assessment, it is crucial to use appropriate reference ranges to diagnose pathologies.


Subject(s)
Arrhythmias, Cardiac/diagnostic imaging , Fetal Heart/diagnostic imaging , Heart Rate, Fetal/physiology , Echocardiography , Female , Humans , Pregnancy , Prospective Studies , Reference Values
12.
Sci Rep ; 7(1): 16576, 2017 11 29.
Article in English | MEDLINE | ID: mdl-29185461

ABSTRACT

The Lennard-Jones (LJ) potential is a cornerstone of Molecular Dynamics (MD) simulations and among the most widely used computational kernels in science. The LJ potential models atomistic attraction and repulsion with century-old prescribed parameters (q = 6, p = 12, respectively), originally related by a factor of two for simplicity of calculations. We propose the inference of the repulsion exponent through hierarchical Bayesian uncertainty quantification. We use experimental data of the radial distribution function and dimer interaction energies from quantum mechanics simulations. We find that the repulsion exponent p ≈ 6.5 provides an excellent fit for the experimental data of liquid argon, for a range of thermodynamic conditions, as well as for saturated argon vapour. Calibration using the quantum simulation data did not provide a good fit in these cases. However, the value p ≈ 12.7 obtained from dimer quantum simulations is preferred for argon gas, while lower values are promoted by experimental data. These results show that the proposed LJ 6-p potential applies to a wider range of thermodynamic conditions than the classical LJ 6-12 potential. We suggest that calibration of the repulsive exponent in the LJ potential widens the range of applicability and accuracy of MD simulations.
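The generalized LJ 6-p potential is straightforward to write down. The sketch below uses the standard Mie normalization (an assumption; the paper may normalize differently), which keeps the well depth at -ε for any exponent pair p > q.

```python
import numpy as np

def lj_6p(r, eps=1.0, sigma=1.0, p=12.0, q=6.0):
    """Generalized Lennard-Jones (Mie) 6-p potential. The Mie prefactor
    keeps the well depth at -eps for any exponents p > q; for p=12, q=6
    it reduces to the classical 4*eps*((s/r)^12 - (s/r)^6)."""
    c = (p / (p - q)) * (p / q) ** (q / (p - q))
    return c * eps * ((sigma / r) ** p - (sigma / r) ** q)

r = np.linspace(0.95, 2.5, 5)
print("LJ 6-12 :", lj_6p(r))
print("LJ 6-6.5:", lj_6p(r, p=6.5))   # much softer repulsion at short range
```

The minimum sits at r_min = (p/q)^(1/(p-q)) * sigma with value -eps, so lowering p from 12 toward 6.5 softens the repulsive wall without changing the well depth.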

13.
J Chem Phys ; 144(10): 104107, 2016 Mar 14.
Article in English | MEDLINE | ID: mdl-26979681

ABSTRACT

We demonstrate that centered likelihood-ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that remains constant in time, and are consequently suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics, and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics, such as chemical reaction networks, Langevin-type equations, and stochastic models in finance, including systems with a high-dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm, without additional modifications.
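The centering trick behind these estimators can be shown on a static toy problem (the paper targets stochastic dynamics, which this sketch does not attempt): differentiating E[X] for X ~ Exp(θ) via the likelihood-ratio (score) method, with and without centering the observable.

```python
import numpy as np

rng = np.random.default_rng(2)

theta = 2.0
x = rng.exponential(1.0 / theta, size=200_000)   # X ~ Exp(rate theta)
f = x                                            # observable f(X) = X
score = 1.0 / theta - x                          # d/dtheta log p(x; theta)

plain = f * score                                # plain LR estimator samples
centered = (f - f.mean()) * score                # centered LR estimator samples

exact = -1.0 / theta**2                          # d/dtheta E[X] = -1/theta^2
print(plain.mean(), centered.mean(), exact)
```

Both sample means estimate the same derivative, but subtracting the observable's mean before multiplying by the score lowers the estimator variance, which is what makes the centered form viable in long-time regimes where the plain score's variance grows.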

14.
PLoS One ; 10(7): e0130825, 2015.
Article in English | MEDLINE | ID: mdl-26161544

ABSTRACT

Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting the advantages and synergies of two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated with the first method. The first step of the proposed strategy ranks sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems; the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters.
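A drastically simplified sketch of the two-step strategy, using a toy one-reaction model with analytically known Fisher information (the paper estimates it from trajectories) and one deliberately inert parameter:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n=50_000):
    """Toy stochastic model: counts N ~ Poisson(10 * theta[0]).
    theta[1] is deliberately inert, so a good screen should discard it."""
    return rng.poisson(10.0 * theta[0], size=n)

theta = np.array([1.5, 0.7])
samples = simulate(theta)

# Step 1: screen with the bound |d E[f]/d theta_i| <= sqrt(Var(f) * I_ii),
# using the analytic Fisher information of the Poisson output,
# I_ii = (d lambda/d theta_i)^2 / lambda with lambda = 10 * theta[0].
lam = 10.0 * theta[0]
dlam = np.array([10.0, 0.0])          # lambda is independent of theta[1]
bound = np.sqrt(samples.var() * dlam**2 / lam)
sensitive = [i for i, b in enumerate(bound) if b > 1e-6]

# Step 2: central finite differences only for the surviving parameters
h = 0.1                               # coarse step: the two runs are independent here
grads = {}
for i in sensitive:
    tp, tm = theta.copy(), theta.copy()
    tp[i] += h
    tm[i] -= h
    grads[i] = (simulate(tp).mean() - simulate(tm).mean()) / (2 * h)

print("screened-in parameters:", sensitive, "gradients:", grads)
```

The inert parameter never reaches the expensive finite-difference step, which is exactly the source of the speedup the abstract quantifies.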


Subject(s)
Algorithms , Biostatistics/methods , Models, Biological , Stochastic Processes , ErbB Receptors/metabolism , Feedback, Physiological , Heat-Shock Proteins/metabolism , Homeostasis , Humans , Mathematical Computing , Reproducibility of Results , Tumor Suppressor Protein p53/metabolism
15.
J Chem Phys ; 140(12): 124108, 2014 Mar 28.
Article in English | MEDLINE | ID: mdl-24697425

ABSTRACT

In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems and, in particular, lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite-difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed dynamics, defined on a common state space. The novelty of our construction is that the coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, or surface roughness; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm admits an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate on several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, as supplementary MATLAB source code.
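The variance-reduction effect of coupling can be demonstrated with the simplest coupling, common random numbers, on a decay process (the paper's goal-oriented couplings for lattice KMC are more sophisticated; this is only the baseline they are compared against):

```python
import numpy as np

def decay_final(c, rng, x0=200, t_end=1.0):
    """Final count of A -> 0 at rate c via SSA, driven by the given
    unit-exponential stream; sharing the stream couples two runs."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.exponential(1.0) / (c * x)
        if t > t_end:
            break
        x -= 1
    return x

c, h, runs = 1.0, 0.1, 2000

# Finite difference with independent samples for the two perturbed rates
rng = np.random.default_rng(4)
indep = np.array([decay_final(c + h, rng) - decay_final(c - h, rng)
                  for _ in range(runs)]) / (2 * h)

# Common-random-number coupling: identical stream for both runs
coupled = np.array([decay_final(c + h, np.random.default_rng(s))
                    - decay_final(c - h, np.random.default_rng(s))
                    for s in range(runs)]) / (2 * h)

exact = -200 * 1.0 * np.exp(-c * 1.0)   # d/dc E[X(t)] = -x0 * t * exp(-c t)
print(indep.mean(), coupled.mean(), round(exact, 1))
print("variance reduction factor:", indep.var() / coupled.var())
```

Both estimators target the same derivative, but the coupled differences cancel most of the shared noise; the goal-oriented couplings in the paper push this cancellation much further by tailoring the coupling to the observable.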


Subject(s)
Molecular Dynamics Simulation , Algorithms , Kinetics , Monte Carlo Method , Stochastic Processes