1.
Eur J Oper Res ; 304(1): 9-24, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-34803213

ABSTRACT

Operations researchers worldwide rely extensively on quantitative simulations to model alternative aspects of the COVID-19 pandemic. Proper uncertainty quantification and sensitivity analysis are fundamental to enrich the modeling process and communicate well-informed insights to decision-makers. We develop a methodology to identify key uncertainty drivers, analyze trends, and quantify interactions through an innovative combination of probabilistic sensitivity techniques and machine learning tools. We illustrate the approach by applying it to a representative of the family of susceptible-infectious-recovered (SIR) models recently used in the context of the COVID-19 pandemic. We focus on data on the early pandemic progression in Italy and the United States (the U.S.). We perform the analysis for both correlated and uncorrelated inputs. Results show that the quarantine rate and intervention time are the key uncertainty drivers, have opposite effects on the total number of infected individuals, and are involved in the most relevant interactions.
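A minimal numerical sketch of the setting described above: a toy SIR variant with an assumed quarantine flow activated at an intervention time, with Monte Carlo variation of the quarantine rate and intervention time. Parameter ranges and dynamics are illustrative assumptions, not the paper's calibrated model.

import numpy as np

def sir_quarantine(beta=0.3, gamma=0.1, q_rate=0.05, t_intervene=30,
                   n0=1e6, i0=100, days=300, dt=0.1):
    """Euler integration of a toy SIR model with a quarantine flow
    S, I -> Q activated after an intervention time (illustrative only)."""
    s, i, r, q = n0 - i0, i0, 0.0, 0.0
    total_infected = i0
    for step in range(int(days / dt)):
        t = step * dt
        qr = q_rate if t >= t_intervene else 0.0
        new_inf = beta * s * i / n0          # infection flow
        ds = -new_inf - qr * s
        di = new_inf - gamma * i - qr * i
        dr = gamma * i
        dq = qr * (s + i)
        s, i, r, q = s + ds * dt, i + di * dt, r + dr * dt, q + dq * dt
        total_infected += new_inf * dt
    return total_infected

rng = np.random.default_rng(0)
n = 2000
q_rates = rng.uniform(0.0, 0.15, n)       # uncertain quarantine rate (assumed range)
t_ints = rng.uniform(10, 60, n)           # uncertain intervention time in days (assumed range)
y = np.array([sir_quarantine(q_rate=a, t_intervene=b) for a, b in zip(q_rates, t_ints)])

# crude screening: rank correlations of each input with the total number of infections
for name, x in [("quarantine rate", q_rates), ("intervention time", t_ints)]:
    rho = np.corrcoef(np.argsort(np.argsort(x)), np.argsort(np.argsort(y)))[0, 1]
    print(f"{name}: rank correlation with total infected = {rho:+.2f}")

The opposite signs of the two rank correlations echo the abstract's finding that the quarantine rate and the intervention time push total infections in opposite directions.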

2.
PLoS One ; 17(4): e0266823, 2022.
Article in English | MEDLINE | ID: mdl-35452469

ABSTRACT

In this contribution, we present an innovative data-driven model to reconstruct a reliable temporal pattern for time-lagged statistical monetary figures. Our research cuts across several domains regarding the production of robust economic inferences and the bridging of top-down aggregated information from central databases with disaggregated information obtained from local sources or national statistical offices. Our test bed case study is the European Regional Development Fund (ERDF). The application we discuss deals with the reported time lag between the local expenditures of ERDF by beneficiaries in Italian regions and the corresponding payments reported in the European Commission database. Our model reconstructs the timing of these local expenditures by back-dating the observed European Commission reimbursements. The inferred estimates are then validated against the expenditures reported by the Italian National Managing Authorities (NMAs) in terms of cumulative monetary difference. The lower cumulative yearly distance of our modelled expenditures compared to the official European Commission payments confirms the robustness of our model. Using sensitivity analysis, we also analyse the relative influence of the modelling parameters on the cumulative distance between the modelled and reported expenditures. The parameters with the greatest influence on the uncertainty of this distance are the following: first, how the non-clearly regionalised expenditures are attributed to individual regions; and second, the number of backward years over which the residuals of the yearly payments are spread. In general, the distance between the modelled and reported expenditures can be further reduced by fixing these parameters. However, the gain is only marginal for some regions. The present study paves the way for modelling exercises aimed at more reliable estimates of the expenditures on the ground by the ultimate beneficiaries of European funds. Additionally, the output databases can contribute to enhancing the reliability of econometric studies on the effectiveness of European Union (EU) funds.


Subject(s)
Health Expenditures; Policy; European Union; Italy; Reproducibility of Results
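A stylised sketch of the back-dating idea in the entry above: each reported yearly reimbursement is spread over the current and a number of preceding years. The uniform lag weights, the number of backward years, and the toy payment figures are assumptions for illustration, not the calibrated ERDF model.

import numpy as np

def backdate(payments, n_back=2, weights=None):
    """Spread each yearly reimbursement over the current and the
    n_back preceding years (assumed uniform lag profile)."""
    if weights is None:
        weights = np.ones(n_back + 1) / (n_back + 1)
    expend = np.zeros(len(payments))
    for t, p in enumerate(payments):
        for lag, w in enumerate(weights):
            idx = max(t - lag, 0)           # lags before the first year accumulate there
            expend[idx] += w * p
    return expend

ec_payments = np.array([10., 40., 60., 80., 50.])    # toy EC reimbursements by year
estimated = backdate(ec_payments, n_back=2)
print("reconstructed expenditures:", np.round(estimated, 1))
print("cumulative total preserved:", np.isclose(estimated.sum(), ec_payments.sum()))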
3.
Risk Anal ; 42(2): 304-333, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35274350

ABSTRACT

This work investigates aspects of the global sensitivity analysis of computer codes when alternative plausible distributions for the model inputs are available to the analyst. Analysts may decide to explore results under each distribution or to aggregate the distributions, assigning, for instance, a mixture. In the first case, we lose uniqueness of the sensitivity measures, and in the second case, we lose independence even if the model inputs are independent under each of the assigned distributions. Removing the unique distribution assumption impacts the mathematical properties at the basis of variance-based sensitivity analysis and has consequences for result interpretation as well. We analyze the technical aspects in detail. From this investigation, we derive corresponding recommendations for the risk analyst. We show that an approach based on the generalized functional ANOVA expansion remains theoretically grounded in the presence of a mixture distribution. Numerically, we base the construction of the generalized functional ANOVA effects on the diffeomorphic modulation under observable response preserving homotopy regression. Our application addresses the calculation of variance-based sensitivity measures for Nordhaus' well-known DICE model, when its inputs are assigned a mixture distribution. A discussion of implications for the risk analyst and future research perspectives closes the work.
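A quick numerical check of one point made above: a mixture of two distributions induces dependence among the inputs even when they are independent under each component. The two-component Gaussian mixture and its 50/50 weights are assumptions chosen only to make the effect visible.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def sample_p1(m):   # plausible distribution 1: independent inputs centred at 0
    return rng.normal(0.0, 1.0, (m, 2))

def sample_p2(m):   # plausible distribution 2: independent inputs centred at 3
    return rng.normal(3.0, 1.0, (m, 2))

# 50/50 mixture of the two (assumed weights)
z = rng.random(n) < 0.5
x = np.where(z[:, None], sample_p1(n), sample_p2(n))

print("correlation within component 1:", np.round(np.corrcoef(sample_p1(n).T)[0, 1], 3))
print("correlation within component 2:", np.round(np.corrcoef(sample_p2(n).T)[0, 1], 3))
print("correlation under the mixture:  ", np.round(np.corrcoef(x.T)[0, 1], 3))

The within-component correlations are essentially zero, while the mixture produces a clearly positive correlation: averaging over the mixture indicator destroys independence.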

4.
Nat Commun ; 12(1): 4525, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34312386

ABSTRACT

A sustainable management of global freshwater resources requires reliable estimates of the water demanded by irrigated agriculture. This has been attempted by the Food and Agriculture Organization (FAO) through country surveys and censuses, or through Global Models, which compute irrigation water withdrawals with sub-models on crop types and calendars, evapotranspiration, irrigation efficiencies, weather data and irrigated areas, among others. Here we demonstrate that these strategies err on the side of excess complexity, as the values reported by FAO and outputted by Global Models are largely conditioned by irrigated areas and their uncertainty. Modelling irrigation water withdrawals as a function of irrigated areas yields almost the same results in a much more parsimonious way, while permitting the exploration of all model uncertainties. Our work offers a robust and more transparent approach to estimating one of the most important indicators guiding our policies on water security worldwide.
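A toy illustration of the parsimonious strategy above, fitting withdrawals as proportional to irrigated area. All numbers are synthetic and the one-parameter proportional form is an assumed stand-in, not the paper's fitted model or FAO data.

import numpy as np

rng = np.random.default_rng(2)

# synthetic country-level data (illustrative, not FAO figures)
irrigated_area = rng.uniform(0.1, 30.0, 50)              # Mha
withdrawals = 8.0 * irrigated_area * rng.lognormal(0.0, 0.2, 50)   # km3, assumed intensity 8 km3/Mha

# one-parameter model: withdrawals proportional to irrigated area (least squares through the origin)
k_hat = np.sum(irrigated_area * withdrawals) / np.sum(irrigated_area ** 2)
corr = np.corrcoef(irrigated_area, withdrawals)[0, 1]
print(f"fitted intensity = {k_hat:.1f} km3/Mha, correlation = {corr:.2f}")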

5.
Risk Anal ; 40(12): 2639-2660, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32722850

ABSTRACT

Quantitative models support investigators in several risk analysis applications. The calculation of sensitivity measures is an integral part of this analysis. However, it becomes a computationally challenging task, especially when the number of model inputs is large and the model output is spread over orders of magnitude. We introduce and test a new method for the estimation of global sensitivity measures. The new method relies on the intuition of exploiting the empirical cumulative distribution function of the simulator output. This choice allows the estimators of global sensitivity measures to be based on numbers between 0 and 1, thus fighting the curse of sparsity. For density-based sensitivity measures, we devise an approach based on moving averages that bypasses kernel-density estimation. We compare the new method to approaches for calculating popular risk analysis global sensitivity measures, as well as to approaches for computing dependence measures gathering increasing interest in the machine learning and statistics literature (the Hilbert-Schmidt independence criterion and distance covariance). The comparison also involves the number of operations needed to obtain the estimates, an aspect often neglected in global sensitivity studies. We let the estimators undergo several tests, first with the wing-weight test case, then with a computationally challenging code with up to k = 30,000 inputs, and finally with the traditional Level E benchmark code.
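A rough sketch of the general idea: replace an output spread over orders of magnitude by its empirical CDF values, then estimate a conditional-mean-based sensitivity per input from a single sample, with a moving average in place of a kernel fit. The test function, binning scheme, and smoothing window below are assumptions for illustration, not the paper's estimators.

import numpy as np

rng = np.random.default_rng(3)
n, k = 20_000, 5
x = rng.random((n, k))
y = np.exp(5 * x[:, 0]) + x[:, 1] ** 2 + 0.1 * x[:, 2]   # output spread over orders of magnitude

# Step 1: replace y by its empirical CDF values (ranks rescaled to (0, 1))
u = (np.argsort(np.argsort(y)) + 0.5) / n

# Step 2: first-order sensitivity of u to each input via conditional means on bins of x_i,
# smoothed with a moving average over neighbouring bins instead of a kernel density fit
def first_order_on_ecdf(xi, u, bins=50, window=3):
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_mean = np.array([u[idx == b].mean() for b in range(bins)])
    cond_mean = np.convolve(cond_mean, np.ones(window) / window, mode="same")
    return np.var(cond_mean) / np.var(u)

for i in range(k):
    print(f"x{i+1}: first-order index on the ECDF scale = {first_order_on_ecdf(x[:, i], u):.3f}")

Working on the (0, 1) scale keeps the estimator insensitive to the heavy spread of the raw output values, which is the intuition the abstract describes.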

6.
Risk Anal ; 38(11): 2459-2477, 2018 11.
Article in English | MEDLINE | ID: mdl-29924879

ABSTRACT

In probabilistic risk assessment, attention is often focused on the expected value of a risk metric. The sensitivity of this expectation to changes in the parameters of the distribution characterizing uncertainty in the inputs becomes of interest. Approaches based on differentiation encounter limitations when (i) distributional parameters are expressed in different units or (ii) the analyst wishes to transfer sensitivity insights from individual parameters to parameter groups, when alternating between different levels of a probabilistic safety assessment model. Moreover, the analyst may also wish to examine the effect of assuming independence among inputs. This work proposes an approach based on the differential importance measure, which solves these issues. Estimation aspects are discussed in detail, in particular the problem of obtaining all sensitivity measures from a single Monte Carlo sample, thus avoiding potentially costly model runs. The approach is illustrated through an analytical example, highlighting how it can be used to assess the impact of removing the independence assumption. An application to the probabilistic risk assessment model of the Advanced Test Reactor large loss of coolant accident sequence concludes the work.
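A hedged sketch of the single-sample idea discussed above: derivatives of an expected risk metric with respect to distribution parameters are obtained from one Monte Carlo sample, here via the score-function (likelihood-ratio) identity for exponential rates, and then combined into a differential-importance-style ratio. The toy risk metric, the exponential parametrisation, and the proportional-change convention are assumptions, not the ATR PSA model or necessarily the paper's estimation scheme.

import numpy as np

rng = np.random.default_rng(4)
lam = np.array([2.0, 1.0, 0.5])            # assumed exponential rate parameters
n = 200_000
x = rng.exponential(1.0 / lam, size=(n, 3))

def risk_metric(x):
    # toy risk metric (not a real PSA model): probability that a demand exceeds a capacity
    return (x[:, 0] + 0.5 * x[:, 1] * x[:, 2] > 3.0).astype(float)

y = risk_metric(x)

# score-function identity: dE[Y]/dlambda_i = E[ Y * (1/lambda_i - X_i) ], all from one sample
score = 1.0 / lam - x
dE = (y[:, None] * score).mean(axis=0)

# differential importance under proportional parameter changes (one possible convention)
dim = lam * dE / np.sum(lam * dE)
print("E[Y] =", round(y.mean(), 4))
print("dE[Y]/dlambda =", np.round(dE, 4))
print("DIM (proportional changes):", np.round(dim, 3))

Because the DIM entries are normalised shares, they remain comparable even though the rate parameters are expressed on different scales, which is one of the motivations stated in the abstract.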

7.
Risk Anal ; 38(8): 1541-1558, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29384208

ABSTRACT

Risk analysts are often concerned with identifying key safety drivers, that is, the systems, structures, and components (SSCs) that matter the most to safety. SSC importance is assessed both in the design phase (i.e., before a system is built) and in the implementation phase (i.e., when the system has been built) using the same importance measures. However, in the design phase, it would also be necessary to appreciate whether the failure/success of a given SSC can cause the overall decision to change from accept to reject (decision significance). This work addresses the search for the conditions under which SSCs that are safety significant are also decision significant. To address this issue, the work proposes the notion of a θ-importance measure. We study in detail the relationships among risk importance measures to determine which properties guarantee that the ranking of SSCs does not change before and after the decision is made. An application to a probabilistic safety assessment model developed at NASA illustrates the risk management implications of our work.
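A toy illustration of the safety-significance versus decision-significance distinction: risk achievement and risk reduction worths are computed for each basic event, and the event is flagged as decision significant if setting it to occur pushes the risk metric across an acceptance threshold. The event probabilities, the structure function, and the threshold are assumed for illustration, not the NASA model.

import numpy as np

p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}       # assumed basic-event probabilities
threshold = 2e-3                             # assumed acceptable risk level

def risk(p):
    # toy structure (illustrative): system fails if (A and B) or C
    return p["A"] * p["B"] + p["C"] - p["A"] * p["B"] * p["C"]

base = risk(p)
print(f"baseline risk {base:.2e} -> decision: {'accept' if base <= threshold else 'reject'}")

for e in p:
    raw = risk({**p, e: 1.0}) / base         # risk achievement worth (event certain to occur)
    rrw = base / risk({**p, e: 0.0})         # risk reduction worth (event never occurs)
    flips = (risk({**p, e: 1.0}) > threshold) != (base > threshold)
    print(f"{e}: RAW = {raw:8.1f}  RRW = {rrw:6.1f}  decision-significant: {flips}")

In this toy setup event B has a non-trivial RAW yet cannot flip the accept decision on its own, while A and C can: safety significance and decision significance need not coincide.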

8.
Risk Anal ; 37(10): 1828-1848, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28095589

ABSTRACT

Risk-informed decision making is often accompanied by the specification of an acceptable level of risk. Such a target level is compared against the value of a risk metric, usually computed through a probabilistic safety assessment model, to decide about the acceptability of a given design, the launch of a space mission, etc. Importance measures complement the decision process with information about the risk/safety significance of events. However, importance measures do not tell us whether the occurrence of an event can change the overarching decision. By linking value of information and importance measures for probabilistic risk assessment models, this work obtains a value-of-information-based importance measure that brings together the risk metric, risk importance measures, and the risk threshold in one expression. The new importance measure does not impose additional computational burden because it can be calculated from our knowledge of the risk achievement and risk reduction worth, and it complements the insights delivered by these importance measures. Several properties are discussed, including the joint decision worth of basic event groups. The application to the large loss of coolant accident sequence of the Advanced Test Reactor helps illustrate the risk analysis insights.

9.
Risk Anal ; 36(10): 1871-1895, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26857789

ABSTRACT

Measures of sensitivity and uncertainty have become an integral part of risk analysis. Many such measures have a conditional probabilistic structure, for which a straightforward Monte Carlo estimation procedure has a double-loop form. Recently, a more efficient single-loop procedure has been introduced, and consistency of this procedure has been demonstrated separately for particular measures, such as those based on variance, density, and information value. In this work, we give a unified proof of single-loop consistency that applies to any measure satisfying a common rationale. This proof is not only more general but invokes less restrictive assumptions than heretofore in the literature, allowing for the presence of correlations among model inputs and of categorical variables. We examine numerical convergence of such an estimator under a variety of sensitivity measures. We also examine its application to a published medical case study.
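A small sketch contrasting the double-loop and single-loop (given-data) estimation strategies discussed above, using a first-order variance-based measure of the Ishigami test function as an assumed example; the paper's result applies to a much broader class of conditional measures.

import numpy as np

a, b = 7.0, 0.1
def ishigami(x):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(5)

# double loop: for each fixed value of x1, average over the remaining inputs (costly)
outer, inner = 200, 200
x1_fix = rng.uniform(-np.pi, np.pi, outer)
cond = np.empty(outer)
for j, v in enumerate(x1_fix):
    xi = rng.uniform(-np.pi, np.pi, (inner, 3))
    xi[:, 0] = v
    cond[j] = ishigami(xi).mean()
y_ref = ishigami(rng.uniform(-np.pi, np.pi, (outer * inner, 3)))
s1_double = cond.var() / y_ref.var()

# single loop: one sample of the same total size, conditional means taken on bins of x1
x = rng.uniform(-np.pi, np.pi, (outer * inner, 3))
y = ishigami(x)
bins = 50
idx = np.digitize(x[:, 0], np.linspace(-np.pi, np.pi, bins + 1)[1:-1])
cond_means = np.array([y[idx == m].mean() for m in range(bins)])
s1_single = cond_means.var() / y.var()

print(f"first-order index of x1: double loop {s1_double:.3f}, single loop {s1_single:.3f}")

Both estimates converge to the same quantity, but the single-loop version reuses one plain Monte Carlo sample instead of nesting simulations.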

10.
Risk Anal ; 34(2): 271-93, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24111855

ABSTRACT

Integrated assessment models offer crucial support to decision-makers in climate policy making. For a full understanding and corroboration of model results, analysts ought to identify the exogenous variables that influence the model results the most (key drivers), appraise the relevance of interactions, and determine the direction of change associated with the simultaneous variation of uncertain variables. We show that such information can be directly extracted from the data set produced by Monte Carlo simulations. Our discussion is guided by the application to the well-known DICE model of William Nordhaus. The proposed methodology allows analysts to draw robust insights into the dependence of future atmospheric temperature, global emissions, and carbon costs and taxes on the model's exogenous variables.


Subject(s)
Climate Change; Models, Theoretical; Risk Assessment/methods; Uncertainty; Computer Simulation; Monte Carlo Method
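A sketch of the general idea of reading key drivers and direction of change directly off an existing Monte Carlo dataset, as described in the entry above. A toy surrogate stands in for DICE; the variable names, ranges, and response surface are invented for illustration.

import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# toy stand-in for an integrated assessment model (not DICE itself)
climate_sens = rng.uniform(1.5, 4.5, n)
growth = rng.uniform(0.01, 0.03, n)
abatement_cost = rng.uniform(0.5, 1.5, n)
temp_2100 = (1.0 + 0.8 * climate_sens + 30.0 * growth + 2.0 * climate_sens * growth
             - 0.1 * abatement_cost + rng.normal(0.0, 0.05, n))

data = {"climate sensitivity": climate_sens, "output growth": growth,
        "abatement cost": abatement_cost}

def first_order(x, y, bins=40):
    idx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1))[1:-1])
    means = np.array([y[idx == m].mean() for m in range(bins)])
    return means.var() / y.var()

def spearman(x, y):
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

for name, x in data.items():
    print(f"{name:20s} first-order = {first_order(x, temp_2100):.3f}  "
          f"direction (Spearman) = {spearman(x, temp_2100):+.2f}")

The first-order indices separate key drivers from minor ones, and the sign of the rank correlation gives the direction of change, all post-processed from the same sample.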
11.
Risk Anal ; 31(3): 404-28, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21070300

ABSTRACT

Moment independent methods for the sensitivity analysis of model output are attracting growing attention among both academics and practitioners. However, the lack of benchmarks against which to compare numerical strategies forces one to rely on ad hoc experiments in estimating the sensitivity measures. This article introduces a methodology that allows one to obtain moment independent sensitivity measures analytically. We illustrate the procedure by implementing four test cases with different model structures and model input distributions. Numerical experiments are performed at increasing sample size to check convergence of the sensitivity estimates to the analytical values.

12.
Risk Anal ; 30(3): 385-99, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20199656

ABSTRACT

In risk analysis problems, the decision-making process is supported by the utilization of quantitative models. Assessing the relevance of interactions provides essential information in the interpretation of model results. With such knowledge, analysts and decision-makers are able to understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are oftentimes large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to utilize one-parameter-at-a-time sensitivity methods, which prevent one from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models by varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that allows one to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss of coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, allow one to quantify the relevance of interactions, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their joint action.


Subject(s)
Models, Biological; Safety; Humans; Risk Assessment/methods
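A minimal sketch of the finite-change idea in the entry above: one-at-a-time runs give the individual effects, and the residual with respect to the total change quantifies the overall interaction contribution. The toy model and parameter changes are assumptions, not the plant PSA model.

import numpy as np

def model(x):
    # toy nonadditive model (illustrative): product of two factors plus a third
    return x[0] * x[1] + x[2]

x_base = np.array([1.0, 2.0, 0.5])      # assumed nominal parameter values
x_new  = np.array([1.5, 2.5, 1.0])      # assumed changed parameter values

total_change = model(x_new) - model(x_base)

# one-at-a-time runs: k + 1 model evaluations in total
individual = np.empty(3)
for i in range(3):
    x = x_base.copy()
    x[i] = x_new[i]
    individual[i] = model(x) - model(x_base)

interaction = total_change - individual.sum()   # residual attributable to interactions
print("individual effects (direction of change per factor):", individual)
print("sum of individual effects:", individual.sum())
print("total change:", total_change)
print("interaction contribution:", round(interaction, 6))

Here the individual effects sum to 2.0 while the total change is 2.25; the 0.25 residual is exactly the interaction between the first two factors, recovered without any simultaneous variation.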
13.
Risk Anal ; 28(3): 667-80, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18643824

ABSTRACT

In this work, we introduce a generalized rationale for local sensitivity analysis (SA) methods that allows one to solve the problems connected with input constraints. Several models in use in the risk analysis field are characterized by the presence of deterministic relationships among the input parameters. However, SA issues related to the presence of constraints have been dealt with mainly in a heuristic fashion. We start with a systematic analysis of the effects of constraints. The findings can be summarized in the following three effects. (i) Constraints make it impossible to vary one parameter while keeping all others fixed. (ii) The model output becomes insensitive to a parameter if a constraint is solved for that parameter. (iii) Sensitivity analysis results depend on which parameter is selected as dependent. The explanation of these effects is found by proposing a result that leads to a natural extension of the local SA rationale introduced in Helton (1993). We then extend the definitions of the Birnbaum, criticality, and differential importance measures to the constrained case. In addition, a procedure is introduced that allows one to obtain constrained sensitivity results at the same cost as in the absence of constraints. The application to a nonbinary event tree concludes the article, providing a numerical illustration of the above findings.
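A small numerical illustration of effect (iii): under an assumed constraint x1 + x2 + x3 = 1, the local sensitivity to x1 changes with the parameter chosen as dependent. The model and the constraint are toy assumptions, not the event tree of the application.

import numpy as np

def f(x1, x2, x3):
    return x1 * x2 + 2.0 * x3

x0 = np.array([0.2, 0.3, 0.5])          # a point satisfying the constraint
h = 1e-6

# choice A: treat x3 as dependent (x3 = 1 - x1 - x2) and differentiate with respect to x1
def f_elim_x3(x1, x2):
    return f(x1, x2, 1.0 - x1 - x2)
d_x1_A = (f_elim_x3(x0[0] + h, x0[1]) - f_elim_x3(x0[0], x0[1])) / h

# choice B: treat x2 as dependent (x2 = 1 - x1 - x3) and differentiate with respect to x1
def f_elim_x2(x1, x3):
    return f(x1, 1.0 - x1 - x3, x3)
d_x1_B = (f_elim_x2(x0[0] + h, x0[2]) - f_elim_x2(x0[0], x0[2])) / h

print(f"sensitivity of y to x1 with x3 dependent: {d_x1_A:+.3f}")
print(f"sensitivity of y to x1 with x2 dependent: {d_x1_B:+.3f}")

The two derivatives differ markedly because perturbing x1 forces a compensating change in whichever parameter is treated as dependent.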

14.
Risk Anal ; 28(4): 983-1001, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18554270

ABSTRACT

In this work, we study the effect of epistemic uncertainty on the ranking and categorization of elements of probabilistic safety assessment (PSA) models. We show that, while in a deterministic setting a PSA element belongs to a given category unambiguously, in the presence of epistemic uncertainty, a PSA element belongs to a given category only with a certain probability. We propose an approach to estimate these probabilities, showing that knowledge of these probabilities allows one to appreciate "the sensitivity of component categorizations to uncertainties in the parameter values" (U.S. NRC Regulatory Guide 1.174). We investigate the meaning and utilization of an assignment method based on the expected value of importance measures. We discuss the problem of evaluating changes in quality assurance, maintenance activities prioritization, etc. in the presence of epistemic uncertainty. We show that the inclusion of epistemic uncertainty in the evaluation makes it necessary to evaluate changes through their effect on PSA model parameters. We propose a categorization of parameters based on the Fussell-Vesely and differential importance (DIM) measures. In addition, issues arise in the calculation of the expected value of the joint importance measure when evaluating changes that affect groups of components. We illustrate that this problem can be solved using DIM. A numerical application to a case study concludes the work.


Subject(s)
Knowledge; Models, Theoretical; Probability; Safety; Uncertainty
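A hedged sketch of the categorization probabilities discussed in the entry above: the Fussell-Vesely measure of each basic event is recomputed over a sample of the epistemically uncertain parameters, and the probability of exceeding an assumed categorization threshold is recorded. The toy PSA structure, the lognormal spreads, and the threshold are assumptions.

import numpy as np

rng = np.random.default_rng(7)
threshold_fv = 0.1                         # assumed categorization threshold on Fussell-Vesely

def risk(p):
    # toy PSA risk metric: (A and B) or (C and D)
    return p[0] * p[1] + p[2] * p[3] - p[0] * p[1] * p[2] * p[3]

def fussell_vesely(p, i):
    p0 = p.copy()
    p0[i] = 0.0
    return 1.0 - risk(p0) / risk(p)

# epistemic uncertainty on the basic-event probabilities (assumed lognormal spread, capped at 1)
medians = np.array([1e-3, 2e-3, 5e-4, 5e-2])
n = 20_000
samples = np.minimum(medians * rng.lognormal(0.0, 0.7, (n, 4)), 1.0)

labels = "ABCD"
for i in range(4):
    fv = np.array([fussell_vesely(p, i) for p in samples])
    prob_high = np.mean(fv > threshold_fv)
    print(f"event {labels[i]}: P(FV > {threshold_fv}) = {prob_high:.2f}  "
          f"(point-value FV = {fussell_vesely(medians, i):.3f})")

With point-value parameters each event lands in one category; under epistemic uncertainty the same event is "highly significant" only with some probability, which is the quantity the abstract proposes to estimate.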
15.
Risk Anal ; 26(5): 1349-61, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17054536

ABSTRACT

Uncertainty importance measures are quantitative tools aiming at identifying the contribution of uncertain inputs to output uncertainty. Their application ranges from food safety (Frey & Patil, 2002) to hurricane losses (Iman et al., 2005a, 2005b). The results and indications an analyst derives depend on the method selected for the study. In this work, we investigate the assumptions at the basis of various indicator families to discuss the information they convey to the analyst/decision-maker. We start with nonparametric techniques, and then present variance-based methods. By means of an example we show that output variance does not always reflect a decision-maker's state of knowledge of the inputs. We then examine the use of moment-independent approaches to global sensitivity analysis, i.e., techniques that look at the entire output distribution without a specific reference to its moments. Numerical results demonstrate that both moment-independent and variance-based indicators agree in identifying noninfluential parameters. However, differences in the ranking of the most relevant factors show that the inputs that influence the variance the most are not necessarily the ones that influence the output uncertainty distribution the most.


Subject(s)
Risk Assessment; Uncertainty; Analysis of Variance; Decision Making; Humans; Models, Statistical; Models, Theoretical; Regression Analysis; Research Design; Sensitivity and Specificity; Weights and Measures
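A rough sketch comparing the two indicator families discussed in the entry above: a variance-based first-order index and a moment-independent (density-based) delta estimated by class partitioning. The toy model, binning choices, and estimator details are assumptions; the sketch only illustrates that both families flag the same non-influential input.

import numpy as np

rng = np.random.default_rng(8)
n, k = 100_000, 3
x = rng.random((n, k))

# toy model: x3 barely matters, x1 and x2 shape the output distribution in different ways
y = np.exp(4 * x[:, 0]) * (1 + 0.5 * np.sin(6 * x[:, 1])) + 0.01 * x[:, 2]

y_bins = np.quantile(y, np.linspace(0, 1, 51))          # common output bins
p_marg, _ = np.histogram(y, bins=y_bins)
p_marg = p_marg / n

def delta(xi, classes=30):
    """Moment-independent delta via class partitioning (one simple estimator)."""
    edges = np.quantile(xi, np.linspace(0, 1, classes + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, classes - 1)
    d = 0.0
    for c in range(classes):
        sel = idx == c
        p_cond, _ = np.histogram(y[sel], bins=y_bins)
        d += sel.mean() * 0.5 * np.abs(p_marg - p_cond / sel.sum()).sum()
    return d

def first_order(xi, bins=30):
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == m].mean() for m in range(bins)])
    return means.var() / y.var()

for i in range(k):
    print(f"x{i+1}: variance-based S = {first_order(x[:, i]):.3f}   delta = {delta(x[:, i]):.3f}")

Both indicators are close to zero for the non-influential x3, while the relative weight they give to x1 and x2 can differ, mirroring the abstract's observation that variance-driven and distribution-driven rankings need not coincide.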