1.
WIREs Water ; 6(4): e1353, 2019.
Article in English | MEDLINE | ID: mdl-31423301

ABSTRACT

A wide variety of processes controls the time of occurrence, duration, extent, and severity of river floods. Classifying flood events by their causative processes may improve the accuracy of local and regional flood frequency estimates and support the detection and interpretation of changes in flood occurrence and magnitudes. This paper provides a critical review of existing causative classifications of instrumental and preinstrumental series of flood events, discusses their validity and applications, and identifies opportunities for moving toward more comprehensive approaches. So far, no unified definition of the causative mechanisms of flood events exists. Existing frameworks for classifying instrumental and preinstrumental series of flood events adopt different perspectives: hydroclimatic (large-scale circulation patterns and atmospheric state at the time of the event), hydrological (catchment-scale precipitation patterns and antecedent catchment state), and hydrograph-based (indirectly considering generating mechanisms through their effects on hydrograph characteristics). All of these approaches aim to capture the flood-generating mechanisms and are useful for characterizing flood processes at various spatial and temporal scales. However, uncertainty analyses with respect to indicators, classification methods, and data to assess the robustness of the classification are rarely performed, which limits transferability across different geographic regions. It is argued that more rigorous testing is needed. There are opportunities for extending classification methods to include indicators of the space-time dynamics of rainfall, antecedent wetness, and routing effects, which will make the classification schemes even more useful for understanding and estimating floods. This article is categorized under:
Science of Water > Water Extremes
Science of Water > Hydrological Processes
Science of Water > Methods
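As a concrete illustration of the hydrograph-based perspective described above, the sketch below assigns a coarse causative class to an individual flood event from a few simple indicators. The indicator names, thresholds, and class labels are illustrative assumptions, not the specific classification schemes reviewed in the paper.

```python
# Hypothetical sketch of a hydrograph-based causative classification.
# Indicators, thresholds, and labels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FloodEvent:
    month: int                 # month of peak flow (1-12)
    rise_time_h: float         # time from start of rise to peak, in hours
    antecedent_wetness: float  # normalized antecedent soil moisture index, 0-1

def classify_event(event: FloodEvent) -> str:
    """Assign a coarse causative class from simple hydrograph/state indicators."""
    if event.month in (3, 4, 5) and event.antecedent_wetness > 0.7:
        return "snowmelt / rain-on-snow flood"
    if event.rise_time_h < 6:
        return "short-rain (flash) flood"
    if event.antecedent_wetness > 0.6:
        return "long-rain flood on wet catchment"
    return "long-rain flood on dry catchment"

# Example: a fast-rising summer event on a dry catchment
print(classify_event(FloodEvent(month=7, rise_time_h=3.0, antecedent_wetness=0.2)))
```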

2.
Environ Sci Technol ; 51(13): 7502-7510, 2017 Jul 05.
Article in English | MEDLINE | ID: mdl-28613841

ABSTRACT

This paper demonstrates a maximum likelihood (ML)-based approach to derive representative ("best guess") contaminant concentrations from data with censored values (e.g., values below the detection limit). The method advances over existing techniques because it can estimate the proportion of measurements that are true zeros and can accommodate varying levels of censoring (e.g., sample-specific detection limits, changes in method detection limits over time). The ability of the method to estimate the proportion of true zeros is validated using precipitation data. The stability and flexibility of the method are demonstrated through stochastic simulation, a sensitivity analysis, and an unbiasedness analysis with varying numbers of significant digits. A key aspect of this paper is the application of the statistical analysis to real-site rock core contaminant concentration data collected within a plume at two locations using high-resolution, depth-discrete sampling. Comparison of the representative concentration values at each location along the plume centerline shows a larger number of true zeros and generally lower concentrations at the downgradient location, consistent with the conceptual site model, leading to improved estimates of attenuation with distance and/or time and of the associated confidence; this is not achievable using deterministic methods. The practical relevance of the proposed method is that it provides an improved basis for evaluating change (spatial, temporal, or both) in environmental systems.
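To make the idea concrete, here is a minimal sketch of a maximum likelihood fit of this general kind: a point mass of true zeros mixed with a lognormal concentration distribution, with sample-specific detection limits for the non-detects. The parameterization, example data, and optimizer choice are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: ML fit of a zero-inflated lognormal with left-censored data.
# Example values and parameterization are assumptions for illustration only.
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(params, detects, nondetect_limits):
    """params = (logit_p, mu, log_sigma); p is the true-zero proportion."""
    logit_p, mu, log_sigma = params
    p = 1.0 / (1.0 + np.exp(-logit_p))   # keep p in (0, 1)
    sigma = np.exp(log_sigma)            # keep sigma positive
    # Detected values: must come from the lognormal component.
    ll_det = np.log1p(-p) + stats.norm.logpdf(np.log(detects), mu, sigma) - np.log(detects)
    # Non-detects: either a true zero, or a lognormal value below the sample's limit.
    cdf = stats.norm.cdf((np.log(nondetect_limits) - mu) / sigma)
    ll_cens = np.log(p + (1.0 - p) * cdf)
    return -(ll_det.sum() + ll_cens.sum())

# Hypothetical data: measured concentrations and per-sample detection limits.
detects = np.array([0.8, 1.2, 2.5, 0.4, 3.1])
nondetect_limits = np.array([0.1, 0.1, 0.05, 0.2])

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0],
                           args=(detects, nondetect_limits), method="Nelder-Mead")
logit_p, mu, log_sigma = result.x
print("estimated true-zero proportion:", 1.0 / (1.0 + np.exp(-logit_p)))
print("lognormal mu, sigma:", mu, np.exp(log_sigma))
```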


Subject(s)
Data Interpretation, Statistical; Environmental Pollutants; Limit of Detection
3.
Artif Intell Med ; 60(2): 113-21, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24503486

ABSTRACT

OBJECTIVE: The paper presents a diagnostic algorithm for classifying cardiac tachyarrhythmias in implantable cardioverter defibrillators (ICDs). The main aim was to develop an algorithm that could reduce the rate of inappropriate therapies, which are often observed with existing ICDs. To achieve low energy consumption, a critical factor for implantable medical devices, very low computational complexity of the algorithm was crucial. The study describes and validates such an algorithm and estimates its clinical value.

METHODOLOGY: The algorithm was based on heart rate variability (HRV) analysis. The inputs to our algorithm were the RR-interval (I), extracted from the raw intracardiac electrogram (EGM), and two additional HRV features referred to here as onset (ONS) and instability (INST). Six diagnostic categories were considered: ventricular fibrillation (VF), ventricular tachycardia (VT), sinus tachycardia (ST), detection artifacts and irregularities (including extrasystoles) (DAI), atrial tachyarrhythmias (ATF), and no tachycardia (i.e., normal sinus rhythm) (NT). The initial set of fuzzy rules, based on the distributions of I, ONS, and INST in the six categories, was optimized by means of a software tool for automatic rule assessment using simulated annealing. A training set of 74 EGM recordings was used during optimization, and the algorithm was validated on a separate set of 58 EGM recordings; all were real-life recordings stored in defibrillator memories. Additionally, the algorithm was tested on two sets of recordings from the PhysioBank databases: the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A custom CMOS integrated circuit implementing the diagnostic algorithm was designed in order to estimate the power consumption. A dedicated website providing public online access to the algorithm was created for testing.

RESULTS: The total number of events in our training and validation sets was 132. In total, 57 shocks and 28 antitachycardia pacing (ATP) therapies were delivered by the ICDs; 25 of the 57 shocks were unjustified: 7 for ST, 12 for DAI, and 6 for ATF. Our fuzzy rule-based diagnostic algorithm correctly recognized all episodes of VF and VT, except for one case in which VT was recognized as VF. In four cases, short-lasting, spontaneously terminating VT episodes were not detected (no therapy was needed in these cases, and the ICDs did not detect them either). In other words, an ICD driven by the fuzzy logic algorithm would have delivered one unjustified shock and correct therapies in all other cases. No adjustments of our algorithm to individual patients were needed in the tests. The sensitivity and specificity calculated from the results were 100% and 98%, respectively. In 126 ECG recordings from PhysioBank (about 30 min each), our algorithm incorrectly detected 4 episodes of VT that should instead have been classified as fast supraventricular tachycardias. The estimated power consumption of the dedicated integrated circuit implementing the algorithm was below 120 nW.

CONCLUSION: The paper presents a fuzzy logic-based control algorithm for ICDs. Its main advantages are its simplicity and its ability to reduce the rate of inappropriate therapies. The algorithm can work in real time (i.e., update the diagnosis after every RR-interval) with very limited computational resources.
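For illustration, the sketch below evaluates a handful of fuzzy rules over the three HRV features (RR-interval, onset, instability) and returns the category with the strongest activation. The membership functions, thresholds, and rules are illustrative assumptions; the paper's rule set was optimized on EGM recordings with simulated annealing, and the DAI category is omitted here for brevity.

```python
# Minimal sketch of fuzzy-rule classification from HRV features.
# Membership functions, thresholds, and rules are illustrative assumptions.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify_beat(rr_ms, onset, instability):
    """Return the diagnostic category with the highest rule activation."""
    # Membership degrees of the RR-interval (ms) in coarse rate classes.
    rr_vf = trapezoid(rr_ms, 0, 0, 240, 300)        # very fast -> VF-like
    rr_vt = trapezoid(rr_ms, 240, 300, 400, 460)    # fast -> VT-like
    rr_st = trapezoid(rr_ms, 400, 460, 600, 700)    # moderately fast -> ST-like
    sudden = trapezoid(onset, 0.15, 0.3, 1.0, 1.1)         # sudden rate change
    irregular = trapezoid(instability, 0.1, 0.2, 1.0, 1.1)  # beat-to-beat irregularity

    rules = {
        "VF": rr_vf,
        "VT": min(rr_vt, sudden),
        "ST": min(rr_st, 1.0 - sudden),
        "ATF": min(rr_vt, irregular),
        "NT": 1.0 - max(rr_vf, rr_vt, rr_st),
    }
    return max(rules, key=rules.get), rules

# Example: a fast, suddenly starting, regular rhythm is classified as VT-like.
label, activations = classify_beat(rr_ms=280, onset=0.6, instability=0.05)
print(label, activations)
```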


Subject(s)
Algorithms; Defibrillators, Implantable; Fuzzy Logic; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/physiopathology; Arrhythmias, Cardiac/therapy; Humans