Results 1 - 20 of 24
1.
J Expo Sci Environ Epidemiol ; 34(2): 345-355, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38145997

ABSTRACT

BACKGROUND: For healthcare workers, surface disinfection is a routine daily task. An assessment of the inhalation exposure to hazardous substances, in this case the disinfectant's active ingredients, is necessary to ensure workers' safety. However, deciding which exposure model is best for exposure assessment remains difficult. OBJECTIVE: The aim of the study was to evaluate the applicability of different exposure models for the disinfection of small surfaces in healthcare settings. METHODS: Measurements of the air concentration of active ingredients in disinfectants (ethanol, formaldehyde, glutaraldehyde, hydrogen peroxide, peroxyacetic acid), together with other exposure parameters, were recorded in a test chamber. The measurements were performed using personal and stationary air sampling. In addition, exposure modelling was performed using three deterministic models (unsteady 1-zone, ConsExpo and 2-component) and one modifying-factor model (Stoffenmanager®). Their estimates were compared with the measured values using various methods to assess model quality (such as accuracy and level of conservatism). RESULTS: The deterministic models showed overestimation, predominantly in the range of two- to fivefold relative to the measured data, and high conservatism for all active ingredients of disinfectants with the exception of ethanol. With Stoffenmanager®, an exposure distribution was estimated for ethanol that was in good accordance with the measured data. IMPACT STATEMENT: To date, workplace exposure assessments often involve expensive and time-consuming air measurements. Reliable exposure models can be used to assess occupational inhalation exposure to hazardous substances, in this case surface disinfectants. This study describes the applicability of three deterministic models and one modifying-factor model for the disinfection of small surfaces in healthcare settings, in direct comparison with the measurements performed, and will facilitate future exposure assessments at these workplaces.
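
As a point of reference for the deterministic approaches mentioned above, the sketch below implements a generic unsteady one-zone (well-mixed room) model. It is not the study's own implementation, and the emission rate, ventilation rate and room volume in the example are illustrative assumptions only.

```python
import numpy as np

def one_zone_concentration(t, G, Q, V, C0=0.0):
    """Unsteady well-mixed one-zone model (assumed generic form, not the study's code).

    t  : time since the start of emission (h)
    G  : emission rate of the active ingredient (mg/h)
    Q  : room ventilation rate (m^3/h)
    V  : room volume (m^3)
    C0 : initial air concentration (mg/m^3)
    Returns the air concentration C(t) in mg/m^3, assuming instant and perfect mixing.
    """
    steady_state = G / Q
    return steady_state + (C0 - steady_state) * np.exp(-Q * t / V)

# Illustrative values only: 50 mg/h released into a 30 m^3 room ventilated at 60 m^3/h.
times = np.linspace(0.0, 1.0, 13)            # first hour in 5-minute steps
print(one_zone_concentration(times, G=50.0, Q=60.0, V=30.0).round(3))
```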


Subject(s)
Disinfectants , Disinfection , Inhalation Exposure , Occupational Exposure , Occupational Exposure/analysis , Humans , Inhalation Exposure/analysis , Disinfectants/analysis , Disinfection/methods , Models, Theoretical , Air Pollutants, Occupational/analysis , Environmental Monitoring/methods
2.
Front Bioeng Biotechnol ; 11: 1104445, 2023.
Article in English | MEDLINE | ID: mdl-36741754

ABSTRACT

One of the most common sources of information in Synthetic Biology is the data coming from plate reader fluorescence measurements. These experiments provide a measure of the light emitted by a certain fluorescent molecule, such as the Green Fluorescent Protein (GFP). However, these measurements are generally expressed in arbitrary units and are affected by the measurement device gain. This limits the range of measurements in a single experiment and hampers the comparison of results among experiments. In this work, we describe PLATERO, a calibration protocol to express fluorescence measurements in concentration units of a reference fluorophore. The protocol removes the gain effect of the measurement device on the acquired data. In addition, the fluorescence intensity values are transformed into units of concentration using a Fluorescein calibration model. Both steps are expressed in a single mathematical expression that returns normalized, gain-independent, and comparable data, even if the acquisition was performed at different device gain levels. Most importantly, PLATERO embeds a Linearity and Bias Analysis that provides an assessment of the uncertainty of the model estimations, and a Reproducibility and Repeatability analysis that evaluates the sources of variability originating from the measurements and the equipment. All the functions used to build the model, apply it to new data, and perform the uncertainty and variability assessment are available in an open access repository.
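
The abstract describes two generic steps: gain normalization and a fluorescein calibration curve. The following sketch illustrates that idea only; the power-law gain response, the function names and the reference gain are assumptions and do not reproduce the actual PLATERO protocol or code.

```python
import numpy as np

# Minimal sketch under assumptions: (1) remove the device-gain effect with an
# assumed power-law gain response, (2) map gain-normalized arbitrary units (AU)
# to fluorescein-equivalent concentration with a linear calibration curve.

def fit_gain_exponent(gains, au_readings):
    """Fit log(AU) = const + b*log(gain) from one sample measured across a gain sweep."""
    b, _ = np.polyfit(np.log(gains), np.log(au_readings), 1)
    return b

def normalize_gain(au, gain, b, reference_gain=100):
    """Rescale a reading to the AU it would have shown at the reference gain."""
    return au * (reference_gain / gain) ** b

def fit_fluorescein_curve(known_concentrations, au_at_reference_gain):
    """Linear calibration curve from a fluorescein dilution series."""
    slope, intercept = np.polyfit(au_at_reference_gain, known_concentrations, 1)
    return slope, intercept

def to_concentration(au, gain, b, slope, intercept, reference_gain=100):
    return slope * normalize_gain(au, gain, b, reference_gain) + intercept
```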

3.
Sensors (Basel) ; 23(3)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772497

ABSTRACT

In this paper, the usability of feedforward and recurrent neural networks for fusion of data from impulse-radar sensors and depth sensors, in the context of healthcare-oriented monitoring of elderly persons, is investigated. Two methods of data fusion are considered, viz., one based on a multilayer perceptron and one based on a nonlinear autoregressive network with exogenous inputs. These two methods are compared with a reference method with respect to their capacity for decreasing the uncertainty of estimation of a monitored person's position and the uncertainty of estimation of several parameters enabling medical personnel to make useful inferences about the health condition of that person, viz., the number of turns made during walking, the travelled distance, and the mean walking speed. Both artificial neural networks were trained on synthetic data. The numerical experiments show the superiority of the method based on a nonlinear autoregressive network with exogenous inputs. This may be explained by the fact that for this type of network, the prediction of the person's position at each time instant is based on the position of that person at previous time instants.
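
A minimal sketch of the NARX idea described above, i.e. feeding the previously estimated positions back into the network together with the current sensor features. It uses scikit-learn's MLPRegressor on synthetic data; the feature dimensions, lag depth and data generation are illustrative assumptions, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, LAGS = 500, 3
features = rng.normal(size=(T, 6))       # stand-in for fused impulse-radar + depth features
positions = np.cumsum(rng.normal(scale=0.05, size=(T, 2)), axis=0)   # synthetic walking path

def build_narx_inputs(features, positions, lags=LAGS):
    """Input at time t = current sensor features + the `lags` previous positions."""
    X, y = [], []
    for t in range(lags, len(features)):
        X.append(np.concatenate([features[t], positions[t - lags:t].ravel()]))
        y.append(positions[t])
    return np.array(X), np.array(y)

X, y = build_narx_inputs(features, positions)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# At run time the loop is closed: predicted positions are fed back instead of ground truth.
history = list(positions[:LAGS])
for t in range(LAGS, T):
    x = np.concatenate([features[t], np.ravel(history[-LAGS:])]).reshape(1, -1)
    history.append(model.predict(x)[0])
```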


Subject(s)
Neural Networks, Computer , Radar , Humans , Aged , Gait , Walking , Delivery of Health Care
4.
Sensors (Basel) ; 22(16)2022 Aug 10.
Article in English | MEDLINE | ID: mdl-36015730

ABSTRACT

Human-centered applications using wearable sensors in combination with machine learning have received a great deal of attention in the last couple of years. At the same time, wearable sensors have also evolved and are now able to accurately measure physiological signals and are, therefore, suitable for detecting body reactions to stress. The field of machine learning, or more precisely, deep learning, has been able to produce outstanding results. However, in order to produce these good results, large amounts of labeled data are needed, which, in the context of physiological data related to stress detection, are a great challenge to collect, as they usually require costly experiments or expert knowledge. This usually results in an imbalanced and small dataset, which makes it difficult to train a deep learning algorithm. In recent studies, this problem is tackled with data augmentation via a Generative Adversarial Network (GAN). Conditional GANs (cGAN) are particularly suitable for this as they provide the opportunity to feed auxiliary information such as a class label into the training process to generate labeled data. However, it has been found that during the training process of GANs, different problems usually occur, such as mode collapse or vanishing gradients. To tackle the problems mentioned above, we propose a Long Short-Term Memory (LSTM) network, combined with a Fully Convolutional Network (FCN) cGAN architecture, with an additional diversity term to generate synthetic physiological data, which are used to augment the training dataset to improve the performance of a binary classifier for stress detection. We evaluated the methodology on our collected physiological measurement dataset, and we were able to show that using the method, the performance of an LSTM and an FCN classifier could be improved. Further, we showed that the generated data could not be distinguished from the real data any longer.
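
To illustrate how a class label can be injected into both sides of a conditional GAN for time-series data, here is a minimal PyTorch sketch of an LSTM-based conditional generator and a 1-D fully convolutional discriminator. The layer sizes, sequence length and number of classes are assumptions for illustration; the diversity term and the training loop from the paper are not reproduced.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 1-channel physiological signal of length 128, 2 classes (stress / no stress).
SEQ_LEN, NOISE_DIM, N_CLASSES = 128, 32, 2

class ConditionalLSTMGenerator(nn.Module):
    """Generator: noise sequence + class label -> synthetic signal, via an LSTM."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NOISE_DIM + N_CLASSES, 64, batch_first=True)
        self.out = nn.Linear(64, 1)

    def forward(self, noise, labels):
        # Repeat the one-hot label at every time step and concatenate it with the noise.
        lab = nn.functional.one_hot(labels, N_CLASSES).float()
        lab = lab.unsqueeze(1).expand(-1, SEQ_LEN, -1)
        h, _ = self.lstm(torch.cat([noise, lab], dim=-1))
        return self.out(h).squeeze(-1)                 # (batch, SEQ_LEN)

class ConditionalFCNDiscriminator(nn.Module):
    """Discriminator: 1-D convolutional network on the signal plus label channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1 + N_CLASSES, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, signal, labels):
        lab = nn.functional.one_hot(labels, N_CLASSES).float()
        lab = lab.unsqueeze(-1).expand(-1, -1, SEQ_LEN)
        return self.net(torch.cat([signal.unsqueeze(1), lab], dim=1))

# Shape check only: 4 noise sequences and labels produce 4 synthetic signals.
g = ConditionalLSTMGenerator()
fake = g(torch.randn(4, SEQ_LEN, NOISE_DIM), torch.tensor([0, 1, 0, 1]))
print(fake.shape)   # torch.Size([4, 128])
```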


Subject(s)
Machine Learning , Wearable Electronic Devices , Algorithms , Humans , Time Factors
5.
Sensors (Basel) ; 22(14)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35890979

ABSTRACT

Large spontaneous leakages in district heating networks (DHNs) require a separation of the affected network part, as interruption of the heat supply is imminent. Measurement data from 22 real events were analyzed for localization, but suitable results were not always achieved. In this paper, the reasons are investigated and a model for data evaluation (MoFoDatEv) is developed for further insights. It combines prior knowledge with a simplified physical model of the DHN's reaction to a large spontaneous leakage; no such model has existed so far. It determines the time point and the duration of the pressure drop of the pressure wave caused by such leakages. Both parameters and the evaluation time frame are optimized for each event separately. The quality assessment leads to a categorization of the events based on several parameters, and correlations between the pressure and the refill mass flow are found. A minimum leakage size is deduced for successful evaluation. Furthermore, MoFoDatEv can also be used for leakage localization directly, combining two steps from previous publications, so that more data contribute to the result. The approach is applied to artificial data to prove the model concept, as well as to real measurement data.
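
The core task described above, estimating the time point and the duration of a leakage-induced pressure drop, can be illustrated with a simple least-squares fit of an idealized ramp to a pressure trace. This is a generic sketch on invented values, not the MoFoDatEv model itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def pressure_drop(t, p0, dp, t_start, duration):
    """Idealized trace: constant p0, then a linear drop of size dp starting at
    t_start and lasting `duration` seconds, constant afterwards."""
    ramp = np.clip((t - t_start) / duration, 0.0, 1.0)
    return p0 - dp * ramp

# Hypothetical noisy pressure measurement at one sensor (all values invented).
t = np.linspace(0.0, 60.0, 601)
true = pressure_drop(t, p0=6.0, dp=0.8, t_start=20.0, duration=5.0)
measured = true + np.random.default_rng(1).normal(scale=0.02, size=t.size)

popt, _ = curve_fit(pressure_drop, t, measured, p0=[6.0, 0.5, 15.0, 3.0],
                    bounds=([0.0, 0.0, 0.0, 0.1], [10.0, 5.0, 60.0, 30.0]))
p0_fit, dp_fit, t_start_fit, duration_fit = popt
print(f"pressure drop starts at ~{t_start_fit:.1f} s and lasts ~{duration_fit:.1f} s")
```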


Subject(s)
Heating , Hot Temperature
6.
Front Chem ; 10: 926330, 2022.
Article in English | MEDLINE | ID: mdl-35665064

ABSTRACT

[This corrects the article DOI: 10.3389/fchem.2022.818974.].

7.
Life (Basel) ; 12(5)2022 May 11.
Article in English | MEDLINE | ID: mdl-35629386

ABSTRACT

Quantitative and binary results are ubiquitous in biology. Inasmuch as an underlying genetic basis for the observed variation in these observations can be assumed, it is pertinent to infer the evolutionary relationships among the entities being measured. I present a computer program, PhyloM, that takes measurement data or binary data as input, from which it directly generates a pairwise distance matrix that can then be subjected to the popular neighbor-joining (NJ) algorithm to produce a phylogenetic tree. PhyloM also has the option of nonparametric bootstrapping for testing the level of support for the inferred phylogeny. Finally, PhyloM also allows the user to root the tree on any desired branch. PhyloM was tested on Biolog Gen III growth data from isolates within the genus Chromobacterium and the closely related Aquitalea sp. This allowed a comparison with the genotypic tree inferred from whole-genome sequences for the same set of isolates. From this comparison, it was possible to infer parallel evolution. PhyloM is a stand-alone and easy-to-use computer program with a user-friendly graphical user interface that computes pairwise distances from measurement or binary data, which can then be used to infer a phylogeny with NJ via a utility in the same program. Alternatively, the distance matrix can be downloaded for use in another program for phylogenetic inference or other purposes. It does not require any software to be installed or computer code to be written, and it is open source. The executable and computer code are available on GitHub.
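
A rough sketch of the same workflow, computing a pairwise distance matrix from binary growth profiles and building a neighbor-joining tree, can be put together with Biopython. The Hamming distance and the toy profiles below are assumptions; PhyloM's own distance definition and its bootstrapping are not reproduced.

```python
import numpy as np
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor
from Bio import Phylo

# Hypothetical binary Biolog-style growth profiles (1 = growth on a substrate).
profiles = {
    "isolate_A": [1, 0, 1, 1, 0, 1],
    "isolate_B": [1, 0, 1, 0, 0, 1],
    "isolate_C": [0, 1, 0, 1, 1, 0],
    "isolate_D": [0, 1, 1, 1, 1, 0],
}
names = list(profiles)

def hamming(a, b):
    """Simple mismatch distance between two binary profiles (an assumed choice)."""
    return float(np.mean(np.array(a) != np.array(b)))

# Bio.Phylo expects a lower-triangular matrix including the zero diagonal.
matrix = [[hamming(profiles[names[i]], profiles[names[j]]) for j in range(i)] + [0.0]
          for i in range(len(names))]

tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
Phylo.draw_ascii(tree)
```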

8.
Front Chem ; 10: 818974, 2022.
Article in English | MEDLINE | ID: mdl-35372286

ABSTRACT

Hyperspectral imaging has recently gained increasing attention from the academic and industrial worlds due to its capability of providing both spatial and physico-chemical information about the investigated objects. While this analytical approach is experiencing substantial success and diffusion in very disparate scenarios, far less exploited is the possibility of collecting sequences of hyperspectral images over time for monitoring dynamic scenes. This is mainly because these so-called hyperspectral videos usually result in big data sets requiring terabytes of computer memory to be both stored and processed. Clearly, standard chemometric techniques need to be adapted or expanded to be capable of dealing with such massive amounts of information. In addition, hyperspectral video data are often affected by many different sources of variation in sample chemistry (for example, light absorption effects) and sample physics (light scattering effects), as well as by systematic errors (associated, e.g., with fluctuations in the behaviour of the light source and/or of the camera). Therefore, identifying, disentangling and interpreting all these distinct sources of information is undoubtedly a challenging task. In view of all these aspects, the present work describes a multivariate hybrid modelling framework for the analysis of hyperspectral videos, which involves spatial, spectral and temporal parametrisations of both known and unknown chemical and physical phenomena underlying complex real-world systems. The framework encompasses three computational steps: 1) motions ongoing within the inspected scene are estimated by optical flow analysis and compensated through IDLE modelling; 2) chemical variations are quantified and separated from physical variations by means of Extended Multiplicative Signal Correction (EMSC); 3) the resulting light scattering and light absorption data are subjected to On-The-Fly Processing and summarised spectrally, spatially and over time. The developed methodology was tested here on a near-infrared hyperspectral video of a piece of wood undergoing drying. It led to a significant reduction of the size of the original measurements recorded and, at the same time, provided valuable information about systematic variations generated by the phenomena behind the monitored process.
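
Of the three computational steps listed above, the separation of chemical from physical variation via EMSC is the easiest to illustrate compactly. The sketch below shows a basic EMSC correction of a set of spectra against a reference spectrum with a polynomial baseline; it is a textbook-style simplification, not the authors' hyperspectral-video implementation.

```python
import numpy as np

def emsc(spectra, reference, poly_order=2):
    """Basic EMSC sketch: model each spectrum as
       x = a * reference + b0 + b1*lam + ... (polynomial baseline),
    then return (x - baseline) / a as the chemically corrected spectrum.
    `spectra` has shape (n_spectra, n_wavelengths)."""
    n_wl = spectra.shape[1]
    lam = np.linspace(-1.0, 1.0, n_wl)
    # Design matrix: reference spectrum plus polynomial baseline terms.
    M = np.column_stack([reference] + [lam**k for k in range(poly_order + 1)])
    coeffs, *_ = np.linalg.lstsq(M, spectra.T, rcond=None)   # (poly_order + 2, n_spectra)
    a = coeffs[0]                                            # multiplicative (scattering) term
    baseline = (M[:, 1:] @ coeffs[1:]).T                     # additive (physical) terms
    corrected = (spectra - baseline) / a[:, None]
    return corrected, a, coeffs[1:]
```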

9.
Contemp Clin Trials Commun ; 23: 100827, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34430754

ABSTRACT

INTRODUCTION: Longitudinal tumor measurements (TM) are commonly recorded in cancer clinical trials of solid tumors. To define patient response to treatment, the Response Evaluation Criteria in Solid Tumors (RECIST) categorizes the otherwise continuous measurements, which results in substantial information loss. We investigated two modeling approaches to incorporate all available cycle-by-cycle (continuous) TM to predict overall survival (OS) and compared the predictive accuracy of these two approaches with RECIST. MATERIAL AND METHODS: Joint modeling (JM) for longitudinal TM and OS and two-stage modeling with potential time-varying coefficients were utilized to predict OS using data from three trials with cycle-by-cycle TM. The JM approach incorporates TM data collected throughout the course of the clinical trial. The two-stage modeling approach incorporates information from early assessments (before 12 weeks) to predict subsequent OS outcome. The predictive accuracy was quantified by c-indices. RESULTS: Data from 577, 337, and 126 patients were included for the analysis (from two stage IV colorectal cancer trials (N9741, N9841) and an advanced non-small cell lung cancer trial (N0026), respectively). Both the JM and two-stage modeling reached a similar conclusion, i.e., the baseline covariates (age, gender, and race) were mostly not predictive of OS (p-value > 0.05). Quantities derived from TM were strong predictors of OS in the two colorectal cancer trials (p < 0.001 both for the association in JM and for the two-stage modeling parameters), but less so in the lung cancer trial (p = 0.053 for the association in JM and p = 0.024 and 0.160 for the two-stage modeling parameters). The c-indices from the two-stage modeling were higher than those from a model using RECIST (range: 0.611-0.633 versus 0.586-0.590). The dynamic c-indices from the JM were in the range of 0.627-0.683, indicating good predictive accuracy. CONCLUSION: Both modeling approaches provide highly interpretable and clinically meaningful results; the improved predictive performance compared with RECIST indicates the possibility of deriving better trial endpoints from these approaches.
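
The c-index used above to quantify predictive accuracy can be computed, for example, with the lifelines package. The sketch below uses invented risk scores and survival times purely to show the call; it does not reflect the trial data.

```python
import numpy as np
from lifelines.utils import concordance_index

# Hypothetical data: a risk score derived from early tumor-measurement trends,
# overall survival times in months, and event indicators (1 = death observed).
risk_score = np.array([2.1, 0.4, 1.7, 0.9, 3.0, 1.2])
os_months  = np.array([8.0, 30.0, 12.0, 24.0, 5.0, 18.0])
event      = np.array([1, 0, 1, 1, 1, 0])

# The c-index is the fraction of comparable patient pairs in which the patient with
# the higher predicted risk has the shorter observed survival; lifelines expects a
# score that increases with survival, hence the negated risk score.
c = concordance_index(os_months, -risk_score, event)
print(f"c-index = {c:.3f}")
```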

10.
Entropy (Basel) ; 23(2)2021 Feb 11.
Article in English | MEDLINE | ID: mdl-33670098

ABSTRACT

Trend prediction based on sensor data in a multi-sensor system is an important topic. As the number of sensors increases, we can measure and store more and more data. However, the increase in data has not effectively improved prediction performance. This paper focuses on this problem and presents a distributed predictor that can cope with unrelated data and sensor noise. First, causality entropy is defined to quantify the causality of each measurement. Then, the series causality coefficient (SCC) is proposed to select highly causal measurements as the input data. To prevent a traditional deep learning network from over-fitting to sensor noise, a Bayesian method is used to obtain the weight distribution characteristics of each sub-predictor network. A multi-layer perceptron (MLP) is constructed as the fusion layer to fuse the results from the different sub-predictors. Experiments on meteorological data from Beijing were carried out to verify the effectiveness of the proposed method. The results show that the proposed predictor can effectively model the multi-sensor system's big measurement data to improve prediction performance.
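
The final fusion step described above, an MLP that combines the outputs of several sub-predictors, can be sketched as follows with scikit-learn. The synthetic sub-predictor outputs and their noise levels are assumptions; the causality-entropy selection and the Bayesian sub-predictors are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical outputs of three sub-predictors (one per selected sensor group)
# for the same target quantity, fused by a small MLP layer.
rng = np.random.default_rng(0)
target = rng.normal(size=800)
sub_predictions = np.column_stack(
    [target + rng.normal(scale=s, size=800) for s in (0.3, 0.5, 0.8)])

fusion = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
fusion.fit(sub_predictions[:600], target[:600])
print("fused R^2 on held-out data:", round(fusion.score(sub_predictions[600:], target[600:]), 3))
```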

11.
Ann Clin Biochem ; 58(4): 377-383, 2021 07.
Article in English | MEDLINE | ID: mdl-33730870

ABSTRACT

BACKGROUND: The Spearman rank correlation test under classical statistics cannot be applied when the paired data are interval-valued or contain indeterminacy. METHODS: In this paper, the Spearman rank correlation test under neutrosophic statistics is introduced. The proposed test is a generalization of the existing Spearman rank correlation test. RESULTS: The proposed test is expected to be more informative, flexible, and adequate for the analysis of measurement data. Its application is illustrated using measurements of luteotropic hormone obtained from a clinical laboratory. Based on this information, the probability of accepting the null hypothesis H0N is 0.95, the chance of committing a type-I error is 0.05, and the chance of indeterminacy about the acceptance of H0N is 69%. CONCLUSIONS: From the analysis, it is noted that the proposed test is more efficient in terms of the measure of indeterminacy than the existing test. From the study, it is concluded that the proposed test is more informative, applicable and usable under an indeterminate environment than the existing test under classical statistics. Therefore, it is recommended to apply the proposed test in clinical laboratories for testing the correlation between instruments.
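
For comparison, the classical Spearman test that the proposal generalizes can be run with SciPy as below. The paired instrument readings are invented for illustration; the neutrosophic (indeterminacy-aware) version described in the abstract is not part of SciPy.

```python
from scipy.stats import spearmanr

# Hypothetical paired hormone measurements from two laboratory instruments.
instrument_a = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 3.1]
instrument_b = [4.0, 5.3, 3.6, 6.2, 4.7, 5.8, 3.3]

rho, p_value = spearmanr(instrument_a, instrument_b)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
# The neutrosophic version additionally reports a measure of indeterminacy when the
# paired data are interval-valued, which this classical test cannot express.
```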


Subject(s)
Biochemistry/standards , Laboratories/standards , Statistics as Topic , Uncertainty , Algorithms , Humans , Models, Statistical , Reproducibility of Results
12.
Article in English | MEDLINE | ID: mdl-32408689

ABSTRACT

In most occupational settings, several chemical agents are commonly found, and the associated exposure risk for workers must be assessed. For this purpose, air samples can be collected and analyzed. AltrexChimie is a web application that helps industrial hygienists in the organization of the air sampling strategy and in the subsequent phases of data management, analysis, and communication. AltrexChimie contains a database of more than 550 chemical substances and their respective French Occupational Exposure Limit Values (OELV): Custom OELVs can also be defined by the user. AltrexChimie helps with the definition of key features of the sampling strategy, in particular by promoting a methodology for the design of Similar Exposure Groups (SEGs). Once measurement data are entered, they can be analyzed to obtain exposure diagnostics. Data management features allow for the easy storage and retrieval of measurements, and comprehensive dashboards help industrial hygienists (IHs) in the communication of results. Finally, with AltrexChimie it is also possible to assess exposure to multiple chemical substances and their additive effects. While most free software applications for the assessment of chemical exposure focus on the statistical computation of specific indicators, AltrexChimie offers several tools to assist IHs in the exposure assessment workflow. AltrexChimie is available without registration from INRS at https://altrex.inrs.fr.


Subject(s)
Air Pollutants, Occupational , Hazardous Substances , Occupational Exposure , Risk Assessment , Databases, Factual , Humans , Industry , Software
13.
Sensors (Basel) ; 20(6)2020 Mar 12.
Article in English | MEDLINE | ID: mdl-32178345

ABSTRACT

The fluctuation of the oil price and the growing requirement to reduce greenhouse gas emissions have forced shipbuilders and shipping companies to improve the energy efficiency of their vessels. The accurate prediction of the required propulsion power at various operating conditions is essential to evaluate the energy-saving potential of a vessel. Currently, a new ship is expected to use the ISO15016 method for estimating the added resistance induced by external environmental factors in power prediction. However, since ISO15016 usually assumes static water conditions, it may result in low accuracy when it is applied to various operating conditions. Moreover, it is time-consuming to apply the ISO15016 method because it is computationally expensive and requires a large amount of input data. To overcome this limitation, we propose a data-driven approach to predict the propulsion power of a vessel. In this study, support vector regression (SVR) is used to learn from big data obtained from onboard measurements and the National Oceanic and Atmospheric Administration (NOAA) database. As a result, we show that our data-driven approach achieves superior performance compared to the ISO15016 method if the big data of the solid line are secured.
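
A minimal sketch of the data-driven idea, an SVR trained on operational and environmental features to predict shaft power, is shown below using scikit-learn. The features, their synthetic relationship to power, and the hyperparameters are assumptions, not the study's setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical onboard/weather features: speed through water, draft, wind speed,
# wind direction, significant wave height; target: measured shaft power (kW).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
power = 8000 + 900 * X[:, 0] + 250 * X[:, 2] + 150 * X[:, 4] + rng.normal(scale=120, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=10.0))
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```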

14.
Data Brief ; 27: 104637, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31667326

ABSTRACT

In this paper, we present data from measurements made on textured fiber bobbins under two different conditions, covering critical quality characteristics such as diameter, mass and density. In order to obtain a significant amount of information, 270 measurements were taken for each quality characteristic in each of the two conditions. Three different pieces of equipment (Automatic Package Analyzer, APA) were used on ten different parts, replicated three times each. Considering the two measurement data collections, a total of 540 bobbin measurements was obtained. Almeida et al. (2019) applied these measurement data in their study. Taking into account the multicorrelated nature of the information, we also provide the principal component scores for these measurements, together with the eigenvalues and eigenvectors of the data.
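
The principal component scores, eigenvalues and eigenvectors mentioned above can be reproduced generically with scikit-learn's PCA, as sketched below on invented bobbin measurements; the numbers are placeholders, not the published data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical bobbin measurements: columns = diameter (mm), mass (g), density (g/cm^3).
rng = np.random.default_rng(0)
base = rng.normal(size=(540, 1))                # shared latent factor to make columns correlated
bobbins = np.hstack([190 + 2.0 * base + rng.normal(scale=0.3, size=(540, 1)),
                     2050 + 40.0 * base + rng.normal(scale=5.0, size=(540, 1)),
                     0.90 + 0.01 * base + rng.normal(scale=0.002, size=(540, 1))])

pca = PCA()
scores = pca.fit_transform(bobbins)              # principal component scores
print("eigenvalues:", pca.explained_variance_)   # eigenvalues of the sample covariance matrix
print("eigenvectors (rows):", pca.components_)
```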

15.
Sensors (Basel) ; 19(15)2019 Jul 28.
Article in English | MEDLINE | ID: mdl-31357713

ABSTRACT

This paper presents a novel method for tomographic measurement and data analysis based on crowdsourcing. X-ray radiography imaging was initially applied to determine silo flow parameters. We used traced particles immersed in the bulk to investigate gravitational silo flow. The reconstructed images were not perfect, due to inhomogeneous silo filling and nonlinear attenuation of the X-rays on the way to the detector. Automatic processing of such data is not feasible. Therefore, we used crowdsourcing for human-driven annotation of the trace particles. As we aimed to extract meaningful flow parameters, we developed a modified crowdsourcing annotation method, focusing on selected important areas of the silo pictures only. We call this method "targeted crowdsourcing", and it enables more efficient crowd work, as it is focused on the most important areas of the image that allow determination of the flow parameters. The results show that it is possible to analyze volumetric material structure movement based on 2D radiography data showing the location and movement of tiny metal trace particles. A quantitative description of the flow obtained from the horizontal and vertical velocity components was derived for different parts of the model silo volume. Targeting the attention of crowd workers towards either a specific zone or a particular particle speeds up the pre-processing stage while preserving the same quality of the output, quantified by important flow parameters.

16.
J Med Syst ; 43(7): 193, 2019 May 22.
Article in English | MEDLINE | ID: mdl-31115780

ABSTRACT

The classification of recurrence versus non-recurrence of hepatocellular carcinoma (HCC) after radio-frequency ablation therapy is a critical task. A multiple-time-series clinical liver cancer dataset was collected from different sources and time intervals. A merging algorithm is used to merge all attributes collected from the different sources over multiple time periods. In order to preserve the originality of the information, statistical measures of each attribute are calculated and treated as additional attributes for accurate prediction. However, the merged dataset is unbalanced: the number of samples from the HCC recurrence class is much smaller than from the HCC non-recurrence class. In feature weighting schemes, optimal features and classifier parameters are obtained sequentially over multiple iterations, which leads to high computation times. In this paper, an efficient sampling approach using Inverse Random Under Sampling (IRUS) is proposed to overcome the class imbalance issue. IRUS under-samples the majority class, creating a number of distinct partitions with a boundary separating minority and majority class samples. Additionally, an optimization approach using the Artificial Plant Optimization (APO) algorithm is proposed to select optimal features and classifier parameters, improving the effectiveness and efficiency of classification. The optimization approach reduces the number of iterations and the computation time for feature and parameter selection for the classifiers that distinguish recurrence from non-recurrence of HCC. Patients with and without HCC recurrence are classified based on the optimal features and parameters by Support Vector Machine (SVM) and Random Forest (RF) classifiers. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method over existing methods in terms of accuracy, specificity, sensitivity and balanced accuracy.
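
The IRUS idea, inverting the class imbalance in each partition and fusing one classifier per partition, can be sketched as follows. This is a simplified interpretation on synthetic data; the APO feature and parameter optimization from the paper is not included.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Hypothetical imbalanced dataset standing in for recurrence (minority, y=1)
# versus non-recurrence (majority, y=0) after ablation therapy.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)

def irus_ensemble(X, y, n_partitions=10, inverse_ratio=0.5, random_state=0):
    """Sketch of inverse random under-sampling: each partition keeps all minority
    samples plus a majority subset *smaller* than the minority class, so the class
    imbalance is inverted; one classifier is trained per partition."""
    rng = np.random.default_rng(random_state)
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    n_major_keep = max(1, int(inverse_ratio * len(minority)))
    models = []
    for _ in range(n_partitions):
        keep = np.concatenate([minority, rng.choice(majority, n_major_keep, replace=False)])
        models.append(SVC(probability=True).fit(X[keep], y[keep]))
    return models

def predict_fused(models, X):
    """Average the per-partition probabilities for the minority (recurrence) class."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

models = irus_ensemble(X, y)
print(predict_fused(models, X[:5]).round(2))
```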


Subject(s)
Algorithms , Carcinoma, Hepatocellular , Datasets as Topic , Neoplasm Recurrence, Local , Carcinoma, Hepatocellular/physiopathology , Humans , Support Vector Machine , Time Factors
17.
Sensors (Basel) ; 18(6)2018 May 28.
Article in English | MEDLINE | ID: mdl-29843385

ABSTRACT

With the application of the Internet of Things (IoT) in coal mines, mobile measurement devices such as intelligent mine lamps are generating ever more moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amounts of moving measurement data in coal mine IoT, based on a multi-hop network and total variation. Taking the gas data within the mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built based on a multi-hop network. To exploit the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method based on Total Variation Sparsity based on Multi-Hop (TVS-MH) is proposed. According to the simulation results, by using the proposed method, the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
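
The role of total variation as a sparsity prior in compressed sensing can be illustrated with a small CVXPY example that recovers a piecewise-constant signal from a few random linear measurements. The signal, measurement matrix and dimensions are assumptions; the multi-hop TVS-MH scheme itself is not reproduced.

```python
import numpy as np
import cvxpy as cp

# Hypothetical gas-concentration profile along a roadway: piecewise constant,
# so its gradient is sparse and total variation (TV) is a natural regularizer.
n, m = 200, 60                              # signal length, number of compressed measurements
rng = np.random.default_rng(0)
signal = np.zeros(n)
signal[40:90], signal[120:150] = 0.8, 1.5

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix (m << n)
b = A @ signal                              # compressed measurements sent over the network

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.tv(x)), [A @ x == b])
problem.solve()
print("relative reconstruction error:",
      np.linalg.norm(x.value - signal) / np.linalg.norm(signal))
```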

18.
Int J Med Inform ; 107: 18-29, 2017 11.
Article in English | MEDLINE | ID: mdl-29029688

ABSTRACT

PURPOSE: This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. METHODS: The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules implement typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an automated computing process, for preliminary evaluation and real-time computation, respectively. RESULTS: The proposed model was evaluated by recomputing prior studies on the epidemiological measurement of diseases caused by either environmental heavy metal exposure or clinical complications in hospital. The validity of the computations was confirmed against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 105 sets per second. CONCLUSIONS: The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health.


Subject(s)
Cloud Computing , Disease , Expert Systems , Internet/statistics & numerical data , Models, Theoretical , Public Health Informatics , Software , Aged , Female , Health Status , Humans , Male
19.
Ophthalmologe ; 113(6): 469-77, 2016 Jun.
Article in German | MEDLINE | ID: mdl-27222127

ABSTRACT

BACKGROUND: Smart Data means intelligent data accumulation and the evaluation of large data sets. This is particularly important in ophthalmology as more and more data are being created. Increasing knowledge and personalized therapies are expected by combining clinical data from electronic health records (EHR) with measurement data. OBJECTIVE: In this study we investigated the possibilities of consolidating data from measurement devices and clinical data in a data warehouse (DW). MATERIAL AND METHODS: An EHR was adjusted to the needs of ophthalmology and the contents of referral letters were extracted. The data were imported into a DW overnight. Measuring devices were connected to the EHR by an HL7 standard interface and the use of a picture archiving and communications system (PACS). Data were exported from the review software using self-developed software. For data analysis, the software was adapted to the specific requirements of ophthalmology. RESULTS: In the EHR, 12 graphical user interfaces were created and the data from 32,234 referral letters were extracted. A total of 23 diagnostic devices could be linked to the PACS, and 85,114 optical coherence tomography (OCT) scans, 19,098 measurements from the IOLMaster as well as 5,425 Pentacam examinations were imported into the DW, which covers over 300,000 patients. Data discovery software was modified to provide filtering methods. CONCLUSION: By building a DW, a foundation for clinical and epidemiological studies was established. In the future, decision support systems and strategies for personalized therapies can be based on such a database.


Subject(s)
Datasets as Topic/statistics & numerical data , Electronic Health Records/statistics & numerical data , Eye Diseases/diagnosis , Eye Diseases/epidemiology , Information Storage and Retrieval/methods , Radiology Information Systems/statistics & numerical data , Registries/statistics & numerical data , Germany/epidemiology , Humans , Medical Record Linkage/methods , Prevalence , Risk Factors
20.
Sensors (Basel) ; 16(5)2016 Apr 27.
Article in English | MEDLINE | ID: mdl-27128918

ABSTRACT

The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented.
