Results 1 - 3 of 3
1.
Int J Med Inform; 180: 105248, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37866276

ABSTRACT

BACKGROUND: Within modern health systems, access to a large amount and variety of data related to patients' health has increased significantly over the years. Sources of these data include mobile and wearable electronic systems used in everyday life as well as specialized medical devices. In this study, we investigate the use of modern machine learning (ML) techniques for preclinical health assessment based on data collected from questionnaires filled out by patients.

METHOD: To identify the health conditions of pregnant women, we developed a questionnaire that was distributed in three maternity hospitals in Mureș County, Romania. We then proposed and developed an ML model for pattern detection in common risk assessment based on data extracted from the questionnaires.

RESULTS: Of the 1278 women who answered the questionnaire, 381 smoked before pregnancy and only 216 quit smoking after becoming pregnant. The performance of the model indicates the feasibility of the solution, with an accuracy of 98% confirmed for the considered case study.

CONCLUSION: The proposed solution offers a simple and efficient way to digitize questionnaire data and to analyze it with reduced computational effort, in terms of both memory and computing power.


Subject(s)
Machine Learning, Smoking, Female, Humans, Pregnancy, Risk Assessment, Surveys and Questionnaires, Tobacco Smoking, Pregnancy Complications
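
For illustration, a minimal Python sketch of the kind of questionnaire-based risk classifier this first study describes follows. The abstract does not disclose the actual model, features, or encoding, so the feature columns, the synthetic data, and the random-forest choice below are assumptions for illustration only, not the authors' implementation.

# Minimal sketch: classify "risk" from digitized questionnaire answers.
# All column names and data below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1278  # number of respondents reported in the study

# Hypothetical encoded answers: age, smoked before pregnancy (0/1),
# quit during pregnancy (0/1), attends regular checkups (0/1).
X = np.column_stack([
    rng.integers(18, 45, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Toy "at risk" label: smoked before pregnancy and did not quit.
y = ((X[:, 1] == 1) & (X[:, 2] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

With real questionnaire responses, the feature matrix would come from the digitized answers rather than a random generator, and the reported 98% accuracy would be measured on the held-out split.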
2.
Sensors (Basel); 23(2), 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679360

ABSTRACT

Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) provides access to a virtually infinite amount of resources on which data pipelines can be executed at scale; however, implementing data pipelines on the continuum is a complex task that must take into account computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, and more. The task becomes even more challenging when data storage is considered as part of the pipeline. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). Using cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach for integrating StaaS with data pipelines, in which computation runs on an on-premises server or on a specific cloud while storage is delegated to StaaS, and we develop a ranking method for the available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. Our evaluation demonstrates the effectiveness of the proposed approach in terms of data transfer performance, the utility of the individual parameters, and the feasibility of dynamically selecting a storage option in four primary user scenarios.


Subject(s)
Algorithms, Big Data, Software, Computers, Computer Security
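
As a rough illustration of the ranking method this article describes, the following Python sketch scores storage options by a weighted sum over the named parameters. The scoring formula, normalization, and provider names are assumptions made for illustration; the article's actual method may differ.

# Sketch: rank StaaS options by cost, proximity, network performance,
# and server-side encryption, weighted by user preferences.
from dataclasses import dataclass

@dataclass
class StorageOption:
    name: str
    cost: float       # normalized 0..1, lower is better
    proximity: float  # normalized 0..1, higher is closer
    net_perf: float   # normalized 0..1, higher is better
    sse: float        # 1.0 if server-side encryption is offered, else 0.0

def score(opt: StorageOption, w: dict[str, float]) -> float:
    """Weighted sum; cost enters inverted so lower cost scores higher."""
    return (w["cost"] * (1 - opt.cost)
            + w["proximity"] * opt.proximity
            + w["net_perf"] * opt.net_perf
            + w["sse"] * opt.sse)

# Hypothetical options and user weights (the fifth parameter).
options = [
    StorageOption("provider-a", cost=0.3, proximity=0.9, net_perf=0.7, sse=1.0),
    StorageOption("provider-b", cost=0.1, proximity=0.4, net_perf=0.9, sse=0.0),
]
weights = {"cost": 0.4, "proximity": 0.2, "net_perf": 0.3, "sse": 0.1}
ranked = sorted(options, key=lambda o: score(o, weights), reverse=True)
print([o.name for o in ranked])

In practice, the parameter values would be derived from measured network performance, data-centre locations, and provider price lists rather than hard-coded constants, which is what enables the dynamic selection the article evaluates.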
3.
Sensors (Basel); 21(24), 2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34960302

ABSTRACT

The emergence of the edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed ones. Data processing solutions must therefore consider data locality to reduce the performance penalties of data transfers among remote data centres. Existing big data processing solutions provide limited support for handling data locality and are inefficient at processing the small, frequent events typical of edge environments. This article proposes a novel architecture, and a proof-of-concept implementation, for software container-centric big data workflow orchestration that puts data locality at the forefront. The proposed solution takes the available data locality information into account, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo Workflows and demonstrate a significant improvement in execution speed when processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyse the individual aspects affecting its overall performance.


Subject(s)
Big Data, Computational Biology, Information Storage and Retrieval, Software, Workflow
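
The following Python sketch illustrates the locality-first placement idea behind this article: a workflow step is preferentially dispatched to an idle long-lived container running on the node that already holds its input data, falling back to a remote container (and hence a data transfer) only when no local one is free. The data structures and node names are hypothetical, not the paper's proof-of-concept API.

# Sketch: locality-aware placement of workflow steps onto
# long-lived containers. Names and structures are illustrative.
from dataclasses import dataclass

@dataclass
class Container:
    node: str
    busy: bool = False

@dataclass
class Step:
    name: str
    data_node: str  # node holding the step's input data

def place(step: Step, pool: list[Container]) -> Container:
    """Prefer an idle container colocated with the step's data;
    otherwise fall back to any idle container."""
    idle = [c for c in pool if not c.busy]
    local = [c for c in idle if c.node == step.data_node]
    chosen = (local or idle)[0]
    chosen.busy = True
    return chosen

pool = [Container("edge-1"), Container("edge-2"), Container("cloud-1")]
for step in [Step("extract", "edge-2"), Step("transform", "edge-2")]:
    c = place(step, pool)
    print(f"{step.name} -> container on {c.node}")

Keeping the containers long-lived avoids paying container start-up cost for every small event, which is one of the inefficiencies of existing workflow engines that the abstract points out.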