Results 1 - 6 of 6
1.
Soft comput ; 26(19): 10075-10083, 2022.
Article in English | MEDLINE | ID: mdl-35966350

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus, which is responsible for the ongoing global pandemic. Stringent measures have been adopted to face the pandemic, such as complete lockdowns, the closure of businesses and trade, and travel restrictions. Nevertheless, such measures have had a tremendous economic impact. Although recent vaccines seem to reduce the scale of the problem, the pandemic does not appear likely to end soon. Therefore, a forecasting model of COVID-19 spread is of paramount importance to plan interventions and thus limit the economic and social damage. In this paper, we use Genetic Programming to uncover how the spread of SARS-CoV-2 in a given country depends on past data. Specifically, we analyze real data from the Campania Region, in Italy. The resulting models prove effective in forecasting the number of new positive cases 10-15 days in advance with high accuracy. The developed models have been integrated into SVIMAC-19, an analytical forecasting system for the containment, contrast, and monitoring of COVID-19 within the Campania Region.
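To make the approach concrete, the following is a minimal sketch of GP-based symbolic regression for case forecasting; it is not the authors' SVIMAC-19 pipeline, and the gplearn library, the synthetic case series, and the 7-day lag / 14-day horizon are assumptions for illustration only.

```python
# Hedged sketch: evolve an explicit formula mapping a week of past daily
# positives to the count expected 14 days later (synthetic data throughout).
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumed GP library, not the paper's tool

rng = np.random.default_rng(0)
days = np.arange(300)
cases = 200 + 120 * np.sin(days / 25) + rng.normal(0, 10, days.size)  # toy daily positives

lags, horizon = 7, 14
X = np.array([cases[t - lags:t] for t in range(lags, cases.size - horizon)])
y = cases[lags + horizon:]

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=('add', 'sub', 'mul', 'div'), random_state=0)
gp.fit(X, y)
print(gp._program)           # explicit evolved expression
print(gp.predict(X[-1:]))    # forecast for 14 days after the last window
```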

2.
Soft comput ; 25(24): 15335-15343, 2021.
Article in English | MEDLINE | ID: mdl-34421340

ABSTRACT

Huge quantities of pollutants are released into the atmosphere of many cities every day. Owing to physicochemical conditions, these emissions can interact with each other, producing additional pollutants such as ozone. The resulting accumulation of pollutants can be dangerous for human health, and urban pollution is now recognized as one of the main environmental risk factors. This research aims to correlate, through soft computing techniques, namely Artificial Neural Networks and Genetic Programming, the tumour data recorded by the Local Health Authority of the city of Benevento, in Italy, with the pollutant data detected by the air monitoring stations. Such stations monitor many pollutants, including NO2, CO, PM10, PM2.5, O3 and benzene (C6H6). Assuming possible medium-term effects on human health, in this work the pollutant data refer to the 2012-2014 period, while the tumour data, provided by local hospitals, refer to 2016-2018. The results show a high correlation between lung tumour cases and exceedances of atmospheric particulate matter and ozone. The explicit knowledge representation of Genetic Programming also allows the relevance of each considered pollutant to human health to be measured, highlighting the major roles of PM10, NO2 and O3.
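As a rough, hedged illustration of this kind of analysis (not the study's pipeline or data), the sketch below trains a small MLP on synthetic pollutant readings and uses permutation importance as a stand-in for the per-pollutant relevance measure; all values are placeholders.

```python
# Hedged sketch: relate pollutant levels to a tumour-count signal and rank pollutants.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
pollutants = ["NO2", "CO", "PM10", "PM2.5", "O3", "C6H6"]
X = rng.uniform(0, 100, size=(300, len(pollutants)))                        # synthetic station readings
y = 0.4 * X[:, 2] + 0.3 * X[:, 0] + 0.2 * X[:, 4] + rng.normal(0, 5, 300)   # toy tumour-count signal

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1).fit(X, y)
result = permutation_importance(mlp, X, y, n_repeats=10, random_state=1)
for name, score in zip(pollutants, result.importances_mean):
    print(f"{name}: {score:.3f}")
```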

3.
Comput Methods Programs Biomed ; 200: 105820, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33168272

ABSTRACT

BACKGROUND: The complications associated with infections by pathogens increasingly resistant to traditional drugs lead to a constant increase in the mortality rate among those affected. In such cases, the fundamental purpose of the microbiology laboratory is to determine the sensitivity profile of pathogens to antimicrobial agents. This is intense and complex work, often not facilitated by the characteristics of the tests. Despite the evolution of Antimicrobial Susceptibility Testing (AST) technologies, the technological breakthrough that could guide and facilitate the search for new antimicrobial agents is still missing. METHODS: In this work, we propose the experimental use of in silico instruments, in particular a feedforward Multi-Layer Perceptron (MLP) Artificial Neural Network and Genetic Programming (GP), to verify, and also to predict, the effectiveness of natural and experimental mixtures of polyphenols against several microbial strains. RESULTS: We evaluate the results in predicting the antimicrobial sensitivity profile from the mixture data. The trained MLP shows very high correlation coefficients (0.93 and 0.97), with mean absolute errors of 110.70 and 56.60, in determining the Minimum Inhibitory Concentration and the Minimum Microbicidal Concentration, respectively, while GP not only achieves very high correlation coefficients (0.89 and 0.96) and low mean absolute errors (6.99 and 5.60) on the same tasks, but also gives an explicit representation of the acquired knowledge about the polyphenol mixtures. CONCLUSIONS: In silico tools can help predict the antimicrobial efficacy of phytobiotics, providing a useful strategy to innovate and speed up classic microbiological techniques.
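The sketch below is a minimal, assumption-laden illustration of the MLP part of this setup, not the paper's models: the mixture composition vectors and MIC-like targets are synthetic, and the evaluation simply mirrors the metrics quoted in the abstract (correlation coefficient and mean absolute error).

```python
# Hedged sketch: predict a MIC-like value from polyphenol mixture fractions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 6))                                 # fractions of six polyphenols (synthetic)
y = 500 - 300 * X[:, 0] - 150 * X[:, 3] + rng.normal(0, 20, 200)     # toy MIC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=2).fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print("correlation:", np.corrcoef(y_te, pred)[0, 1])
print("MAE:", mean_absolute_error(y_te, pred))
```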


Subjects
Anti-Bacterial Agents, Anti-Infective Agents, Anti-Infective Agents/pharmacology, Computer Simulation, Microbial Sensitivity Tests, Phytochemicals/pharmacology
4.
Sci Rep ; 10(1): 3287, 2020 02 25.
Article in English | MEDLINE | ID: mdl-32098970

ABSTRACT

Phytoplankton play key roles in the oceans by regulating global biogeochemical cycles and production in marine food webs. Global warming is thought to affect phytoplankton production both directly, by impacting their photosynthetic metabolism, and indirectly, by modifying the physical environment in which they grow. In this respect, the Bermuda Atlantic Time-series Study (BATS) in the Sargasso Sea (North Atlantic gyre) provides a unique opportunity to explore the effects of warming on phytoplankton production across the vast oligotrophic ocean regions, because it is one of the few multidecadal records of measured net primary productivity (NPP). We analysed the time series of phytoplankton primary productivity at the BATS site using machine learning (ML) techniques to show that increased water temperature over a 27-year period (1990-2016), and the consequent weakening of vertical mixing in the upper ocean, induced a negative feedback on phytoplankton productivity by reducing the availability of essential resources, nitrogen and light. The unbalanced availability of these resources under warming, coupled with ecological changes at the community level, is expected to intensify the oligotrophic state of open-ocean regions that are far from land-based nutrient sources.
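As a hedged sketch of what analysing such a time series with ML can look like in practice (synthetic data, not BATS measurements, and not necessarily the authors' method), the example below fits a random forest to toy physical drivers and ranks their importance for productivity.

```python
# Hedged sketch: rank synthetic physical drivers of net primary productivity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
drivers = ["temperature", "mixed_layer_depth", "nitrate", "light"]
X = rng.normal(size=(500, len(drivers)))                                        # standardized toy drivers
npp = 2.0 * X[:, 2] + 1.5 * X[:, 3] - 1.0 * X[:, 0] + rng.normal(0, 0.5, 500)   # toy NPP response

rf = RandomForestRegressor(n_estimators=300, random_state=3).fit(X, npp)
for name, importance in zip(drivers, rf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```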

5.
BMC Bioinformatics ; 15 Suppl 5: S2, 2014.
Article in English | MEDLINE | ID: mdl-25077818

ABSTRACT

BACKGROUND: The huge quantity of data produced in biomedical research requires sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load-balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty in data (U-BRAIN). U-BRAIN is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed iteratively by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; these conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), forming a probability distribution that guides the selection of the term literals. Its great versatility makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and the execution time required are of order O(n^3) and O(n^5), respectively, so the algorithm is unaffordable for huge data sets. RESULTS: We find mathematical and programming solutions that lead towards an implementation of the U-BRAIN algorithm on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and, depending on where the data are actually stored, access times can differ considerably. Following an evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the cost of communication between different memories (RAM, cache, mass, virtual) and to achieve efficient I/O performance, we design a mass-storage structure able to access its data with a high degree of temporal and spatial locality. We then develop a parallel implementation of the algorithm, modelled as an SPMD system combined with a message-passing programming paradigm. Here we adopt the high-level message-passing system MPI (Message Passing Interface) in its version for the Java programming language, MPJ. The parallel processing is organized into four stages: partitioning, communication, agglomeration and mapping. The decomposition of the U-BRAIN algorithm requires the design of a communication protocol among the processors involved. Efficient synchronization design is also discussed. CONCLUSIONS: In the context of a collaboration between public and private institutions, the parallel model of U-BRAIN has been implemented and tested on the INTEL XEON E7xxx and E5xxx families of the CRESCO infrastructure of the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), developed within the framework of the European Grid Infrastructure (EGI), a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The implementation is able to minimize both the memory space and the execution time. The test data used in this study are IPDATA (Irvine Primate splice-junction DATA set), a subset of HS3D (Homo Sapiens Splice Sites Dataset) and a subset of COSMIC (the Catalogue of Somatic Mutations in Cancer). The execution time and the speed-up on IPDATA reach their best values at about 90 processors; beyond that, the parallelization advantage is offset by the greater cost of non-local communications between processors. Similar behaviour is evident on HS3D, but at a greater number of processors, showing the direct relationship between data size and parallelization gain. This behaviour is confirmed on COSMIC. Overall, the results show that the parallel version is up to 30 times faster than the serial one.
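The consistency notion at the core of the DNF construction can be sketched as follows; this is an illustrative toy, not the U-BRAIN algorithm itself, and the helper names and the optimistic/pessimistic treatment of missing bits are assumptions made for the example.

```python
# Hedged sketch: check whether a conjunctive term is consistent with instances
# that may have missing bits (None): every positive instance could satisfy it,
# and every negative instance violates it regardless of its unknown bits.
def may_satisfy(instance, term):
    return all(instance[i] is None or instance[i] == v for i, v in term.items())

def must_violate(instance, term):
    return any(instance[i] is not None and instance[i] != v for i, v in term.items())

def consistent(term, positives, negatives):
    return (all(may_satisfy(p, term) for p in positives)
            and all(must_violate(n, term) for n in negatives))

# Toy instances as bit tuples; the term {0: 1, 1: 0} means "x0 AND NOT x1".
positives = [(1, 0, None), (1, 0, 1)]
negatives = [(0, 1, 0), (0, None, 1)]
print(consistent({0: 1, 1: 0}, positives, negatives))   # True
```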


Subjects
Algorithms, Computational Biology/methods, Computing Methodologies, Animals, Computational Biology/instrumentation, Databases, Nucleic Acid, Europe, Humans, Software
6.
Neural Netw ; 18(8): 1087-92, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16159708

ABSTRACT

In this paper, we consider learning problems defined on graph-structured data. We propose an incremental supervised learning algorithm for network-based estimators using diffusion kernels. Diffusion kernel nodes are added iteratively during the training process. For each new node, the kernel function center and the output connection weight are chosen according to an empirical-risk-driven rule based on an extended, chained version of the Nadaraya-Watson estimator. The diffusion parameters are then determined by a genetic-like optimization technique.
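A minimal sketch of the two ingredients named above, the diffusion kernel and the Nadaraya-Watson weighting, is given below; it is illustrative only (toy graph, fixed diffusion parameter) and does not reproduce the incremental training procedure proposed in the paper.

```python
# Hedged sketch: diffusion kernel K = exp(-beta * L) on a toy graph, then a
# Nadaraya-Watson (kernel-weighted average) estimate for an unlabelled node.
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, beta=1.0):
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return expm(-beta * laplacian)

def nadaraya_watson(K, train_idx, y_train, query_idx):
    weights = K[query_idx, train_idx]
    return float(weights @ y_train / weights.sum())

# Toy 4-node path graph; nodes 0-2 are labelled, node 3 is the query.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = diffusion_kernel(A, beta=0.5)
print(nadaraya_watson(K, [0, 1, 2], np.array([1.0, 2.0, 3.0]), 3))
```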


Subjects
Computer Graphics, Computer Simulation, Information Storage and Retrieval, Neural Networks, Computer, Regression Analysis, Algorithms, Humans