Results 1 - 6 of 6
1.
Healthc Anal (N Y) ; 2: 100115, 2022 Nov.
Article in English | MEDLINE | ID: mdl-37520620

ABSTRACT

Following the outbreak of the coronavirus epidemic in early 2020, municipalities, regional governments and policymakers worldwide had to plan their Non-Pharmaceutical Interventions (NPIs) amidst a scenario of great uncertainty. At this early stage of an epidemic, when no vaccine or medical treatment is in sight, algorithmic prediction can become a powerful tool to inform local policymaking. However, when we replicated one prominent epidemiological model to inform health authorities in a region in the south of Brazil, we found that it relied too heavily on manually predetermined covariates and was overly reactive to changes in data trends. Our four proposed models use data on both daily reported deaths and infections and account for missing data (e.g., the under-reporting of cases) more explicitly, with two of the proposed versions also attempting to model the delay in test reporting. We simulated weekly forecasting of deaths over the period from 31/05/2020 to 31/01/2021, with the first week's data used as a cold start for the algorithm, after which a lighter variant of the model was used for faster forecasting. Because our models are significantly more proactive in identifying trend changes, forecasting improved, especially for long-range predictions and after the peak of an infection wave, since the models were quicker to adapt to post-peak scenarios in reported deaths. Assuming that cases were under-reported greatly improved the model's stability, while modelling retroactively added data (due to the "hot" nature of the data used) had a negligible impact on performance.
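The weekly forecasting procedure described in this abstract (a cold start on the first week of data, then a lighter model variant re-fitted each week) can be sketched roughly as follows. This is a minimal illustration only: the toy data, the under-reporting factor, and the fit_full/fit_light split are assumptions made for the example, not the authors' actual model.

```python
import numpy as np

# Hypothetical daily death counts (the real data covered 31/05/2020-31/01/2021).
rng = np.random.default_rng(0)
daily_deaths = rng.poisson(lam=np.concatenate([np.linspace(5, 60, 120),
                                               np.linspace(60, 10, 126)]))

UNDER_REPORTING = 0.5   # assumed fraction of true counts actually reported
HORIZON = 14            # days ahead to forecast each week


def fit_full(history):
    """Cold-start fit: estimate a growth rate from the whole history."""
    log_counts = np.log(np.maximum(history, 1))
    slope = np.polyfit(np.arange(len(history)), log_counts, 1)[0]
    return history[-1], slope


def fit_light(history, prev_params):
    """Lighter weekly re-fit: only the last two weeks update the growth rate."""
    recent = history[-14:]
    log_counts = np.log(np.maximum(recent, 1))
    slope = np.polyfit(np.arange(len(recent)), log_counts, 1)[0]
    return history[-1], slope


def forecast(params, horizon=HORIZON):
    level, slope = params
    return level * np.exp(slope * np.arange(1, horizon + 1))


params = None
for week_end in range(7, len(daily_deaths), 7):
    history = daily_deaths[:week_end] / UNDER_REPORTING  # correct for under-reporting
    params = fit_full(history) if week_end == 7 else fit_light(history, params)
    prediction = forecast(params)
    print(f"day {week_end}: predicted next-2-week total ~= {prediction.sum():.0f}")
```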

2.
Comput Biol Chem ; 53PB: 251-276, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25462334

ABSTRACT

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only its sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies currently used in 3-D protein structure prediction.

3.
IEEE Trans Neural Netw ; 22(12): 2409-21, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22010150

ABSTRACT

The effective integration of knowledge representation, reasoning, and learning in a robust computational model is one of the key challenges of computer science and artificial intelligence. In particular, temporal knowledge and models have been fundamental in describing the behavior of computational systems. However, acquiring correct descriptions of a system's desired behavior is a complex task. In this paper, we present a novel neural-computation model capable of representing and learning temporal knowledge in recurrent networks. The model works in an integrated fashion: it enables the effective representation of temporal knowledge, the adaptation of temporal models given a set of desirable system properties, and effective learning from examples, which in turn can lead to temporal knowledge extraction from the corresponding trained networks. The model is sound from a theoretical standpoint and has also been tested on a case study in the area of model verification and adaptation. The results in this paper indicate that model verification and learning can be integrated within the neural-computation paradigm, contributing to the development of predictive temporal knowledge-based systems and offering interpretable results that allow system researchers and engineers to improve their models and specifications. The model has been implemented and is available as part of a neural-symbolic computational toolkit.


Subject(s)
Algorithms , Artificial Intelligence , Nonlinear Dynamics , Computer Simulation
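As a rough illustration of how a single temporal rule can be encoded in a recurrent network of the kind discussed in the abstract above, the sketch below hard-codes one hidden neuron for the rule "alpha holds at the next time step if alpha and beta hold now", feeding the output back as the next input. The weights, thresholds, and the rule itself are illustrative assumptions; they are not the translation or learning algorithm of the paper.

```python
import numpy as np

def step(state, W, b):
    """One time step: hidden layer then output layer, with hard thresholds."""
    hidden = (W["ih"] @ state + b["h"] > 0).astype(float)
    output = (W["ho"] @ hidden + b["o"] > 0).astype(float)
    return output

# Propositions: state = [alpha, beta].
# Rule encoded by one hidden neuron: next-alpha <- alpha AND beta.
W = {"ih": np.array([[1.0, 1.0]]),    # hidden neuron fires only if both inputs fire
     "ho": np.array([[1.0], [0.0]])}  # it activates alpha at the next step; beta is external
b = {"h": np.array([-1.5]), "o": np.array([-0.5, -0.5])}

state = np.array([1.0, 1.0])          # alpha and beta both hold at t = 0
beta_input = [1.0, 0.0, 0.0]          # beta is observed externally at each step
for t, beta in enumerate(beta_input, start=1):
    state = step(state, W, b)
    state[1] = beta                   # alpha is recurrent, beta is an external input
    print(f"t={t}: alpha={int(state[0])}, beta={int(state[1])}")
```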
4.
Philos Trans A Math Phys Eng Sci ; 369(1935): 307-21, 2011 Jan 28.
Article in English | MEDLINE | ID: mdl-21149373

ABSTRACT

Continuous reductions in the dimensions of semiconductor devices have led to an increasing number of noise sources, including random telegraph signals (RTS) due to the capture and emission of electrons by traps at random positions between the oxide and the semiconductor. The models traditionally used for microscopic devices are of limited validity in nano- and mesoscale systems since, in such systems, distributed quantities such as electron and trap densities, and concepts like electron mobility, become inadequate for modelling electrical behaviour. In addition, recent experimental work has shown that RTS in semiconductor devices based on carbon nanotubes lead to giant current fluctuations. Therefore, the physics of this phenomenon, and techniques to decrease the amplitude of RTS, need to be better understood. The problem can be described as a collective Poisson process under distinct but time-independent rates, τ(c) and τ(e), that control the capture and emission of electrons by traps distributed over the oxide. Models that allow calculations under time-dependent, periodic capture and emission rates are therefore of interest for designing more efficient devices. We present a complete theoretical description of a model that reproduces the reduction of current fluctuations in the time domain, and of the power spectral density in the frequency domain, in semiconductor devices, as predicted by previous experimental work. We do so through numerical integrations and a novel Markov chain Monte Carlo (MCMC) algorithm based on microscopic discrete values. The proposed model also handles the ballistic regime, which is relevant in nano- and mesoscale devices. Finally, we show that the ballistic regime leads to nonlinearity in the electrical behaviour.
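The two-state capture/emission picture in this abstract can be simulated directly: a single trap toggles between empty and occupied with rates 1/τ(c) and 1/τ(e), optionally modulated periodically in time, and the trap state shifts the device current. The sketch below is a minimal illustration of that Poisson structure; the specific rates, modulation depth and current step are invented for the example, and it is not the paper's numerical integration or MCMC algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rts(n_steps=20000, dt=1e-6, tau_c=5e-5, tau_e=2e-4,
                 mod_freq=0.0, mod_depth=0.0):
    """Simulate one trap: 0 = empty, 1 = occupied, fixed time step dt.

    Capture/emission rates may be modulated periodically in time,
    mimicking the time-dependent rates discussed in the abstract.
    """
    occupied = 0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        mod = 1.0 + mod_depth * np.sin(2 * np.pi * mod_freq * t)
        rate = (1.0 / tau_e if occupied else 1.0 / tau_c) * mod
        if rng.random() < rate * dt:           # a capture or emission event occurred
            occupied = 1 - occupied
        trace[i] = 1.0 - 0.2 * occupied        # an occupied trap lowers the current
    return trace

static = simulate_rts()
modulated = simulate_rts(mod_freq=5e3, mod_depth=0.9)
print("current variance, static rates   :", static.var())
print("current variance, modulated rates:", modulated.var())
```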

5.
J Theor Biol ; 258(2): 208-18, 2009 May 21.
Article in English | MEDLINE | ID: mdl-19490871

ABSTRACT

We explore the emergent behavior in heterogeneous populations where players negotiate via an ultimatum game: two players are offered a gift; one of them (the proposer) suggests how to divide the offer, while the other player (the responder) can either accept or reject the deal. Rejection is detrimental to both players, as it results in no earnings. In this context, our contribution is twofold: (i) we consider a population where the distribution of strategies in use is constant over time, and the properties of the random payoff received by the players (average and higher moments) are obtained by simple exact methods and corroborated by computer simulations; (ii) the evolution of a population is analyzed via Monte Carlo simulations in which agents may independently change the proposing and accepting parameters of their strategy depending on the payoffs received. Our results show that evolution leads to a stationary state in which wealth (accumulated payoff) is fairly distributed. As time evolves, an increase in the average payoff and a simultaneous decrease in its variance are observed when we use a dynamics based on a probabilistic version of the saying: "One should not comply with small earnings, but one's greed must be limited."


Subject(s)
Biological Evolution , Computer Simulation , Game Theory , Models, Psychological , Choice Behavior , Empathy , Humans , Population Dynamics , Probability , Social Behavior
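A minimal Monte Carlo version of the adaptive ultimatum-game population described in the abstract above might look like the sketch below. The pairing scheme, the adaptation step, and the payoff bookkeeping are illustrative assumptions loosely inspired by the quoted saying; they are not the update rules analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, ROUNDS, STEP = 200, 5000, 0.02

# Each agent's strategy: p = share offered as proposer, q = minimum share accepted.
p = rng.uniform(0, 1, N)
q = rng.uniform(0, 1, N)
wealth = np.zeros(N)

for _ in range(ROUNDS):
    proposers = rng.permutation(N)
    responders = rng.permutation(N)
    offers = p[proposers]
    accepted = offers >= q[responders]
    wealth[proposers] += np.where(accepted, 1 - offers, 0.0)
    wealth[responders] += np.where(accepted, offers, 0.0)
    # Illustrative adaptation: rejected proposers offer more next time,
    # and responders who rejected lower their demands ("limit one's greed").
    p[proposers[~accepted]] = np.minimum(1, p[proposers[~accepted]] + STEP)
    q[responders[~accepted]] = np.maximum(0, q[responders[~accepted]] - STEP)
    # Responders who accepted a small share grow bolder
    # ("one should not comply with small earnings").
    small = accepted & (offers < 0.5)
    q[responders[small]] = np.minimum(1, q[responders[small]] + STEP)

print(f"mean offer {p.mean():.2f}, mean acceptance threshold {q.mean():.2f}")
print(f"wealth mean {wealth.mean():.2f}, wealth std {wealth.std():.2f}")
```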
6.
Neural Comput ; 18(7): 1711-38, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16764519

ABSTRACT

The importance of efforts to bridge the gap between the connectionist and symbolic paradigms of artificial intelligence has been widely recognized. The merging of theory (background knowledge) and data learning (learning from examples) into neural-symbolic systems has indicated that such learning systems are more effective than purely symbolic or purely connectionist systems. Until recently, however, neural-symbolic systems were not able to fully represent, reason with, and learn expressive languages other than classical propositional logic and fragments of first-order logic. In this article, we show that nonclassical logics, in particular propositional temporal logic and combinations of temporal and epistemic (modal) reasoning, can be effectively computed by artificial neural networks. We present the language of a connectionist temporal logic of knowledge (CTLK). We then present a temporal algorithm that translates CTLK theories into ensembles of neural networks and prove that the translation is correct. Finally, we apply CTLK to the muddy children puzzle, which has been widely used as a test-bed for distributed knowledge representation. We provide a complete solution to the puzzle using simple neural networks, capable of reasoning about knowledge evolution in time and of knowledge acquisition through learning.


Subject(s)
Artificial Intelligence , Computer Simulation , Learning/physiology , Logic , Neural Networks, Computer , Algorithms , Humans , Knowledge , Time Factors
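The muddy children puzzle used as the test-bed in the abstract above can be stated symbolically: with k muddy children, every child answers "I don't know" for the first k-1 rounds of public announcements, and the muddy children know their own state at round k. The sketch below simulates those announcement rounds directly over possible worlds; it is a plain symbolic illustration of the puzzle, not the CTLK-to-neural-network translation presented in the paper.

```python
from itertools import product

def muddy_children(actual, rounds=None):
    """Simulate public 'I know / I don't know' announcements.

    `actual` is a tuple of booleans (True = muddy). Each child sees the
    others' foreheads; the father announces that at least one is muddy.
    """
    n = len(actual)
    # Worlds each child still considers possible (at least one child muddy).
    worlds = {w for w in product([False, True], repeat=n) if any(w)}
    for rnd in range(1, (rounds or n) + 1):
        knows = []
        for i in range(n):
            # Worlds consistent with what child i actually sees (everyone else's state).
            consistent = {w for w in worlds
                          if all(w[j] == actual[j] for j in range(n) if j != i)}
            knows.append(len({w[i] for w in consistent}) == 1)
        print(f"round {rnd}: " + ", ".join(
            f"child {i} {'knows' if k else 'does not know'}"
            for i, k in enumerate(knows)))
        if all(knows):
            break
        # Public announcement: keep only worlds in which the same 'I know' pattern
        # would have been announced.
        worlds = {w for w in worlds
                  if [len({v[i] for v in worlds
                           if all(v[j] == w[j] for j in range(n) if j != i)}) == 1
                      for i in range(n)] == knows}

muddy_children((True, True, False))   # two muddy children out of three
```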