Results 1 - 2 of 2
1.
Nature; 518(7540): 529-33, 2015 Feb 26.
Article in English | MEDLINE | ID: mdl-25719670

ABSTRACT

The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
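The update at the core of this method is the temporal-difference Q-learning step the abstract alludes to. Below is a minimal sketch of that step; the small fully connected network is a stand-in for the paper's convolutional architecture, and the sizes, learning rate, replay capacity and `obs_dim`/`n_actions` values are illustrative assumptions, not the published hyperparameters.

```python
# Sketch of the temporal-difference update at the heart of a deep Q-network.
# Network shape and hyperparameters are illustrative, not those of the paper.
import random
from collections import deque

import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99  # assumed toy problem sizes
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())  # periodically refreshed copy
optimizer = torch.optim.RMSprop(q_net.parameters(), lr=2.5e-4)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s', done)

def td_update(batch_size=32):
    """One gradient step on the squared TD error (y - Q(s, a))^2."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q_sa = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # TD target: r + gamma * max_a' Q_target(s', a'), cut off at episode end
        y = r.float() + gamma * target_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Demo with random transitions standing in for emulator frames:
for _ in range(64):
    replay.append((torch.randn(obs_dim).tolist(), random.randrange(n_actions),
                   1.0, torch.randn(obs_dim).tolist(), 0))
td_update()
```

Sampling the batch from a replay buffer and computing the target with a frozen copy of the network are the two stabilization choices that made end-to-end training of the Q-function tractable.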


Subjects
Artificial Intelligence; Reinforcement, Psychology; Video Games; Algorithms; Humans; Models, Psychological; Neural Networks, Computer; Reward
2.
Neuroinformatics; 11(3): 267-90, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23274962

ABSTRACT

This paper presents a toolbox of solutions that enable the user to construct biologically inspired spiking neural networks with tens of thousands of neurons and millions of connections that can be simulated in real time, visualized in 3D and connected to robots and other devices. NeMo is a high-performance simulator that works with a variety of neural and oscillator models and performs parallel simulations on either GPUs or multi-core processors. SpikeStream is a visualization and analysis environment that works with NeMo and can construct networks, store them in a database and visualize their activity in 3D. The iSpike library provides biologically inspired conversion between real data and spike representations to support work with robots, such as the iCub. Each of the tools described in this paper can be used independently with other software, and they also work well together.
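To make concrete the kind of point-neuron dynamics such simulators integrate, here is a minimal sketch of the Izhikevich model, one of the standard spiking models NeMo supports. This is a plain-Python illustration, not NeMo's actual API; the input current and parameter values are the textbook regular-spiking defaults.

```python
# Illustrative sketch (not NeMo's API): one Izhikevich spiking neuron,
# integrated with forward Euler at a 1 ms step.
import numpy as np

def izhikevich(T_ms=1000.0, dt=1.0, I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one regular-spiking Izhikevich neuron; return spike times (ms)."""
    v = -65.0          # membrane potential (mV)
    u = b * v          # recovery variable
    spikes = []
    for t in np.arange(0.0, T_ms, dt):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike threshold
            spikes.append(t)
            v, u = c, u + d        # reset after a spike
    return spikes

print(len(izhikevich()), "spikes in 1 s")
```

A GPU simulator like NeMo evaluates these same two update equations for tens of thousands of neurons in parallel at each time step, which is what makes real-time simulation at that scale feasible.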


Subjects
Action Potentials/physiology; Computer Graphics; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Animals; Computer Simulation; Humans; Information Storage and Retrieval; Nerve Net/physiology; Software; Time Factors