Results 1 - 9 of 9
1.
Front Comput Neurosci ; 16: 1017284, 2022.
Article in English | MEDLINE | ID: mdl-36249482

ABSTRACT

Artificial neural networks (ANNs) have been successfully trained to perform a wide range of sensory-motor behaviors. In contrast, the performance of spiking neuronal network (SNN) models trained to perform similar behaviors remains relatively suboptimal. In this work, we aimed to push the field of SNNs forward by exploring the potential of different learning mechanisms to achieve optimal performance. We trained SNNs to solve the CartPole reinforcement learning (RL) control problem using two learning mechanisms operating at different timescales: (1) spike-timing-dependent reinforcement learning (STDP-RL) and (2) evolutionary strategy (EVOL). Though the role of STDP-RL in biological systems is well established, several other mechanisms, though not fully understood, work in concert during learning in vivo. Recreating accurate models that capture the interaction of STDP-RL with these diverse learning mechanisms is extremely difficult. EVOL is an alternative method and has been successfully used in many studies to fit model neural responsiveness to electrophysiological recordings and, in some cases, for classification problems. One advantage of EVOL is that it may not need to capture all interacting components of synaptic plasticity and thus provides a better alternative to STDP-RL. Here, we compared the performance of each algorithm after training, which revealed EVOL as a powerful method for training SNNs to perform sensory-motor behaviors. Our modeling opens up new capabilities for SNNs in RL and could serve as a testbed for neurobiologists aiming to understand multi-timescale learning mechanisms and dynamics in neuronal circuits.
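The EVOL mechanism described above is an evolutionary strategy: perturb the network weights with Gaussian noise, score each perturbed copy on the task, and move the weights along the fitness-weighted average of the perturbations. A minimal sketch of that loop, with illustrative hyperparameters and a toy quadratic fitness standing in for the CartPole reward (not the paper's actual SNN setup):

```python
import random

def evolve(weights, fitness, sigma=0.1, pop_size=50, lr=0.05, generations=200, seed=0):
    """Basic evolutionary-strategy loop: sample weight perturbations, score them,
    and ascend the estimated fitness gradient (all constants are illustrative)."""
    rng = random.Random(seed)
    w = list(weights)
    for _ in range(generations):
        noises = [[rng.gauss(0.0, 1.0) for _ in w] for _ in range(pop_size)]
        scores = [fitness([wi + sigma * ei for wi, ei in zip(w, eps)])
                  for eps in noises]
        baseline = sum(scores) / pop_size  # subtract the mean score to reduce variance
        for i in range(len(w)):
            grad = sum((s - baseline) * eps[i]
                       for s, eps in zip(scores, noises)) / (pop_size * sigma)
            w[i] += lr * grad  # move weights toward higher fitness
    return w

# Toy objective: reward weight vectors close to a hypothetical target.
target = [0.5, -0.2, 1.0]
best = evolve([0.0, 0.0, 0.0],
              lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target)))
```

In the paper's setting the fitness would be the episodic reward of the spiking network on CartPole rather than this toy distance function.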

2.
PLoS One ; 17(5): e0265808, 2022.
Article in English | MEDLINE | ID: mdl-35544518

ABSTRACT

Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis on circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit-motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. 
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems.
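Dopaminergic reward signals of the kind described above are commonly modeled by gating STDP with an eligibility trace: causal pre-before-post spike pairings tag a synapse, and a later reward or punishment converts the tag into an actual weight change. A minimal sketch under that standard scheme (illustrative constants; not the paper's exact rule):

```python
import math

def run_rstdp(pairings, reward, a_plus=0.05, tau_e=50.0, steps=100):
    """Reward-modulated STDP sketch: causal pre->post pairings at the time steps
    in `pairings` build a decaying eligibility trace; a reward signal delivered
    at the end of the trial converts the trace into a weight change."""
    w, e = 0.5, 0.0  # initial weight and eligibility trace
    for t in range(steps):
        if t in pairings:            # pre spike immediately followed by post spike
            e += a_plus              # tag the synapse as "recently causal"
        e *= math.exp(-1.0 / tau_e)  # eligibility decays over tens of time steps
        if t == steps - 1:
            w += reward * e          # dopamine-like signal gates the actual change
    return w

# Identical spike pairings, opposite outcomes depending on the reward sign:
w_rewarded = run_rstdp(pairings={10, 20, 30}, reward=1.0)
w_punished = run_rstdp(pairings={10, 20, 30}, reward=-1.0)
```

The key property this illustrates is credit assignment across time: the synapse changes only when its recent causal activity coincides with a global reward signal.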


Subjects
Motor Cortex , Visual Cortex , Action Potentials/physiology , Animals , Computer Simulation , Neurological Models , Neurons/physiology , Visual Cortex/physiology
3.
Proc Natl Acad Sci U S A ; 117(7): 3575-3582, 2020 02 18.
Article in English | MEDLINE | ID: mdl-32024761

ABSTRACT

Excitability, a threshold-governed transient in transmembrane voltage, is a fundamental physiological process that controls the function of cardiac, endocrine, muscle, and neuronal tissues. The explicit formulation developed by Hodgkin and Huxley in the 1950s provides a mathematical framework for understanding excitability as a consequence of the properties of voltage-gated sodium and potassium channels. The Hodgkin-Huxley model is more sensitive to parametric variations in protein densities and kinetics than biological systems, whose excitability is apparently more robust. It is generally assumed that the model's sensitivity reflects functional relations between its parameters, or other components present in biological systems, that are missing from the model. Here we experimentally assembled excitable membranes using the dynamic clamp and voltage-gated potassium channels (Kv1.3) expressed in Xenopus oocytes. We take advantage of a theoretically derived phase diagram, in which the phenomenon of excitability is reduced to two dimensions defined as combinations of the Hodgkin-Huxley model parameters, to examine functional relations in the parameter space. Moreover, we demonstrate activity dependence and hysteretic dynamics over the phase diagram due to the impact of complex slow-inactivation kinetics. The results suggest that maintenance of excitability amid parametric variation is a low-dimensional, physiologically tenable control process. In the context of model construction, the results point to a potentially significant gap between high-dimensional models that capture the full measure of complexity displayed by ion channel function and the lower dimensionality that captures physiological function.
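The Hodgkin-Huxley formulation referred to above reduces excitability to four coupled ODEs: the membrane voltage plus three gating variables (m, h, n) for the sodium and potassium conductances. A forward-Euler sketch with the textbook squid-axon parameters (not the Kv1.3/dynamic-clamp preparation of this study) shows the threshold behavior, with a constant suprathreshold current producing repetitive spiking:

```python
import math

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the classic Hodgkin-Huxley point neuron
    (standard squid-axon constants). Returns the number of spikes, counted as
    upward crossings of 0 mV. i_ext in uA/cm^2, times in ms."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3       # maximal conductances (mS/cm^2)
    e_na, e_k, e_l = 50.0, -77.0, -54.387   # reversal potentials (mV)
    c_m = 1.0                               # membrane capacitance (uF/cm^2)
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))  # start all gates at their steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    spikes = 0
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m ** 3 * h * (v - e_na)
                 + g_k * n ** 4 * (v - e_k)
                 + g_l * (v - e_l))
        v_new = v + dt * (i_ext - i_ion) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        if v < 0.0 <= v_new:
            spikes += 1
        v = v_new
    return spikes
```

The paper's point about parametric sensitivity can be probed directly in this sketch: small changes to `g_na` or `g_k` can move the model across the boundary between quiescence and repetitive firing.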


Subjects
Biological Models , Xenopus/metabolism , Animals , Kinetics , Membrane Potentials , Oocytes/chemistry , Oocytes/metabolism , Voltage-Gated Potassium Channels/chemistry , Voltage-Gated Potassium Channels/metabolism , Voltage-Gated Sodium Channels/chemistry , Voltage-Gated Sodium Channels/metabolism
4.
Neural Netw ; 120: 108-115, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31500931

ABSTRACT

Deep Reinforcement Learning (RL) demonstrates excellent performance on tasks that can be solved by a trained policy, and it plays a dominant role among cutting-edge machine learning approaches based on multi-layer neural networks (NNs). At the same time, deep RL suffers from high sensitivity to noisy, incomplete, and misleading input data. Following biological intuition, we employ spiking neural networks (SNNs) to address some deficiencies of deep RL solutions. Previous studies in the image classification domain demonstrated that standard NNs (with ReLU nonlinearity) trained using supervised learning can be converted to SNNs with negligible deterioration in performance. In this paper, we extend those conversion results to the domain of Q-learning NNs trained using RL. We provide a proof of principle of the conversion of a standard NN to an SNN. In addition, we show that the SNN has improved robustness to occlusion in the input image. Finally, we present results on converting a full-scale Deep Q-Network to an SNN, paving the way for future research on robust deep RL applications.
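The conversion result above rests on a standard observation: an integrate-and-fire neuron with reset-by-subtraction fires at a rate proportional to its positive input, so spike rates can stand in for ReLU activations when the trained weights are copied over. A toy sketch of that correspondence (hypothetical weights, not the paper's Q-network):

```python
def if_rate(x, steps=1000, threshold=1.0):
    """Integrate-and-fire neuron with reset-by-subtraction: for a constant input
    current x per step, the firing rate approximates relu(x), saturating at 1."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x
        if v >= threshold:
            v -= threshold  # subtract (rather than zero) to preserve residual charge
            spikes += 1
    return spikes / steps

def spiking_layer(inputs, weights, biases, steps=1000):
    # Each unit integrates its weighted sum; the spike rate stands in for ReLU.
    return [if_rate(sum(w * x for w, x in zip(ws, inputs)) + b, steps)
            for ws, b in zip(weights, biases)]

# Hypothetical two-unit layer: rates track max(0, w . x + b).
# Unit 1 receives net input 0.4 (fires at rate ~0.4); unit 2 receives ~0 (silent).
rates = spiking_layer([0.2, 0.6],
                      weights=[[0.5, 0.5], [-1.0, 0.5]],
                      biases=[0.0, -0.1])
```

In a full conversion the trained NN's weights are additionally normalized layer by layer so no unit's input exceeds the firing-rate saturation point; that step is omitted here for brevity.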


Subjects
Machine Learning/standards , Game Theory
5.
Neural Netw ; 119: 332-340, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31499357

ABSTRACT

In recent years, spiking neural networks (SNNs) have demonstrated great success in completing various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn features from different locations of the input space. These locally connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biological neurons. We explore a biologically inspired n-gram classification approach that allows parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets, matching state-of-the-art performance on one and providing the first reported results on the other. LC-SNNs have the advantage of fast convergence to a dataset representation, and they require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests demonstrate that LC-SNNs exhibit graceful degradation of performance despite the random deletion of large numbers of synapses and neurons. Our results were obtained using the BindsNET library, which enables efficient machine learning implementations of spiking neural networks.
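The STDP rule referenced above is typically the pair-based exponential window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with magnitude decaying in the spike-time difference. A sketch with illustrative constants (not the LC-SNN's exact parameters):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window. dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses;
    the effect decays exponentially with |dt| on a timescale tau."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

A slight asymmetry with `a_minus > a_plus`, as used here, is a common choice that biases the rule toward depression and keeps weights from growing without bound.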


Subjects
Machine Learning , Neural Networks (Computer) , Neuronal Plasticity/physiology , Neurons/physiology , Neurological Models
6.
Front Neuroinform ; 12: 89, 2018.
Article in English | MEDLINE | ID: mdl-30631269

ABSTRACT

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.

7.
Front Neurosci ; 11: 579, 2017.
Article in English | MEDLINE | ID: mdl-29093659

ABSTRACT

There is a growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real time. Such systems are needed, for example, for closed-loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuroprosthetic devices. Furthermore, such systems must be inexpensive, reliable, user-friendly, easy to set up, open and expandable, and possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy-to-implement facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open-source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open-source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface; possesses facilities for recording, presenting, and logging electrophysiological data from up to 64 analog channels; and includes facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture, and its facilities. We then demonstrate, using networks of cortical neurons grown on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level.

8.
Neural Plast ; 2015: 804385, 2015.
Article in English | MEDLINE | ID: mdl-26257961

ABSTRACT

Neocortical structures typically only support slow acquisition of declarative memory; however, learning through fast mapping may facilitate rapid learning-induced cortical plasticity and hippocampal-independent integration of novel associations into existing semantic networks. During fast mapping the meaning of new words and concepts is inferred, and durable novel associations are incidentally formed, a process thought to support early childhood's exuberant learning. The anterior temporal lobe, a cortical semantic memory hub, may critically support such learning. We investigated encoding of semantic associations through fast mapping using fMRI and multivoxel pattern analysis. Subsequent memory performance following fast mapping was more efficiently predicted using anterior temporal lobe than hippocampal voxels, while standard explicit encoding was best predicted by hippocampal activity. Searchlight algorithms revealed additional activity patterns that predicted successful fast mapping semantic learning located in lateral occipitotemporal and parietotemporal neocortex and ventrolateral prefrontal cortex. By contrast, successful explicit encoding could be classified by activity in medial and dorsolateral prefrontal and parahippocampal cortices. We propose that fast mapping promotes incidental rapid integration of new associations into existing neocortical semantic networks by activating related, nonoverlapping conceptual knowledge. In healthy adults, this is better captured by unique anterior and lateral temporal lobe activity patterns, while hippocampal involvement is less predictive of this kind of learning.


Subjects
Association Learning/physiology , Magnetic Resonance Imaging/methods , Memory/physiology , Neocortex/physiology , Neuronal Plasticity/physiology , Adult , Brain Mapping , Female , Humans , Computer-Assisted Image Processing , Learning , Male , Prefrontal Cortex/physiology , Psychomotor Performance/physiology , Semantics , Young Adult
9.
Neural Netw ; 70: 61-73, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26218350

ABSTRACT

Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. 
An empirical analysis on synthetic datasets shows that the learning process can be robust both to noise and to the varying shape of the underlying HRF. A similar investigation on real fMRI datasets provides evidence that BOLD predictability allows for discrimination between relevant and irrelevant voxels for a given set of stimuli.
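The reservoir-plus-readout split described above can be illustrated with an echo-state-style analogue: continuous tanh units stand in for the spiking liquid, and a delta-rule linear readout stands in for the feed-forward decoding network. All constants and the delayed-recall task below are illustrative, not the paper's fMRI setup:

```python
import math
import random

def make_reservoir(n=30, spectral_scale=0.9, seed=1):
    """Random fixed recurrent weights, rescaled via a power-iteration estimate of
    the largest eigenvalue so the dynamics are contractive (echo-state condition)."""
    rng = random.Random(seed)
    w = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
    v, norm = [1.0] * n, 1.0
    for _ in range(50):  # power iteration
        v = [sum(w[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    w = [[wij * spectral_scale / norm for wij in row] for row in w]
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return w, w_in

def run_reservoir(w, w_in, inputs):
    """Drive the fixed reservoir with a scalar input sequence; collect states."""
    n = len(w_in)
    x, states = [0.0] * n, []
    for u in inputs:
        x = [math.tanh(sum(w[i][j] * x[j] for j in range(n)) + w_in[i] * u)
             for i in range(n)]
        states.append(list(x))
    return states

def train_readout(states, targets, lr=0.05, epochs=200):
    """Only the linear readout is trained (delta rule); the reservoir is fixed."""
    n = len(states[0])
    beta = [0.0] * (n + 1)  # n readout weights plus a bias term (beta[n])
    for _ in range(epochs):
        for s, y in zip(states, targets):
            # zip stops at the n state values; the bias is added separately.
            err = y - (beta[n] + sum(b * si for b, si in zip(beta, s)))
            for i in range(n):
                beta[i] += lr * err * s[i]
            beta[n] += lr * err
    return beta

# Illustrative temporal task: reproduce the input signal delayed by two steps,
# which requires the reservoir to hold a short memory of its input history.
u = [math.sin(0.3 * t) for t in range(300)]
target = [0.0, 0.0] + u[:-2]
w, w_in = make_reservoir()
states = run_reservoir(w, w_in, u)
beta = train_readout(states[50:], target[50:])  # discard the initial transient
n_units = len(w_in)
preds = [beta[n_units] + sum(b * si for b, si in zip(beta, s)) for s in states[50:]]
mse = sum((p - y) ** 2 for p, y in zip(preds, target[50:])) / len(preds)
```

The division of labor mirrors the paper's architecture: a universal temporal "reservoir" encodes the stimulus history, while only the simple trained readout is specialized to the target response (the HRF, in the paper's case).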


Subjects
Hemodynamics/physiology , Machine Learning , Neural Networks (Computer) , Algorithms , Computer Simulation , Statistical Data Interpretation , Datasets as Topic , Humans , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Markov Chains , Statistical Models , Oxygen/blood , Reproducibility of Results