Results 1 - 7 of 7
1.
PLoS Comput Biol; 18(6): e1010214, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35727828

ABSTRACT

The brain performs various cognitive functions by learning the salient spatiotemporal features of the environment. This learning requires unsupervised segmentation of hierarchically organized spike sequences, but the underlying neural mechanism remains poorly understood. Here, we show that a recurrent gated network of neurons with dendrites can efficiently solve difficult segmentation tasks. In this model, multiplicative recurrent connections learn a context-dependent gating of dendro-somatic information transfer that minimizes the error in the dendritic prediction of somatic responses. Consequently, these connections filter out input features that are represented by the dendrites but are redundant in the given context. The model was tested on both synthetic and real neural data. In particular, it successfully segmented multiple cell assemblies repeating in large-scale calcium imaging data containing thousands of cortical neurons. Our results suggest that recurrent gating of dendro-somatic signal transfer is crucial for cortical learning of context-dependent segmentation tasks.
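The abstract does not spell out the model equations, so the following is only a minimal, illustrative sketch of the stated idea: a unit whose dendro-somatic transfer is scaled by a multiplicative gate driven by recurrent (context) activity, with the gate adjusted to reduce the mismatch between dendritic and somatic responses. All names, dimensions, nonlinearities, and the gradient-style update are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_in = 50, 100
W_dend = rng.normal(0.0, 0.1, (n_units, n_in))   # feedforward weights onto the dendrites (assumed)
G = rng.normal(0.0, 0.1, (n_units, n_units))     # multiplicative recurrent gating weights (assumed)
eta = 0.01                                       # learning rate (assumed)

x = rng.normal(size=n_in)                        # one input pattern
soma_prev = np.zeros(n_units)                    # previous somatic activity provides the context

for _ in range(200):
    dend = W_dend @ x                                  # dendritic drive
    gate = 1.0 / (1.0 + np.exp(-(G @ soma_prev)))      # context-dependent gate in (0, 1)
    soma = np.tanh(gate * dend)                        # gated dendro-somatic transfer
    err = soma - dend                                  # mismatch between soma and dendritic prediction
    delta = err * (1.0 - soma**2) * dend * gate * (1.0 - gate)
    G -= eta * np.outer(delta, soma_prev)              # gradient-style gate update (illustrative, not the paper's rule)
    soma_prev = soma
```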


Subjects
Models, Neurological; Neurons; Brain; Learning/physiology; Neurons/physiology
2.
Front Neurosci; 16: 855753, 2022.
Article in English | MEDLINE | ID: mdl-35573290

ABSTRACT

In natural auditory environments, acoustic signals arise from the temporal superposition of different sound sources. The problem of inferring the individual sources from ambiguous mixtures of sounds is known as blind source separation. Experiments on humans have demonstrated that the auditory system can identify sound sources as repeating patterns embedded in the acoustic input. Source repetition produces temporal regularities that can be detected and exploited for segregation. Specifically, listeners can identify sounds that occur in more than one mixture, but not sounds heard in only a single mixture. However, whether such behavior can be computationally modeled has not yet been explored. Here, we propose a biologically inspired computational model that performs blind source separation on sequences of mixtures of acoustic stimuli. Our method relies on a somatodendritic neuron model trained with a Hebbian-like learning rule originally conceived to detect spatio-temporal patterns recurring in synaptic inputs. We show that the segregation capabilities of our model are reminiscent of human performance in a variety of experimental settings involving synthesized sounds with naturalistic properties. Furthermore, we extend the study to task settings not yet explored with human subjects, namely natural sounds and images. Overall, our work suggests that somatodendritic neuron models offer a promising neuro-inspired learning strategy for accounting for the characteristics of the brain's segregation capabilities and for making predictions about untested experimental settings.
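As a hedged illustration of the core claim, that repetition across mixtures creates a regularity a Hebbian-like rule can exploit, the toy below mixes a fixed "repeating" source with a fresh random distractor on every trial and trains a single linear unit with Oja's rule. It is not the somatodendritic model of the paper; the frame dimensionality, learning rate, and Oja-style normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

dim = 200                                   # dimensionality of one sound "frame" (assumed)
target = rng.normal(size=dim)               # source that repeats across mixtures
target /= np.linalg.norm(target)

w = rng.normal(0.0, 0.01, dim)              # synaptic weights of a single Hebbian-like unit
eta = 0.05                                  # learning rate (assumed)

for _ in range(500):
    distractor = rng.normal(size=dim)       # a source heard only in this mixture
    distractor /= np.linalg.norm(distractor)
    x = target + distractor                 # superimposed mixture
    y = w @ x                               # unit response (linear here for simplicity)
    w += eta * y * (x - y * w)              # Oja-style Hebbian update with implicit weight normalization

# The weights align with the repeating source, not with any single-mixture distractor.
print("alignment with repeating source:", abs(w @ target) / np.linalg.norm(w))
```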

3.
Sci Rep; 12(1): 4951, 2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35322813

ABSTRACT

Isolated spikes and bursts of spikes are thought to provide the two major modes of information coding by neurons. Bursts are known to be crucial for fundamental processes between neuron pairs, such as neuronal communication and synaptic plasticity. Neuronal bursting also has implications for neurodegenerative diseases and mental disorders. Despite these findings on the roles of bursts, whether and how bursts provide an advantage over isolated spikes in network-level computation remains elusive. Here, we demonstrate in a computational model that intrinsic bursts, but not isolated spikes, can greatly facilitate learning of Lévy flight random walk trajectories by synchronizing burst onsets across a neural population. Lévy flight is a hallmark of optimal search strategies and appears in cognitive behaviors such as saccadic eye movements and memory retrieval. Our results suggest that bursting is crucial for sequence learning by recurrent neural networks when the sequences comprise discrete jumps drawn from long-tailed distributions.
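For readers unfamiliar with the target signals, the sketch below generates a 2-D Lévy-flight-like trajectory, i.e., a random walk whose jump lengths follow a long-tailed (here Pareto) distribution. It only illustrates the kind of trajectory the abstract refers to; the burst-based recurrent network that learns such trajectories is not sketched, and the exponent and step statistics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_flight(n_steps, alpha=1.5):
    """2-D Lévy-flight-like walk: heavy-tailed (Pareto) jump lengths, uniform directions."""
    lengths = rng.pareto(alpha, n_steps) + 1.0        # long-tailed jump sizes (assumed exponent)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)   # isotropic jump directions
    steps = np.stack([lengths * np.cos(angles),
                      lengths * np.sin(angles)], axis=1)
    return np.cumsum(steps, axis=0)                   # trajectory of positions

trajectory = levy_flight(1000)
jumps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
print("max/median jump:", jumps.max() / np.median(jumps))  # heavy tail: occasional very large jumps
```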


Subjects
Neural Networks, Computer; Neurons; Action Potentials/physiology; Humans; Models, Neurological; Movement; Neuronal Plasticity/physiology; Neurons/physiology
4.
Curr Opin Neurobiol; 70: 145-153, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34808521

ABSTRACT

Spatial and temporal information from the environment is often hierarchically organized, and so is the knowledge we form about the environment. Identifying the meaningful segments embedded in hierarchically structured information is crucial for cognitive functions, including visual, auditory, motor, memory, and language processing. Segmentation enables grasping the links between otherwise isolated entities, offering a basis for reasoning and thinking. Importantly, the brain learns such segmentation without external instruction. Here, we review the underlying computational mechanisms implemented at the single-cell and network levels. The network-level mechanism bears an interesting similarity to machine-learning methods for graph segmentation. The brain may implement such analyses of the environment's hierarchical structure at multiple levels of its own processing hierarchy.
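To make the analogy with graph segmentation concrete, the following toy applies a standard machine-learning technique, spectral bisection via the Fiedler vector of the graph Laplacian, to a small graph with two densely connected groups joined by a weak link. This is a generic textbook method chosen for illustration; the review itself does not prescribe this particular algorithm.

```python
import numpy as np

# Toy graph: two densely connected groups joined by one weak link (assumed structure).
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
A[2, 3] = A[3, 2] = 0.1                       # weak bridge between the two segments

D = np.diag(A.sum(axis=1))
L = D - A                                     # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                          # eigenvector of the second-smallest eigenvalue
segments = (fiedler > 0).astype(int)          # sign pattern splits the graph into two segments
print(segments)                               # e.g. [0 0 0 1 1 1] (or the flipped labeling)
```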


Subjects
Brain; Learning; Cognition; Language; Machine Learning
5.
Nat Commun; 11(1): 1554, 2020 Mar 25.
Article in English | MEDLINE | ID: mdl-32214100

ABSTRACT

The brain identifies potentially salient features within continuous information streams in order to process hierarchical temporal events. This requires compression of the information streams, and effective computational principles for this compression are yet to be explored. Backpropagating action potentials can induce synaptic plasticity in the dendrites of cortical pyramidal neurons. By analogy with this effect, we model a self-supervising process that increases the similarity between dendritic and somatic activities, where the somatic activity is normalized by a running average. We further show that a family of networks composed of such two-compartment neurons performs a surprisingly wide variety of complex unsupervised learning tasks, including chunking of temporal sequences and source separation of mixed correlated signals. Common methods applicable to both of these temporal feature analyses were previously unknown. Our results suggest that neural networks with dendrites possess a powerful ability to analyze temporal features. This simple neuron model may also prove useful in neural engineering applications.
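A minimal sketch of the stated learning principle, a dendritic compartment trained to match a somatic signal normalized by a running average, might look like the following. The somatic pathway is held fixed here purely for illustration, and the specific nonlinearity, normalization constant, and learning rate are assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_steps = 50, 2000
W_soma = rng.normal(0.0, 0.1, n_in)      # somatic (feedforward) weights, held fixed here (assumed)
W_dend = rng.normal(0.0, 0.1, n_in)      # dendritic weights, trained to match the soma
running = 1.0                            # running average used to normalize somatic activity
eta, rho = 0.01, 0.99                    # learning rate and averaging factor (assumed)

for _ in range(n_steps):
    x = rng.normal(size=n_in)
    soma = np.tanh(W_soma @ x)                          # somatic activity
    dend = W_dend @ x                                   # dendritic activity
    running = rho * running + (1.0 - rho) * abs(soma)   # running-average normalizer
    target = soma / (running + 1e-6)                    # normalized somatic activity
    W_dend += eta * (target - dend) * x                 # pull the dendrite toward the normalized soma
```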


Subjects
Dendrites/physiology; Learning; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Action Potentials; Brain/physiology; Computational Biology; Membrane Potentials; Nerve Net
6.
PLoS Comput Biol; 14(10): e1006400, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30296262

ABSTRACT

Chunking is the process by which frequently repeated segments of temporal inputs are concatenated into single units that are easy to process. Such a process is fundamental to time-series analysis in biological and artificial information-processing systems. The brain efficiently acquires chunks from various information streams in an unsupervised manner; however, the underlying mechanisms of this process remain elusive. A widely adopted statistical approach to chunking predicts frequently repeated contiguous elements in an input sequence from unequal transition probabilities over the sequence elements. However, recent experimental findings suggest that the brain is unlikely to rely on this method alone, as human subjects can chunk sequences with uniform transition probabilities. In this study, we propose a novel conceptual framework to overcome this limitation: neural networks learn to predict dynamical response patterns to a sequence input rather than learning the transition patterns directly. Using a mutually supervising pair of reservoir computing modules, we demonstrate how this mechanism chunks sequences of letters or visual images with variable regularity and complexity. In addition, we demonstrate that background noise plays a crucial role in correctly learning chunks in this model. In particular, the model can successfully chunk sequences that conventional statistical approaches fail to chunk because of their uniform transition probabilities. Moreover, the neural responses of the model exhibit an interesting similarity to those of the basal ganglia observed after motor habit formation.
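The sketch below illustrates only the wiring described in the abstract: two echo-state-style reservoirs driven by the same symbol sequence, whose linear readouts are trained to predict each other's output, with background noise injected into the reservoir dynamics. It is a structural toy under assumed parameters (reservoir size, spectral radius, learning rate) and does not reproduce the chunking results of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N, n_sym = 100, 4                              # reservoir size and alphabet size (assumed)

def make_reservoir():
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
    W_in = rng.normal(0.0, 1.0, (N, n_sym))
    return W, W_in

(Wa, Wa_in), (Wb, Wb_in) = make_reservoir(), make_reservoir()
Ra = rng.normal(0.0, 0.1, (1, N))              # readout of module A, predicts module B's output
Rb = rng.normal(0.0, 0.1, (1, N))              # readout of module B, predicts module A's output
xa, xb = np.zeros(N), np.zeros(N)
eta, noise = 0.01, 0.05                        # learning rate and background-noise level (assumed)

sequence = np.tile([0, 1, 2, 3], 250)          # toy input: the chunk "0123" repeated
for s in sequence:
    u = np.eye(n_sym)[s]                       # one-hot symbol input
    xa = np.tanh(Wa @ xa + Wa_in @ u + noise * rng.normal(size=N))
    xb = np.tanh(Wb @ xb + Wb_in @ u + noise * rng.normal(size=N))
    ya, yb = Ra @ xa, Rb @ xb
    Ra += eta * np.outer(yb - ya, xa)          # A's readout chases B's output (mutual supervision)
    Rb += eta * np.outer(ya - yb, xb)          # and vice versa
```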


Subjects
Models, Neurological; Unsupervised Machine Learning; Brain/physiology; Computational Biology; Humans; Learning/physiology; Neural Networks, Computer
7.
Front Neurosci; 12: 429, 2018.
Article in English | MEDLINE | ID: mdl-29997474

ABSTRACT

Motor cortical microcircuits receive inputs from dispersed cortical and subcortical regions in behaving animals. However, how these inputs contribute to the learning and execution of voluntary sequential motor behaviors remains elusive. Here, we analyzed the independent components extracted from local field potential (LFP) activity recorded at multiple depths of rat motor cortex during a reward-motivated movement to study their roles in motor learning. Because slow-gamma (30-50 Hz), fast-gamma (60-120 Hz), and theta (4-10 Hz) oscillations temporally coordinate task-relevant motor cortical activities, we first explored the behavioral-state- and layer-dependent coordination of motor behavior in these frequency ranges. Consistent with previous findings, oscillations in the slow- and fast-gamma bands dominated during distinct movement states, namely the preparation and execution states, respectively. However, we also identified a novel independent component that appeared predominantly in deep cortical layers and exhibited enhanced slow-gamma activity during the execution state. We then used the four major independent components to train a recurrent network model on the same lever movements the rats performed. We show that the independent components contribute differently to the formation of various task-related activities but also play overlapping roles in motor learning.
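As a hedged illustration of the first analysis step, independent component extraction from multichannel LFP-like signals, the toy below mixes synthetic theta-, slow-gamma-, and fast-gamma-band sources across a set of simulated recording depths and unmixes them with FastICA. The channel count, band frequencies, and preprocessing are assumptions and do not reflect the actual recordings or analysis pipeline of the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# Synthetic stand-in for multi-depth LFP: oscillatory sources mixed across channels.
t = np.arange(0.0, 10.0, 0.001)                     # 10 s at 1 kHz (assumed)
sources = np.stack([
    np.sin(2 * np.pi * 7 * t),                                        # theta-band-like source (~7 Hz)
    np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 0.5 * t) > 0),   # slow-gamma bursts (~40 Hz)
    np.sin(2 * np.pi * 80 * t) * (np.sin(2 * np.pi * 0.5 * t) < 0),   # fast-gamma bursts (~80 Hz)
], axis=1)
mixing = rng.normal(size=(3, 16))                   # 16 simulated recording depths (assumed)
lfp = sources @ mixing + 0.1 * rng.normal(size=(t.size, 16))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(lfp)                 # (n_samples, n_components)
print(components.shape)                             # recovered component time courses
```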
