Results 1 - 20 of 57
1.
Proc Natl Acad Sci U S A ; 121(18): e2312992121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38648479

ABSTRACT

Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from the chaotic dynamics of recurrently connected networks. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience of the stimulus-evoked samples to inference in the absence of some or all sensory information, which suggests a computational role for spontaneous activity as a representation of the priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may underlie the brain's function as a Bayesian generative model.


Subjects
Models, Neurological; Neurons; Neurons/physiology; Bayes Theorem; Nerve Net/physiology; Nonlinear Dynamics; Humans; Learning/physiology; Animals; Brain/physiology
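
As a loose illustration of the kind of spontaneous chaotic dynamics invoked here, the sketch below simulates a random rate network in the classic g > 1 chaotic regime; the network size, gain, and integration step are illustrative assumptions, and none of the paper's learning is implemented.

```python
# A minimal sketch (not the paper's trained network): a random rate RNN in
# the chaotic regime (gain g > 1), the kind of spontaneous dynamics the
# abstract proposes as a substrate for sampling. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 500, 1.5, 0.1, 2000                 # neurons, gain, step, steps
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random recurrent weights

x = rng.normal(size=N)                  # membrane state
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))     # leaky rate dynamics
    rates[t] = np.tanh(x)

# In the paper's scheme, trial-and-error learning shapes J so that these
# fluctuations become samples from a task-relevant posterior; here they are
# just the untrained chaotic baseline.
print(rates.std(axis=0).mean())         # nonzero variability = ongoing chaos
```
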
2.
Neural Netw ; 173: 106209, 2024 May.
Article in English | MEDLINE | ID: mdl-38437772
3.
Nat Commun ; 15(1): 647, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38245502

ABSTRACT

The hippocampal subfield CA3 is thought to function as an auto-associative network that stores experiences as memories. Information from these experiences arrives directly from the entorhinal cortex as well as indirectly through the dentate gyrus, which performs sparsification and decorrelation. The computational purpose for these dual input pathways has not been firmly established. We model CA3 as a Hopfield-like network that stores both dense, correlated encodings and sparse, decorrelated encodings. As more memories are stored, the former merge along shared features while the latter remain distinct. We verify our model's prediction in rat CA3 place cells, which exhibit more distinct tuning during theta phases with sparser activity. Finally, we find that neural networks trained in multitask learning benefit from a loss term that promotes both correlated and decorrelated representations. Thus, the complementary encodings we have found in CA3 can provide broad computational advantages for solving complex tasks.


Subjects
Hippocampus; Place Cells; Rats; Animals; Learning; Entorhinal Cortex; Neural Networks, Computer; Dentate Gyrus
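
As a rough illustration of the sparsification and decorrelation attributed to the dentate gyrus here, the sketch below passes correlated inputs through a random projection followed by top-k winner-take-all; the layer sizes, noise level, and k are illustrative assumptions, not the paper's model.

```python
# A minimal sketch of dentate-gyrus-style preprocessing: correlated dense
# inputs are projected through random weights and sparsified (top-k winners),
# which decorrelates them. All sizes and constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_ec, n_dg, k = 200, 1000, 20          # entorhinal units, DG units, winners

base = rng.normal(size=n_ec)           # shared structure across experiences
inputs = np.array([base + 0.3 * rng.normal(size=n_ec) for _ in range(5)])

W = rng.normal(size=(n_dg, n_ec)) / np.sqrt(n_ec)

def sparsify(x):
    h = W @ x
    out = np.zeros(n_dg)
    out[np.argsort(h)[-k:]] = 1.0      # keep only the k most active units
    return out

codes = np.array([sparsify(x) for x in inputs])
corr_in = np.corrcoef(inputs)[0, 1:].mean()
corr_out = np.corrcoef(codes)[0, 1:].mean()
print(f"input corr {corr_in:.2f} -> DG code corr {corr_out:.2f}")
```
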
4.
Sci Rep ; 14(1): 657, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38182692

ABSTRACT

Many modeling works aim to explain behaviors that violate classical economic theories. However, these models often do not fully account for the multi-stage nature of real-life problems and people's tendency to solve complicated problems sequentially. In this work, we propose a descriptive decision-making model for multi-stage problems with perceived post-decision information. In the model, decisions are chosen based on a quantity we call the 'anticipated surprise'. The reference point is determined by the expected value of the possible outcomes, which we assume changes dynamically during the mental simulation of a sequence of events. We illustrate how our formalism can help explain prominent economic paradoxes and gambling behaviors that involve multi-stage or sequential planning. We also discuss how neuroscience findings, such as prediction error signals and introspective neuronal replay, as well as psychological theories such as affective forecasting, relate to the features of our model. This provides hints for future experiments to investigate the role of these quantities in decision-making.
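
A minimal sketch of how an 'anticipated surprise' valuation might look for a one-stage gamble, assuming (as a simplification of the model described) that the reference point is the expected value and that negative surprises are weighted more heavily; the specific value function and loss-aversion constant are our assumptions.

```python
# A simplified reading of the 'anticipated surprise' idea: each outcome is
# compared with a reference point given by the expected value, and negative
# surprises weigh more. The exact functional form is our assumption.
import numpy as np

def anticipated_surprise_value(outcomes, probs, loss_aversion=2.0):
    """Score a one-stage gamble by probability-weighted surprises."""
    ref = np.dot(probs, outcomes)          # reference = expected value
    surprise = np.asarray(outcomes) - ref  # anticipated surprise per outcome
    weighted = np.where(surprise < 0, loss_aversion * surprise, surprise)
    return ref + np.dot(probs, weighted)

# A sure $450 vs. a 50/50 gamble on $1000: asymmetric weighting of negative
# surprises makes the risky option less attractive despite its higher mean.
print(anticipated_surprise_value([450.0], [1.0]))
print(anticipated_surprise_value([1000.0, 0.0], [0.5, 0.5]))
```
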

5.
Neural Netw ; 169: 793-794, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043151
6.
Phys Rev E ; 108(5-1): 054410, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38115467

ABSTRACT

We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
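
The sketch below illustrates the storage scheme described, with each memory stored as the sum of a dense and a sparse binary encoding in one Hebbian matrix and retrieval run at a low or high activity threshold; the sizes, sparsities, and simple covariance rule are illustrative assumptions rather than the paper's mean-field model.

```python
# A minimal sketch of threshold-controlled retrieval from combined encodings.
# Comments describe only what is computed; constants are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, P, f_dense, f_sparse = 500, 5, 0.5, 0.05

dense = (rng.random((P, N)) < f_dense).astype(float)
sparse = (rng.random((P, N)) < f_sparse).astype(float)
combo = dense + sparse                          # stored linear combination

W = (combo - combo.mean(0)).T @ (combo - combo.mean(0)) / N
np.fill_diagonal(W, 0.0)

def retrieve(cue, theta, steps=30):
    s = cue.copy()
    for _ in range(steps):
        s = (W @ s > theta).astype(float)       # threshold dynamics
    return s

cue = np.where(rng.random(N) < 0.05, 1 - sparse[0], sparse[0])  # noisy cue
for theta in (0.1, 0.6):                        # low vs high threshold
    out = retrieve(cue, theta)
    print(theta, out.mean(),                    # recovered activity level
          (out * dense[0]).sum() / dense[0].sum(),     # overlap with dense
          (out * sparse[0]).sum() / max(sparse[0].sum(), 1))  # with sparse
```
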

7.
Curr Opin Neurobiol ; 83: 102799, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37844426

ABSTRACT

Sleep is considered to play an essential role in memory reorganization. Despite their importance, classical theoretical models did not address several characteristics of sleep. Here, we review recent theoretical approaches investigating their roles in learning and discuss the possibility that non-rapid eye movement (NREM) sleep selectively consolidates memory while rapid eye movement (REM) sleep reorganizes the representations of memories. We first review the possibility that slow waves during NREM sleep contribute to memory selection through sequential firing patterns and the existence of up and down states. Second, we discuss the role of dreaming during REM sleep in developing neuronal representations. We conclude by discussing how to develop these points further, emphasizing connections to experimental neuroscience and machine learning.


Subjects
Sleep, REM; Sleep; Sleep/physiology; Sleep, REM/physiology
8.
PNAS Nexus ; 2(1): pgac286, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36712943

ABSTRACT

Slow waves during non-rapid eye movement (NREM) sleep reflect the alternating up and down states of cortical neurons; global and local slow waves promote memory consolidation and forgetting, respectively. Furthermore, distinct spike-timing-dependent plasticity (STDP) operates in these up and down states. The contribution of these different plasticity rules to neural information coding and memory reorganization remains unknown. Here, we show that optimal synaptic plasticity for information maximization in a cortical neuron model provides a unified explanation for these phenomena. The model indicates that the optimal synaptic plasticity is biased toward depression as the baseline firing rate increases. This property explains the distinct STDP observed in the up and down states. Furthermore, it explains how global and local slow waves predominantly potentiate and depress synapses, respectively, provided that the background firing rate of excitatory neurons declines with the spatial scale of the waves, as the model predicts. The model thus provides a unifying account of the role of NREM sleep, bridging neural information coding, synaptic plasticity, and memory reorganization.
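
To make the central prediction concrete, the sketch below evaluates a toy STDP window whose depression amplitude grows with the baseline firing rate, so the window's net area tilts further toward depression at up-state-like rates; the exponential window and the linear rate dependence are our illustrative assumptions, not the paper's derived optimal rule.

```python
# Toy rate-biased STDP window: depression amplitude scales with baseline
# firing rate. All constants are illustrative assumptions.
import numpy as np

tau_plus, tau_minus = 20.0, 20.0   # ms, STDP time constants
A_plus = 1.0

def stdp(dt_ms, baseline_rate_hz):
    """Weight change for pre-post lag dt (post minus pre), rate-biased."""
    A_minus = 1.0 + 0.05 * baseline_rate_hz   # more depression at high rates
    if dt_ms > 0:                              # pre before post: potentiation
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)  # post before pre: depression

lags = np.arange(-50.0, 51.0)
for rate in (1.0, 10.0):           # down-state-like vs up-state-like rate
    total = sum(stdp(dt, rate) for dt in lags if dt != 0)
    print(f"{rate:>4} Hz: net area of STDP window = {total:+.2f}")
```
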

9.
Neural Netw ; 157: 471-472, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36442433
10.
Neural Comput ; 35(1): 38-57, 2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36417587

ABSTRACT

A deep neural network is a good task solver, but it is difficult to make sense of its operation, and there is no consensus on how to interpret it. We approach this problem from a new perspective in which an interpretation of task solving is synthesized by quantifying how much, and what, previously unused information is exploited in addition to the information used to solve earlier tasks. After learning several tasks, the network acquires several information partitions, one related to each task. We propose that the network then learns the minimal information partition that supplements the previously learned partitions to represent the input more accurately. This extra partition is associated with unconceptualized information that has not been used in previous tasks; we identify what unconceptualized information is used and quantify its amount. To interpret how the network solves a new task, we quantify as meta-information how much information is extracted from each partition. We implement this framework with the variational information bottleneck technique and test it on the MNIST and CLEVR data sets. The framework is shown to compose information partitions and synthesize experience-dependent interpretations in the form of meta-information. The system progressively improves the resolution of interpretation with new experience by converting part of the unconceptualized information partition into task-related partitions. It can also provide a visual interpretation by imaging the part of previously unconceptualized information needed to solve a new task.


Subjects
Learning; Neural Networks, Computer; Humans
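
Since the framework is implemented with the variational information bottleneck, the sketch below shows a minimal VIB objective in PyTorch: a stochastic encoder, the reparameterization trick, and a cross-entropy term traded against a KL term. Layer sizes and beta are illustrative assumptions, and none of the partition machinery is included.

```python
# A minimal variational information bottleneck (VIB) sketch: an encoder
# outputs a Gaussian q(z|x); the loss trades prediction against a KL term
# measuring how much input information the representation keeps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    def __init__(self, in_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)   # outputs mean and log-var
        self.dec = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def vib_loss(logits, y, mu, logvar, beta=1e-3):
    ce = F.cross_entropy(logits, y)               # task term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return ce + beta * kl                         # bottleneck trade-off

model = VIB()
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
logits, mu, logvar = model(x)
print(vib_loss(logits, y, mu, logvar).item())
```
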
12.
iScience ; 25(12): 105492, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36419854

ABSTRACT

While the principles governing encoding mechanisms in visual perceptual learning (VPL) are well known, findings regarding post-training processing remain disconnected in terms of their underlying mechanisms. Here, we examined the effect of repetitive high-frequency visual stimulation (H-RVS) on VPL in an orientation detection task. Application of H-RVS after a single task session enhanced orientation detection performance (n = 12), whereas a sham condition did not (n = 12). If training-based VPL had previously been established over seven sessions of the detection task, H-RVS instead impaired performance (n = 12). Neither sham (n = 8) nor low-frequency stimulation (L-RVS, n = 12) led to a significant impairment. These findings suggest reversal dynamics in which conditions of elevated network excitation lead to a decrease, rather than a further increase, in signal-related activity. Such reversal dynamics may provide a means to link various findings regarding post-training processing.

13.
Neural Comput ; 33(6): 1433-1468, 2021 May 13.
Article in English | MEDLINE | ID: mdl-34496387

ABSTRACT

For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our proposed theorem, termed the asymptotic linearization theorem, guarantees that applying linear PCA to the inputs reliably extracts a subspace spanned by the linear projections from every hidden source as the major components; thus, projecting the inputs onto their major eigenspace can effectively recover a linear transformation of the hidden sources. Subsequent application of linear ICA can then separate all the true independent hidden sources accurately. Zero-element-wise-error nonlinear BSS is asymptotically attained when the source dimensionality is large and the input dimensionality is sufficiently larger than the source dimensionality. Our proposed theorem is validated analytically and numerically. Moreover, the same computation can be performed using Hebbian-like plasticity rules, implying the biological plausibility of this nonlinear BSS strategy. Our results highlight the utility of linear PCA and ICA for accurately and reliably recovering nonlinearly mixed sources, and they suggest the importance of employing sensors with sufficient dimensionality to identify the true hidden sources of real-world data.
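
The cascade itself is easy to state in code. The sketch below generates high-dimensional sensory inputs from a few hidden sources via a random nonlinear mapping and then applies linear PCA followed by linear ICA; the dimensions and the tanh mixture are illustrative assumptions, and the theorem's guarantees hold only asymptotically in dimensionality.

```python
# A minimal PCA -> ICA cascade on nonlinearly mixed sources, using scikit-learn.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(3)
n_src, n_obs, n_samples = 5, 500, 5000

S = rng.laplace(size=(n_samples, n_src))          # independent hidden sources
A = rng.normal(size=(n_src, n_obs)) / np.sqrt(n_src)
B = rng.normal(size=(n_src, n_obs)) / np.sqrt(n_src)
X = np.tanh(S @ A) + S @ B                        # nonlinear + linear mixing

Z = PCA(n_components=n_src).fit_transform(X)      # project onto major eigenspace
S_hat = FastICA(n_components=n_src, random_state=0).fit_transform(Z)

# Recovery is up to permutation and scaling: check best-matching correlations.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:n_src, n_src:])
print(C.max(axis=1))                              # near 1 if sources recovered
```
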

14.
Curr Opin Neurobiol ; 70: 34-42, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34303124

ABSTRACT

Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, and their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational work on the roles of these intrinsic synaptic dynamics. We first present the possibility that neuronal networks can sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics may be more than mere noise to be withstood: they may actively improve information processing in the brain.


Subjects
Models, Neurological; Synapses; Brain/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology
15.
Nat Rev Neurosci ; 22(7): 407-422, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34050339

ABSTRACT

In the brain, most synapses are formed on minute protrusions known as dendritic spines. Unlike their artificial intelligence counterparts, spines are not merely tuneable memory elements: they also embody algorithms that implement the brain's ability to learn from experience and cope with new challenges. Importantly, they exhibit structural dynamics that depend on activity, excitatory input and inhibitory input (synaptic plasticity or 'extrinsic' dynamics) and dynamics independent of activity ('intrinsic' dynamics), both of which are subject to neuromodulatory influences and reinforcers such as dopamine. Here we succinctly review extrinsic and intrinsic dynamics, compare these with parallels in machine learning where they exist, describe the importance of intrinsic dynamics for memory management and adaptation, and speculate on how disruption of extrinsic and intrinsic dynamics may give rise to mental disorders. Throughout, we also highlight algorithmic features of spine dynamics that may be relevant to future artificial intelligence developments.


Subjects
Brain/physiology; Dendritic Spines/physiology; Mental Disorders/physiopathology; Models, Neurological; Neural Networks, Computer; Algorithms; Animals; Artificial Intelligence; Brain/cytology; Dendritic Spines/ultrastructure; Dopamine/physiology; Humans; Machine Learning; Memory, Short-Term/physiology; Mental Processes/physiology; Neuronal Plasticity; Neurotransmitter Agents/physiology; Optogenetics; Receptors, Dopamine/physiology; Reward; Species Specificity; Synapses/physiology
17.
PLoS Comput Biol ; 17(2): e1008700, 2021 02.
Article in English | MEDLINE | ID: mdl-33561118

ABSTRACT

Traveling waves are commonly observed across the brain. While previous studies have suggested a role for traveling waves in learning, the mechanism remains unclear. We adopted a computational approach to investigate the effect of traveling waves on synaptic plasticity. Our results indicate that traveling waves facilitate the learning of poly-synaptic network paths when combined with a reward-dependent local synaptic plasticity rule. We also demonstrate that traveling waves expedite finding the shortest paths and learning nonlinear input/output mappings, such as the exclusive-or (XOR) function.


Subjects
Brain/physiology; Models, Neurological; Neuronal Plasticity; Neurons/physiology; Animals; Computational Biology; Computer Simulation; Dopamine/metabolism; Humans; Learning; Memory; Nonlinear Dynamics; Signal Transduction; Synapses/physiology
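
A minimal sketch of the pairing of ingredients this abstract describes: a wave of activity sweeps across a network in sequence, co-activations are logged in an eligibility trace, and a global reward signal converts the trace into weight changes along the swept path. The layout, constants, and the always-rewarded trial are illustrative assumptions, not the paper's network.

```python
# Reward-gated Hebbian learning driven by a traveling wave of activity.
import numpy as np

rng = np.random.default_rng(4)
N = 20
W = rng.random((N, N)) * 0.1           # weak initial all-to-all weights
eta, decay = 0.5, 0.9                  # learning rate, trace decay

for trial in range(200):
    trace = np.zeros((N, N))
    active_prev = np.zeros(N)
    for t in range(N):                 # wave sweeps across the network
        active = np.zeros(N)
        active[t] = 1.0
        trace = decay * trace + np.outer(active, active_prev)  # pre -> post
        active_prev = active
    reward = 1.0                       # assume the swept path was rewarded
    W += eta * reward * trace          # reward-gated Hebbian update
    W = np.clip(W, 0.0, 1.0)

print(np.diag(W, k=-1).mean(), W.mean())  # path synapses vs average synapse
```
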
18.
Phys Rev Lett ; 125(2): 028101, 2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32701351

ABSTRACT

We propose an analytically tractable neural connectivity model with power-law distributed synaptic strengths. When threshold neurons with a biologically plausible number of incoming connections are considered, our model features a continuous transition to chaos and can reproduce biologically relevant low activity levels and scale-free avalanches, i.e., bursts of activity with power-law distributions of sizes and lifetimes. In contrast, the Gaussian counterpart exhibits a discontinuous transition to chaos and thus cannot be poised near the edge of chaos. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results suggest that heavy-tailed synaptic distributions may form a weakly informative sparse-connectivity prior that can be useful in biological and artificial adaptive systems.


Subjects
Models, Neurological; Nerve Net/physiology; Synapses/physiology; Animals; Brain/anatomy & histology; Brain/physiology; Computer Simulation; Nerve Net/anatomy & histology; Neural Pathways/anatomy & histology; Neural Pathways/physiology; Neurons/cytology; Neurons/physiology; Nonlinear Dynamics
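
The model class is straightforward to simulate. The sketch below runs sparse binary threshold networks whose synaptic magnitudes are drawn from a heavy-tailed Pareto distribution versus a Gaussian control; the tail index, threshold, and connectivity are illustrative assumptions, and no claim is made that these exact values sit at the transition the paper analyzes.

```python
# Sparse binary threshold networks with heavy-tailed vs Gaussian weights.
import numpy as np

rng = np.random.default_rng(5)
N, K, theta, T = 1000, 100, 0.1, 500   # neurons, in-degree, threshold, steps

def make_weights(heavy_tailed):
    W = np.zeros((N, N))
    for i in range(N):
        pre = rng.choice(N, size=K, replace=False)
        if heavy_tailed:
            mag = (1 - rng.random(K)) ** (-1 / 1.5)  # Pareto magnitudes (tail index 1.5)
        else:
            mag = np.abs(rng.normal(size=K))
        W[i, pre] = mag * rng.choice([-1.0, 1.0], size=K)
    return W / np.sqrt(K)

for tail in (True, False):
    W = make_weights(tail)
    s = (rng.random(N) < 0.1).astype(float)
    acts = []
    for _ in range(T):
        s = (W @ s > theta).astype(float)   # binary threshold dynamics
        acts.append(s.mean())
    print("heavy-tailed" if tail else "gaussian", np.mean(acts[100:]))
```
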
19.
Phys Rev E ; 100(3-1): 032110, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31639919

ABSTRACT

A new model of search based on stochastic resetting is introduced, wherein the rate of resets depends explicitly on the time elapsed since the beginning of the process. It is shown that a rate inversely proportional to time leads to paradoxical diffusion, which mixes self-similarity and linear growth of the mean-square displacement with nonlocality and a non-Gaussian propagator. It is argued that such a resetting protocol offers a general and efficient search-boosting method that need not be optimized with respect to the scale of the underlying search problem (e.g., the distance to the goal) and is not very sensitive to other search parameters. Both subdiffusive and superdiffusive regimes of mean-squared-displacement scaling are demonstrated with more general rate functions.
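
A minimal sketch of the process, assuming one-dimensional Brownian motion reset to the origin at the time-dependent rate r(t) = a/t; the diffusion constant, a, and the time grid are illustrative choices.

```python
# Diffusion with a time-decaying resetting rate r(t) = a / t.
import numpy as np

rng = np.random.default_rng(6)
n_walkers, n_steps, dt, D, a = 5000, 2000, 0.01, 1.0, 1.0

x = np.zeros(n_walkers)
msd = np.empty(n_steps)
for k in range(1, n_steps + 1):
    t = k * dt
    x += np.sqrt(2 * D * dt) * rng.normal(size=n_walkers)   # diffusion step
    reset = rng.random(n_walkers) < a / t * dt               # rate r(t) = a/t
    x[reset] = 0.0                                           # reset to origin
    msd[k - 1] = np.mean(x ** 2)

# With r(t) = a/t the MSD keeps growing roughly linearly (up to a prefactor),
# unlike constant-rate resetting, which pins the MSD at a plateau.
print(msd[n_steps // 2] / (n_steps // 2 * dt), msd[-1] / (n_steps * dt))
```
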

20.
Nat Commun ; 10(1): 4250, 2019 Sep 18.
Article in English | MEDLINE | ID: mdl-31534122

ABSTRACT

Sense of agency (SoA) refers to the experience or belief that one's own actions caused an external event. Here we present a model of SoA in the framework of optimal Bayesian cue integration with mutually involved principles, namely the reliability of the action and outcome sensory signals, their consistency with the causation of the outcome by the action, and the prior belief in causation. We use our Bayesian model to explain the intentional binding effect, which is regarded as a reliable indicator of SoA. Our model explains temporal binding in both self-intended and unintentional actions, suggesting that intentionality is not strictly necessary given high confidence in the action causing the outcome: if the sensory cues are reliable, SoA can emerge even for unintended actions. Our formal model therefore posits a precision-dependent causal agency.


Subjects
Causality; Intention; Internal-External Control; Models, Psychological; Bayes Theorem; Humans; Psychophysics; Reproducibility of Results
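
As a rough illustration of precision-dependent binding, the sketch below fuses the sensed action and outcome times by their reliabilities and mixes the fused estimate back with the raw signals according to the prior belief in causation; this shrinkage form is our simplification, not the paper's full model.

```python
# Precision-weighted cue integration applied to intentional binding.
import numpy as np

def perceived_times(t_action, t_outcome, sd_action, sd_outcome, p_causal):
    """Return binding-shifted estimates of action and outcome times (ms)."""
    w_a = (1 / sd_action**2) / (1 / sd_action**2 + 1 / sd_outcome**2)
    fused = w_a * t_action + (1 - w_a) * t_outcome   # common-cause estimate
    # Mix the fused estimate with the raw signals by the causal prior.
    est_a = p_causal * fused + (1 - p_causal) * t_action
    est_o = p_causal * fused + (1 - p_causal) * t_outcome
    return est_a, est_o

# Action at 0 ms, tone at 250 ms: a strong causal prior yields the classic
# pattern of the action shifting later and the outcome shifting earlier.
print(perceived_times(0.0, 250.0, sd_action=20.0, sd_outcome=60.0, p_causal=0.9))
```
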