1.
ArXiv; 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38106458

ABSTRACT

Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., to track one's spatial position by integrating self-velocity signals. In previous work [27], we challenged this path integration hypothesis by showing that deep neural networks trained to path integrate almost always succeed at the task, but almost never learn grid-like tuning unless it is separately inserted by researchers via mechanisms unrelated to path integration. In this work, we restate the key evidence substantiating these insights, then address a response to [27] by authors of one of the path integration hypothesis papers [32]. First, we show that the response misinterprets our work, indirectly confirming our points. Second, we evaluate the response's preferred "unified theory for the origin of grid cells" in trained deep path integrators [31, 33, 34] and show that it is at best "occasionally suggestive," not exact or comprehensive. We finish by considering why assessing model quality by regressing biological neural activity against the activity of deep networks [23] can lead to the wrong conclusions.
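As a concrete illustration of the path-integration task these models are trained on, here is a minimal sketch (not the authors' code; the network size, trajectory statistics, and training settings are illustrative assumptions): a vanilla RNN receives 2-D self-velocity signals and is trained to report the integrated position.

```python
# Minimal path-integration setup: train an RNN to turn a stream of
# self-velocity signals into an estimate of current position.
# All sizes and hyperparameters below are illustrative, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, BATCH, HIDDEN = 50, 64, 128        # trajectory length, batch size, units

class PathIntegrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, 2)    # decode (x, y) position

    def forward(self, vel):                    # vel: (batch, T, 2)
        h, _ = self.rnn(vel)
        return self.readout(h)                 # position estimate at each step

model = PathIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    vel = 0.1 * torch.randn(BATCH, T, 2)       # random velocity sequences
    pos = torch.cumsum(vel, dim=1)             # ground truth: integrated path
    loss = ((model(vel) - pos) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The point at issue is that networks like this reliably learn to path integrate, yet grid-like tuning in the hidden units does not generically emerge from the task alone.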

2.
Neural Comput; 35(11): 1850-1869, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37725708

ABSTRACT

Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1% connectivity). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs), which use task-agnostic, predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience, with a commonly used set of tasks, the 20-Cog-tasks (Yang et al., 2019). We show through reductio ad absurdum that the 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand; these tasks therefore fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multitask battery, Mod-Cog, consisting of up to 132 tasks, which expands the number and complexity of the 20-Cog-tasks roughly sevenfold. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with an optimal sparsity train faster and with better data efficiency than fully connected networks.
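To make the masking construction concrete, here is a minimal sketch of a task-agnostic locality mask of the kind described (the sheet size and connection radius are illustrative assumptions, not the paper's exact settings): units sit on a 2-D sheet, and a recurrent weight is permitted only between units within a fixed distance.

```python
# Build a distance-based locality mask for a recurrent weight matrix.
# Sheet size and radius are illustrative; the mask is fixed before
# training and independent of the tasks (task-agnostic).
import numpy as np

N_SIDE = 16          # 16 x 16 = 256 recurrent units
RADIUS = 2.0         # maximum connection distance on the sheet

coords = np.array([(i, j) for i in range(N_SIDE) for j in range(N_SIDE)])
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
mask = (dist <= RADIUS).astype(float)          # 1 where a synapse is allowed

rng = np.random.default_rng(0)
W = mask * rng.standard_normal((N_SIDE**2, N_SIDE**2))
W /= np.sqrt(mask.sum(axis=1, keepdims=True))  # scale by each unit's in-degree
print(f"sparsity: {mask.mean():.1%}")          # ~5% with these values
```

With these illustrative values, each interior unit keeps only its 13 nearest connections (including an autapse), so the recurrent matrix sits in roughly the sparsity regime the paper studies.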

3.
Commun Biol; 6(1): 841, 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37580527

ABSTRACT

Rules of thumb are behavioral algorithms that approximate optimal behavior while lowering cognitive and sensory costs. One way to reduce these costs is to simplify the representation of the environment: while the theoretically optimal behavior may depend on many environmental variables, a rule of thumb may use a smaller set of variables and still perform reasonably well. Experimentally demonstrating this simplification requires an exhaustive mapping of all relevant combinations of several environmental parameters, which we performed for Caenorhabditis elegans foraging by systematically covering combinations of food density (across 4 orders of magnitude) and food type (across 12 bacterial strains). We found that the worms' response is dominated by a single environmental variable: food density, measured as the number of bacteria per unit of surface. They disregard other factors, such as biomass content or bacterial strain. We also measured experimentally the impact of each type of food on fitness, finding that the rule is near-optimal and therefore constitutes a rule of thumb that leverages the most informative environmental variable. These results set the stage for further investigations into the underlying genetic and neural mechanisms governing this simplification, and into its role in the evolution of decision-making strategies.
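As a toy illustration of the simplification being tested (hypothetical numbers and functional forms throughout; nothing below is the paper's data or fitted model), compare a notional optimal policy that weighs both food density and strain quality with a rule of thumb that reads out density alone:

```python
# Toy comparison: a full policy using (density, quality) versus a
# density-only rule of thumb. All quantities here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
density = 10 ** rng.uniform(0, 4, size=1000)  # bacteria per unit surface,
                                              # spanning 4 orders of magnitude
quality = rng.uniform(0.9, 1.1, size=1000)    # per-strain value (assumed narrow)

def optimal_response(d, q):    # hypothetical optimal policy
    return np.log10(d) * q

def rule_of_thumb(d):          # density-only simplification
    return np.log10(d)

full = optimal_response(density, quality)
rot = rule_of_thumb(density)
unexplained = np.mean((full - rot) ** 2) / np.var(full)
print(f"behavioural variance the rule fails to capture: {unexplained:.1%}")
```

When the ignored variables contribute little to the optimal response, a single-variable rule loses almost nothing, which is the sense in which the worms' density-only rule can be near-optimal.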


Subjects
Caenorhabditis elegans, Animals, Caenorhabditis elegans/physiology, Feeding Behavior, Bacteria
4.
Nat Rev Neurosci; 23(12): 744-766, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36329249

ABSTRACT

In this Review, we describe the singular success of attractor neural network models in explaining how the brain maintains persistent activity states for working memory, corrects errors, and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous-attractor dynamics have been concretely and rigorously identified; it is therefore now possible to state conclusively that the brain constructs and uses such systems for computation. Finally, we highlight recent theoretical advances in understanding how the fundamental trade-offs between robustness and capacity, and between structure and flexibility, can be overcome by reusing and recombining the same set of modular attractors for multiple functions, so that together they produce representations that are structurally constrained and robust, yet high-capacity and flexible.
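As a minimal concrete instance of the continuous-attractor dynamics discussed here, the sketch below simulates a textbook ring attractor (illustrative parameters; not a specific model from the Review): cosine-shaped local excitation plus uniform inhibition lets a bump of activity persist after the transient cue that created it is removed.

```python
# Textbook ring attractor: local excitation, global inhibition, and a
# saturating nonlinearity sustain a bump of activity without input.
# All parameters are illustrative.
import numpy as np

N, TAU, DT, STEPS = 128, 10.0, 1.0, 600
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J1, J0 = 3.0, 1.0                              # excitation / inhibition gains
W = (2.0 / N) * (J1 * np.cos(theta[:, None] - theta[None, :]) - J0)

r = np.zeros(N)
for t in range(STEPS):
    cue = 0.5 * np.cos(theta - np.pi) if t < 100 else 0.0  # transient cue at pi
    r += (DT / TAU) * (-r + np.tanh(W @ r + cue))          # rate dynamics

print(f"bump centre after cue offset: {theta[np.argmax(r)]:.2f} rad (cue at 3.14)")
```

The position of the bump along the ring is the continuously valued quantity being stored; its persistence after cue offset is the signature of the working-memory and cue-integration computations described above.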


Subjects
Brain, Neurons, Humans, Neural Networks (Computer), Short-Term Memory, Models (Neurological)