Results 1 - 14 of 14
1.
Neuron ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38729151

ABSTRACT

The property of mixed selectivity has been discussed at a computational level and offers a strategy to maximize computational power by adding versatility to the functional role of each neuron. Here, we offer a biologically grounded implementational-level mechanistic explanation for mixed selectivity in neural circuits. We define pure, linear, and nonlinear mixed selectivity and discuss how these response properties can be obtained in simple neural circuits. Neurons that respond to multiple, statistically independent variables display mixed selectivity. If their activity can be expressed as a weighted sum, then they exhibit linear mixed selectivity; otherwise, they exhibit nonlinear mixed selectivity. Neural representations based on diverse nonlinear mixed selectivity are high dimensional; hence, they confer enormous flexibility to a simple downstream readout neural circuit. However, a simple neural circuit cannot possibly encode all possible mixtures of variables simultaneously, as this would require a combinatorially large number of mixed selectivity neurons. Gating mechanisms like oscillations and neuromodulation can solve this problem by dynamically selecting which variables are mixed and transmitted to the readout.
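The three response classes defined in this abstract can be illustrated with a small synthetic sketch (the variables and the particular nonlinearity are illustrative assumptions, not the authors' model): two statistically independent task variables drive a pure, a linearly mixed, and a nonlinearly mixed model neuron, and a linear regression distinguishes the last two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two statistically independent task variables (e.g. stimulus and context).
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)

pure = 2.0 * x                      # responds to a single variable
linear = 1.5 * x + 0.5 * y          # weighted sum: linear mixed selectivity
nonlinear = np.maximum(x * y, 0)    # rectified product: nonlinear mixing

# A linear model in (x, y) explains the linearly mixed neuron perfectly but
# leaves the nonlinearly mixed neuron's variance mostly unexplained.
A = np.column_stack([x, y, np.ones_like(x)])
def linear_r2(r):
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return 1 - (r - A @ coef).var() / r.var()

print(round(linear_r2(linear), 3))     # 1.0
print(round(linear_r2(nonlinear), 3))  # near 0
```

The same weighted-sum test separates the two definitions in the abstract: if a neuron's activity is a weighted sum of the variables, the linear fit is exact; otherwise the residual variance signals nonlinear mixing.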

2.
Curr Opin Neurobiol ; 77: 102644, 2022 12.
Article in English | MEDLINE | ID: mdl-36332415

ABSTRACT

The firing rates of individual neurons displaying mixed selectivity are modulated by multiple task variables. When mixed selectivity is nonlinear, it confers an advantage by generating a high-dimensional neural representation that can be flexibly decoded by linear classifiers. Although the advantages of this coding scheme are well accepted, the means of designing an experiment and analyzing the data to test for and characterize mixed selectivity remain unclear. With the growing number of large datasets collected during complex tasks, mixed selectivity is increasingly observed and is challenging to interpret correctly. We review recent approaches for analyzing and interpreting neural datasets and clarify the theoretical implications of mixed selectivity in the variety of forms that have been reported in the literature. We also aim to provide a practical guide for determining whether a neural population has linear or nonlinear mixed selectivity and whether this mixing leads to a categorical or category-free representation.
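One common practical test hinted at here can be sketched on synthetic data (the effect sizes and variable names are made up for illustration): compare an additive regression model of a cell's firing rate with one that adds an interaction term; a large R^2 gain signals nonlinear mixing.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 800
a = rng.integers(0, 2, n).astype(float)   # binary task variable A (e.g. stimulus)
b = rng.integers(0, 2, n).astype(float)   # binary task variable B (e.g. context)
# Hypothetical firing rates with a genuine interaction (the a*b term).
rate = 2.0 * a + 1.0 * b + 3.0 * a * b + 0.5 * rng.standard_normal(n)

def r2(design, r):
    coef, *_ = np.linalg.lstsq(design, r, rcond=None)
    return 1 - (r - design @ coef).var() / r.var()

ones = np.ones(n)
additive = np.column_stack([a, b, ones])       # linear mixed selectivity only
full = np.column_stack([a, b, a * b, ones])    # adds the interaction term

gain = r2(full, rate) - r2(additive, rate)
print(round(gain, 3))   # clearly positive: this model cell mixes nonlinearly
```

A cell whose rate is a pure weighted sum of the variables would show essentially no R^2 gain from the interaction term.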


Subject(s)
Models, Neurological , Neurons , Neurons/physiology
3.
Nat Commun ; 12(1): 1417, 2021 03 03.
Article in English | MEDLINE | ID: mdl-33658520

ABSTRACT

Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity, i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
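The linear-decodability measure mentioned here can be illustrated on a toy stand-in for network activity (a fixed nonlinear embedding of a circular latent variable, assumed for illustration rather than a trained RNN): ridge-regress a latent component from the activity and report R^2.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "network activity": a 1-D circular latent variable drives
# 100 units through a fixed nonlinear embedding.
latent = rng.uniform(0, 2 * np.pi, 500)
features = np.column_stack([np.cos(latent), np.sin(latent)])
W = rng.standard_normal((2, 100))
activity = np.tanh(features @ W)

# Linear decodability: ridge-regress a latent component from the activity;
# a high R^2 means the latent structure is linearly accessible.
def ridge_r2(X, y, lam=1e-2):
    Xc, yc = X - X.mean(0), y - y.mean()
    coef = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    return 1 - (yc - Xc @ coef).var() / yc.var()

print(round(ridge_r2(activity, np.cos(latent)), 3))
```

In the paper's setting the same regression would be applied to hidden states of the trained predictive network rather than to this hand-built embedding.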

4.
Cell ; 183(4): 954-967.e21, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33058757

ABSTRACT

The curse of dimensionality plagues models of reinforcement learning and decision making. The process of abstraction solves this by constructing variables describing features shared by different instances, reducing dimensionality and enabling generalization in novel situations. Here, we characterized neural representations in monkeys performing a task described by different hidden and explicit variables. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training, which requires a particular geometry of neural representations. Neural ensembles in prefrontal cortex, hippocampus, and simulated neural networks simultaneously represented multiple variables in a geometry that reflected abstraction but still allowed a linear classifier to decode a large number of other variables (high shattering dimensionality). Furthermore, this geometry changed in relation to task events and performance. These findings elucidate how the brain and artificial systems represent variables in an abstract format while preserving the advantages conferred by high shattering dimensionality.
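The operational definition of abstraction used here, decoder generalization across held-out conditions, can be sketched on a synthetic geometry (the clusters, noise level, and coding axes are illustrative assumptions): train a linear decoder for one variable on conditions where a second variable is fixed, then test it on conditions where the second variable changes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "abstract" geometry: 4 task conditions defined by binary variables
# (A, B); A is coded along one axis and B along an orthogonal one, so the
# representation supports cross-condition generalization.
conds = {(a, b): np.array([2.0 * a, 2.0 * b, 0.0]) for a in (0, 1) for b in (0, 1)}
def sample(c, n=100):
    return conds[c] + 0.3 * rng.standard_normal((n, 3))

# Train a linear decoder for variable A on conditions with B = 0 ...
Xtr = np.vstack([sample((0, 0)), sample((1, 0))])
ytr = np.r_[np.zeros(100), np.ones(100)]
w = np.linalg.lstsq(np.column_stack([Xtr, np.ones(200)]), 2 * ytr - 1, rcond=None)[0]

# ... and test it on conditions with B = 1, never seen during training.
Xte = np.vstack([sample((0, 1)), sample((1, 1))])
yte = np.r_[np.zeros(100), np.ones(100)]
pred = (np.column_stack([Xte, np.ones(200)]) @ w) > 0
ccgp = (pred == yte.astype(bool)).mean()
print(ccgp)   # near 1.0: the decoder generalizes across conditions
```

A representation with randomly placed condition clusters would yield chance-level generalization here even though each variable is decodable within the training conditions.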


Subject(s)
Hippocampus/anatomy & histology , Prefrontal Cortex/anatomy & histology , Animals , Behavior, Animal , Brain Mapping , Computer Simulation , Hippocampus/physiology , Learning , Macaca mulatta , Male , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Prefrontal Cortex/physiology , Reinforcement, Psychology , Task Performance and Analysis
5.
Front Neurosci ; 13: 753, 2019.
Article in English | MEDLINE | ID: mdl-31417340

ABSTRACT

Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they employ analog circuitry to compute matrix-vector products in constant time, irrespective of the size of the matrix. However, ConvNets map very unfavorably onto analog arrays when done in a straightforward manner, because kernel matrices are typically small and the constant-time operation needs to be sequentially iterated a large number of times. Here, we propose to parallelize the training by replicating the kernel matrix of a convolution layer on distinct analog arrays, and randomly divide parts of the compute among them. With this modification, analog arrays execute ConvNets with a large acceleration factor that is proportional to the number of kernel matrices used per layer (here tested 16-1024). Although the resulting architecture has more free parameters, we show analytically and in numerical experiments that it is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that emerging hardware architectures that feature analog arrays for fast matrix-vector multiplication are not suitable for ConvNets.
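The replication idea can be sketched in a few lines (a toy patch-by-patch loop; the replica count, kernel sizes, and routing are illustrative assumptions, and the replicas here stay identical rather than being trained independently as in the paper): each image patch is routed at random to one copy of the kernel matrix, so the constant-time matrix-vector products could run on distinct arrays in parallel.

```python
import numpy as np

rng = np.random.default_rng(4)

# One 3x3 kernel matrix (8 output channels) replicated onto k = 4 "arrays".
k, ksize, cout = 4, 3, 8
kernels = np.repeat(rng.standard_normal((1, cout, ksize * ksize)), k, axis=0)

def conv_replicated(img):
    H, W = img.shape
    out = np.zeros((H - ksize + 1, W - ksize + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + ksize, j:j + ksize].ravel()
            r = rng.integers(k)          # random replica for this patch
            out[i, j] = kernels[r] @ patch
    return out

img = rng.standard_normal((8, 8))
res = conv_replicated(img)
print(res.shape)   # (6, 6, 8)
```

Because the replicas start identical, the output equals a plain convolution; during training each replica would see a random subset of patches, which is where the implicit regularization described in the abstract would arise.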

6.
Nat Neurosci ; 21(3): 415-423, 2018 03.
Article in English | MEDLINE | ID: mdl-29459764

ABSTRACT

The social brain hypothesis posits that dedicated neural systems process social information. In support of this, neurophysiological data have shown that some brain regions are specialized for representing faces. It remains unknown, however, whether distinct anatomical substrates also represent more complex social variables, such as the hierarchical rank of individuals within a social group. Here we show that the primate amygdala encodes the hierarchical rank of individuals in the same neuronal ensembles that encode the rewards associated with nonsocial stimuli. By contrast, orbitofrontal and anterior cingulate cortices lack strong representations of hierarchical rank while still representing reward values. These results challenge the conventional view that dedicated neural systems process social information. Instead, information about hierarchical rank, which contributes to the assessment of the social value of individuals within a group, is linked in the amygdala to representations of rewards associated with nonsocial stimuli.


Subject(s)
Amygdala/physiology , Hierarchy, Social , Reward , Animals , Conditioning, Operant/physiology , Macaca mulatta , Male , Neurons/physiology , Photic Stimulation
7.
J Neurosci ; 37(45): 11021-11036, 2017 11 08.
Article in English | MEDLINE | ID: mdl-28986463

ABSTRACT

Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training.

SIGNIFICANCE STATEMENT

The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
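A minimal Hebbian sketch conveys the mechanism's flavor (an Oja-style rule on synthetic inputs; this is illustrative only and does not reproduce either of the paper's two tested rules): starting from random feedforward weights, the update dw = eta * pre * post, with weight normalization, pulls the weights toward a recurring input combination, reshaping the cell's selectivity.

```python
import numpy as np

rng = np.random.default_rng(5)

n_in, eta = 20, 0.01
w = rng.standard_normal(n_in) * 0.1        # random feedforward weights
pattern = rng.standard_normal(n_in)        # a recurring combination of inputs

for _ in range(200):
    pre = pattern + 0.3 * rng.standard_normal(n_in)   # noisy presentation
    post = w @ pre                                    # linear response
    w += eta * pre * post                             # Hebbian update
    w /= np.linalg.norm(w)                            # keep weights bounded

# After learning, the weights align with the recurring input combination.
alignment = abs(w @ pattern) / np.linalg.norm(pattern)
print(round(alignment, 2))
```

In the paper's circuit model, analogous activity-dependent updates on initially random connectivity are what raise mixed selectivity to the level seen in the recorded data.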


Subject(s)
Machine Learning , Neural Networks, Computer , Prefrontal Cortex/physiology , Algorithms , Animals , Cluster Analysis , Cognition/physiology , Computer Simulation , Female , Learning/physiology , Macaca mulatta , Male , Models, Neurological , Neurons , Prefrontal Cortex/cytology
8.
Neural Comput ; 28(10): 2011-44, 2016 10.
Article in English | MEDLINE | ID: mdl-27557100

ABSTRACT

Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that of conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.


Subject(s)
Energy Metabolism , Neural Networks, Computer , Semiconductors , Support Vector Machine , Energy Metabolism/physiology , Humans , Semiconductors/trends , Support Vector Machine/trends
9.
Curr Opin Neurobiol ; 37: 66-74, 2016 04.
Article in English | MEDLINE | ID: mdl-26851755

ABSTRACT

Neurons often respond to diverse combinations of task-relevant variables. This form of mixed selectivity plays an important computational role which is related to the dimensionality of the neural representations: high-dimensional representations with mixed selectivity allow a simple linear readout to generate a huge number of different potential responses. In contrast, neural representations based on highly specialized neurons are low dimensional and they preclude a linear readout from generating several responses that depend on multiple task-relevant variables. Here we review the conceptual and theoretical framework that explains the importance of mixed selectivity and the experimental evidence that recorded neural representations are high-dimensional. We end by discussing the implications for the design of future experiments.
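The computational advantage described here is the classic XOR argument, which can be verified directly (the feature coding below is an illustrative assumption): a linear readout of purely selective units cannot produce the XOR of two task variables, but adding a single nonlinearly mixed unit makes it linearly separable.

```python
import numpy as np

# Two binary task variables and their XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
xor = np.array([0, 1, 1, 0], float)

def linearly_separable(F, labels, epochs=1000):
    # Perceptron learning: converges iff the labeling is linearly separable.
    Fb = np.column_stack([F, np.ones(len(F))])
    t = 2 * labels - 1
    w = np.zeros(Fb.shape[1])
    for _ in range(epochs):
        errs = 0
        for f, y in zip(Fb, t):
            if y * (w @ f) <= 0:
                w += y * f
                errs += 1
        if errs == 0:
            return True
    return False

print(linearly_separable(X, xor))                 # False: pure/linear code
mixed = np.column_stack([X, X[:, 0] * X[:, 1]])   # add a nonlinear mixture
print(linearly_separable(mixed, xor))             # True: now separable
```

Diverse nonlinear mixtures extend the same trick to every input-output function, which is what makes high-dimensional representations so flexible for a downstream linear readout.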


Subject(s)
Cognition/physiology , Models, Neurological , Neurons/physiology , Animals , Humans
10.
Nature ; 522(7556): 309-14, 2015 Jun 18.
Article in English | MEDLINE | ID: mdl-26053122

ABSTRACT

Spatial working memory, the caching of behaviourally relevant spatial cues on a timescale of seconds, is a fundamental constituent of cognition. Although the prefrontal cortex and hippocampus are known to contribute jointly to successful spatial working memory, the anatomical pathway and temporal window for the interaction of these structures critical to spatial working memory have not yet been established. Here we find that direct hippocampal-prefrontal afferents are critical for encoding, but not for maintenance or retrieval, of spatial cues in mice. These cues are represented by the activity of individual prefrontal units in a manner that is dependent on hippocampal input only during the cue-encoding phase of a spatial working memory task. Successful encoding of these cues appears to be mediated by gamma-frequency synchrony between the two structures. These findings indicate a critical role for the direct hippocampal-prefrontal afferent pathway in the continuous updating of task-related spatial information during spatial working memory.


Subject(s)
Hippocampus/physiology , Memory, Short-Term/physiology , Prefrontal Cortex/physiology , Space Perception/physiology , Spatial Memory/physiology , Action Potentials , Afferent Pathways/physiology , Animals , Cues , Hippocampus/cytology , Male , Mice , Models, Neurological , Optogenetics , Prefrontal Cortex/cytology
11.
Nature ; 497(7451): 585-90, 2013 May 30.
Article in English | MEDLINE | ID: mdl-23685452

ABSTRACT

Single-neuron activity in the prefrontal cortex (PFC) is tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and therefore difficult to interpret. We analysed the neural activity recorded in monkeys during an object sequence memory task to identify a role of mixed selectivity in subserving the cognitive functions ascribed to the PFC. We show that mixed selectivity neurons encode distributed information about all task-relevant aspects. Each aspect can be decoded from the population of neurons even when single-cell selectivity to that aspect is eliminated. Moreover, mixed selectivity offers a significant computational advantage over specialized responses in terms of the repertoire of input-output functions implementable by readout neurons. This advantage originates from the highly diverse nonlinear selectivity to mixtures of task-relevant variables, a signature of high-dimensional neural representations. Crucially, this dimensionality is predictive of animal behaviour as it collapses in error trials. Our findings recommend a shift of focus for future studies from neurons that have easily interpretable response tuning to the widely observed, but rarely analysed, mixed selectivity neurons.
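The link between nonlinear mixing and dimensionality can be made concrete by counting the input-output functions a linear readout can realize (a shattering-style count on a toy design; the least-squares readout used below is a sufficient but not exhaustive separability check, so the low-dimensional count is a lower bound):

```python
import numpy as np
from itertools import product

# 8 task conditions from three binary variables in +/-1 coding.
conds = np.array(list(product([-1, 1], repeat=3)), float)
a, b, c = conds.T

low = np.column_stack([conds, np.ones(8)])   # pure/linear code: 3 vars + bias
# Nonlinear mixing: add all pairwise and triple products (monomial basis).
high = np.column_stack([np.ones(8), a, b, c, a * b, a * c, b * c, a * b * c])

def realizable(F):
    count = 0
    for bits in product([-1.0, 1.0], repeat=8):   # every labeling of conditions
        t = np.array(bits)
        w, *_ = np.linalg.lstsq(F, t, rcond=None)
        if np.all(np.sign(F @ w) == t):           # readout reproduces labeling
            count += 1
    return count

print(realizable(low), realizable(high))   # the mixed code realizes all 256
```

The fully mixed code spans every function on the 8 conditions, whereas the purely selective code realizes only a fraction of them; the collapse of dimensionality in error trials corresponds to losing columns of this expansion.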


Subject(s)
Cognition/physiology , Haplorhini/physiology , Models, Neurological , Neurons/physiology , Prefrontal Cortex/cytology , Prefrontal Cortex/physiology , Animals , Behavior, Animal/physiology , Memory/physiology , Single-Cell Analysis
12.
J Neurosci ; 33(9): 3844-56, 2013 Feb 27.
Article in English | MEDLINE | ID: mdl-23447596

ABSTRACT

Intelligent behavior requires integrating several sources of information in a meaningful fashion, be it context with stimulus or shape with color and size. This requires the underlying neural mechanism to respond in a different manner to similar inputs (discrimination), while maintaining a consistent response for noisy variations of the same input (generalization). We show that neurons that mix information sources via random connectivity can form an easy-to-read representation of input combinations. Using analytical and numerical tools, we show that the coding level or sparseness of these neurons' activity controls a trade-off between generalization and discrimination, with the optimal level depending on the task at hand. In all realistic situations that we analyzed, the optimal fraction of inputs to which a neuron responds is close to 0.1. Finally, we predict a relation between a measurable property of the neural representation and task performance.
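The coding-level trade-off can be demonstrated on synthetic inputs (sizes, noise level, and the overlap measure below are illustrative assumptions): random mixing followed by a threshold set so a fraction f of the mixed-layer units fire; sparse codes separate distinct inputs well but are more disrupted by noise, while denser codes overlap more but generalize better.

```python
import numpy as np

rng = np.random.default_rng(7)

n_in, n_mix = 50, 1000
W = rng.standard_normal((n_mix, n_in))   # random mixing weights

def code(x, f):
    h = W @ x
    return (h > np.quantile(h, 1 - f)).astype(float)   # coding level f

def overlap(u, v):
    return (u * v).sum() / np.sqrt(u.sum() * v.sum())

x = rng.standard_normal(n_in)
noisy = x + 0.3 * rng.standard_normal(n_in)   # corrupted copy of x
other = rng.standard_normal(n_in)             # unrelated input

gen, dis = {}, {}
for f in (0.01, 0.1, 0.5):
    gen[f] = overlap(code(x, f), code(noisy, f))   # generalization: high is good
    dis[f] = overlap(code(x, f), code(other, f))   # discrimination: low is good
    print(f, round(gen[f], 2), round(dis[f], 2))
```

Both overlaps grow with f, which is the trade-off: the task determines how much noise tolerance is worth how much confusability, and the paper's analysis places the optimum near f = 0.1.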


Subject(s)
Discrimination, Psychological/physiology , Generalization, Psychological/physiology , Models, Neurological , Neurons/physiology , Action Potentials/physiology , Animals , Color Perception , Humans , Nerve Net/physiology , Neurons/classification , Photic Stimulation , Size Perception
13.
Article in English | MEDLINE | ID: mdl-21048899

ABSTRACT

Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
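The core claim, that context-dependent transitions require selectivity for combinations of mental state and stimulus, can be caricatured as a tiny state machine (the states, stimuli, and rule table below are made up, and the one-hot "conjunction units" stand in for the paper's randomly connected neurons):

```python
import numpy as np

n_states, n_stim = 3, 2
# Arbitrary example rule table: next_state[state, stimulus].
next_state = np.array([[1, 2],
                       [2, 0],
                       [0, 1]])

def step(state, stim):
    hidden = np.zeros((n_states, n_stim))
    hidden[state, stim] = 1.0                 # the conjunction unit fires
    out = np.zeros(n_states)
    for s in range(n_states):
        for q in range(n_stim):
            out[next_state[s, q]] += hidden[s, q]   # readout routes to successor
    return int(np.argmax(out))

state = 0
for stim in [0, 1, 1, 0]:
    state = step(state, stim)
print(state)   # → 0
```

A unit selective only to the stimulus, or only to the state, could not drive these transitions, because the same stimulus must send the network to different attractors in different contexts.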

14.
Neuroimage ; 52(3): 833-47, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20100580

ABSTRACT

Complex tasks often require the memory of recent events, the knowledge about the context in which they occur, and the goals we intend to reach. All this information is stored in our mental states. Given a set of mental states, reinforcement learning (RL) algorithms predict the optimal policy that maximizes future reward. RL algorithms assign a value to each already-known state so that discovering the optimal policy reduces to selecting the action leading to the state with the highest value. But how does the brain create representations of these mental states in the first place? We propose a mechanism for the creation of mental states that contain information about the temporal statistics of the events in a particular context. We suggest that the mental states are represented by stable patterns of reverberating activity, which are attractors of the neural dynamics. These representations are built from neurons that are selective to specific combinations of external events (e.g. sensory stimuli) and pre-existing mental states. Consistent with this notion, we find that neurons in the amygdala and in orbitofrontal cortex (OFC) often exhibit this form of mixed selectivity. We propose that activating different mixed selectivity neurons in a fixed temporal order modifies synaptic connections so that conjunctions of events and mental states merge into a single pattern of reverberating activity. This process corresponds to the birth of a new, different mental state that encodes a different temporal context. The concretion process depends on temporal contiguity, i.e. on the probability that a combination of an event and mental states follows or precedes the events and states that define a certain context. The information contained in the context thereby allows an animal to unambiguously assign a value to the events that initially appeared in different situations with different meanings.
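The value-assignment step the abstract refers to can be sketched with value iteration on a toy problem (a 5-state chain with reward at one end; this is a generic RL illustration, not the paper's model): each state receives a value, and the optimal policy reduces to moving toward the highest-valued reachable state.

```python
import numpy as np

n, gamma = 5, 0.9
reward = np.zeros(n)
reward[-1] = 1.0                  # reward only at the right end of the chain

V = np.zeros(n)
for _ in range(100):              # value iteration sweeps
    for s in range(n):
        moves = [max(s - 1, 0), min(s + 1, n - 1)]     # step left or right
        V[s] = reward[s] + gamma * max(V[m] for m in moves)

# Greedy policy: from each state, move to the highest-valued neighbor.
policy = [max((max(s - 1, 0), min(s + 1, n - 1)), key=lambda m: V[m])
          for s in range(n)]
print(np.round(V, 2), policy)   # values increase toward the rewarded end
```

The paper's question is upstream of this computation: how the discrete states that value iteration takes for granted could themselves be built from conjunctions of events and prior mental states.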


Subject(s)
Brain/physiology , Cognition/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Animals , Humans , Learning/physiology , Reinforcement, Psychology