Results 1 - 12 of 12
1.
Curr Opin Neurobiol; 84: 102816, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38052111

ABSTRACT

Connecting neural activity to function is a common aim in neuroscience. How to define and conceptualize function, however, can vary. Here I focus on grounding this goal in the specific question of how a given change in behavior is produced by a change in neural circuits or activity. Artificial neural network models offer a particularly fruitful format for tackling such questions because they use neural mechanisms to perform complex transformations and produce appropriate behavior. Therefore, they can be a means of causally testing the extent to which a neural change can be responsible for an experimentally observed behavioral change. Furthermore, because the field of interpretability in artificial intelligence has similar aims, neuroscientists can look to interpretability methods for new ways of identifying neural features that drive performance and behaviors.
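The causal logic described here is easy to make concrete: perturb a candidate neural feature in a network model and ask whether behavior changes in the way observed experimentally. A minimal PyTorch sketch; the network, task, and lesioned units are hypothetical stand-ins, not anything from the article.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in network and task: a randomly initialized classifier on synthetic
# stimuli. In a real analysis this would be a trained model of the circuit.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(200, 10)         # synthetic stimuli
y = (x[:, 0] > 0).long()         # synthetic behavioral labels

def behavior(net):
    """Task accuracy as a simple stand-in for 'behavior'."""
    with torch.no_grad():
        return (net(x).argmax(dim=1) == y).float().mean().item()

before = behavior(model)

# Causal test: silence a hypothetical set of hidden units by zeroing their
# outgoing weights, then measure the resulting behavioral change.
units = [3, 7, 19]
with torch.no_grad():
    model[2].weight[:, units] = 0.0

print(f"behavior before: {before:.3f}, after lesion: {behavior(model):.3f}")
```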


Subject(s)
Artificial Intelligence; Neurosciences; Neural Networks, Computer
2.
Behav Brain Sci; 46: e392, 2023 Dec 6.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.


Subject(s)
Language; Neural Networks, Computer; Humans
3.
Nat Hum Behav; 7(11): 1814-1815, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37985909
4.
Nat Rev Neurosci; 24(7): 431-450, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37253949

ABSTRACT

Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.


Subject(s)
Brain; Neural Networks, Computer; Humans; Brain/physiology
5.
J Neurosci; 42(45): 8514-8523, 2022 Nov 9.
Article in English | MEDLINE | ID: mdl-36351830

ABSTRACT

Biological neural networks adapt and learn in diverse behavioral contexts. Artificial neural networks (ANNs) have exploited biological properties to solve complex problems. However, despite their effectiveness for specific tasks, ANNs have yet to realize the flexibility and adaptability of biological cognition. This review highlights recent advances in computational and experimental research to advance our understanding of biological and artificial intelligence. In particular, we discuss critical mechanisms from the cellular, systems, and cognitive neuroscience fields that have contributed to refining the architecture and training algorithms of ANNs. Additionally, we discuss how recent work has used ANNs to understand complex neuronal correlates of cognition and to process high-throughput behavioral data.


Subject(s)
Artificial Intelligence; Neurosciences; Neural Networks, Computer; Algorithms; Cognition
6.
Front Comput Neurosci; 15: 698574, 2021.
Article in English | MEDLINE | ID: mdl-34122030

ABSTRACT

[This corrects the article DOI: 10.3389/fncom.2020.00029.].

7.
J Cogn Neurosci; 33(10): 2017-2031, 2021 Sep 1.
Article in English | MEDLINE | ID: mdl-32027584

ABSTRACT

Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight. Specifically, it covers the origins of CNNs and the methods by which we validate them as models of biological vision. It then goes on to elaborate on what we can learn about biological vision by understanding and experimenting on CNNs and discusses emerging opportunities for the use of CNNs in vision research beyond basic object recognition.
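The abstract does not spell out the validation methods it covers, but one widely used approach is representational similarity analysis: compare the stimulus-by-stimulus geometry of a CNN layer to that of neural recordings. A minimal numpy sketch with random stand-in data; the array shapes and metric choices are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
model_acts = rng.standard_normal((n_stimuli, 512))   # CNN layer activations
neural_resp = rng.standard_normal((n_stimuli, 100))  # recorded responses

# Representational dissimilarity matrices, as condensed upper triangles:
# one dissimilarity per pair of stimuli.
rdm_model = pdist(model_acts, metric="correlation")
rdm_neural = pdist(neural_resp, metric="correlation")

# A higher rank correlation between RDMs indicates the model and the brain
# carve up the stimulus set in a similar way.
rho, _ = spearmanr(rdm_model, rdm_neural)
print(f"model-brain RDM correlation: {rho:.3f}")
```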


Subject(s)
Neural Networks, Computer; Visual Perception; Humans
8.
Front Comput Neurosci; 14: 29, 2020.
Article in English | MEDLINE | ID: mdl-32372937

ABSTRACT

Attention is the important ability to flexibly control limited computational resources. It has been studied in conjunction with many other topics in neuroscience and psychology including awareness, vigilance, saliency, executive control, and learning. It has also recently been applied in several domains in machine learning. The relationship between the study of biological attention and its use as a tool to enhance artificial neural networks is not always clear. This review starts by providing an overview of how attention is conceptualized in the neuroscience and psychology literature. It then covers several use cases of attention in machine learning, indicating their biological counterparts where they exist. Finally, the ways in which artificial attention can be further inspired by biology for the production of complex and integrative systems are explored.
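As a reference point for the machine-learning use cases mentioned, the common "soft" attention mechanism reduces to a few lines: a learned, input-dependent weighting over values. A scaled dot-product sketch in numpy; the dimensions are arbitrary and this is a generic formulation, not code from the review.

```python
import numpy as np

def soft_attention(Q, K, V):
    # Scores: how relevant each key is to each query, scaled by key dimension.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over keys turns scores into a weighting that sums to one.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: for each query, a weighted sum of the values.
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(soft_attention(Q, K, V).shape)   # (4, 8)
```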

9.
Nat Neurosci; 22(11): 1761-1770, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
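The three designed components named here map one-to-one onto how a deep learning system is specified in code: an architecture, an objective function, and a learning rule. A minimal PyTorch illustration; the task and sizes are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # architecture
objective = nn.CrossEntropyLoss()                                      # objective function
learning_rule = torch.optim.SGD(model.parameters(), lr=0.1)            # learning rule

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
for step in range(10):
    learning_rule.zero_grad()
    loss = objective(model(x), y)
    loss.backward()
    learning_rule.step()
print(f"final loss: {loss.item():.3f}")
```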


Subject(s)
Artificial Intelligence; Deep Learning; Neural Networks, Computer; Animals; Brain/physiology; Humans
10.
Elife; 7, 2018 Oct 1.
Article in English | MEDLINE | ID: mdl-30272560

ABSTRACT

How does attentional modulation of neural activity enhance performance? Here we use a deep convolutional neural network as a large-scale model of the visual system to address this question. We model the feature similarity gain model of attention, in which attentional modulation is applied according to neural stimulus tuning. Using a variety of visual tasks, we show that neural modulations of the kind and magnitude observed experimentally lead to performance changes of the kind and magnitude observed experimentally. We find that, at earlier layers, attention applied according to tuning does not successfully propagate through the network, and has a weaker impact on performance than attention applied according to values computed for optimally modulating higher areas. This raises the question of whether biological attention might be applied at least in part to optimize function rather than strictly according to tuning. We suggest a simple experiment to distinguish these alternatives.
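A sketch of the feature similarity gain idea as stated: modulation is multiplicative, boosting units tuned toward the attended feature and suppressing units tuned away from it. The tuning values and gain strength below are illustrative, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 64
# Signed similarity between each unit's preferred feature and the attended one.
tuning = rng.uniform(-1, 1, n_units)
activity = rng.random(n_units)   # stimulus-driven responses
beta = 0.3                       # attention strength

# Feature similarity gain: multiplicative modulation scaled by tuning similarity.
attended = activity * (1.0 + beta * tuning)

print(f"mean activity, tuned toward: {attended[tuning > 0].mean():.3f}")
print(f"mean activity, tuned away:   {attended[tuning < 0].mean():.3f}")
```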


Subject(s)
Models, Biological; Task Performance and Analysis; Visual Pathways/physiology; Attention; Orientation
11.
J Neurosci; 37(45): 11021-11036, 2017 Nov 8.
Article in English | MEDLINE | ID: mdl-28986463

ABSTRACT

Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training.

SIGNIFICANCE STATEMENT

The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
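The modeling step described, a Hebbian rule applied on top of random feedforward connectivity, can be sketched in a few lines. The rule below (weight change proportional to pre- and postsynaptic activity, with row normalization to keep weights bounded) is a generic Hebbian variant; the paper's exact rule and parameters are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 100, 50
W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # random connectivity
eta = 0.01                                              # learning rate

for _ in range(200):
    pre = rng.random(n_in)              # presynaptic activity pattern
    post = np.maximum(W @ pre, 0.0)     # rectified postsynaptic response
    W += eta * np.outer(post, pre)      # Hebbian update: post times pre
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

# In an analysis like the paper's, one would now measure mixed selectivity
# of the post responses and compare against the purely random-W baseline.
```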


Subject(s)
Machine Learning; Neural Networks, Computer; Prefrontal Cortex/physiology; Algorithms; Animals; Cluster Analysis; Cognition/physiology; Computer Simulation; Female; Learning/physiology; Macaca mulatta; Male; Models, Neurological; Neurons; Prefrontal Cortex/cytology
12.
Nat Neurosci; 20(1): 62-71, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27798631

ABSTRACT

Physical features of sensory stimuli are fixed, but sensory perception is context dependent. The precise mechanisms that govern contextual modulation remain unknown. Here, we trained mice to switch between two contexts: passively listening to pure tones and performing a recognition task for the same stimuli. Two-photon imaging showed that many excitatory neurons in auditory cortex were suppressed during behavior, while some cells became more active. Whole-cell recordings showed that excitatory inputs were affected only modestly by context, but inhibition was more sensitive, with PV+, SOM+, and VIP+ interneurons balancing inhibition and disinhibition within the network. Cholinergic modulation was involved in context switching, with cholinergic axons increasing activity during behavior and directly depolarizing inhibitory cells. Network modeling captured these findings, but only when modulation simultaneously drove all three interneuron subtypes, ruling out inhibition or disinhibition alone as the sole mechanism of active engagement. Parallel processing of cholinergic modulation by cortical interneurons therefore enables context-dependent behavior.
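The model's key claim, that the data are matched only when modulation drives PV+, SOM+, and VIP+ cells together, can be explored with a small rate model. The connectivity and parameters below are illustrative guesses, not the paper's fitted values.

```python
import numpy as np

# Population order: E, PV, SOM, VIP; W[i, j] is the weight from j onto i.
W = np.array([
    [ 0.0, -1.0, -0.8,  0.0],   # E inhibited by PV and SOM
    [ 1.0, -0.5, -0.3,  0.0],   # PV driven by E, mutual inhibition
    [ 1.0,  0.0,  0.0, -0.6],   # SOM disinhibited via VIP
    [ 1.0,  0.0, -0.4,  0.0],   # VIP inhibited by SOM
])
inp = np.array([1.0, 0.5, 0.5, 0.5])   # stimulus drive
mod = np.array([0.0, 0.3, 0.3, 0.3])   # modulation hits all three interneuron types

r = np.zeros(4)
dt, tau = 0.1, 1.0
for _ in range(500):                   # integrate to steady state
    r += dt / tau * (-r + np.maximum(W @ r + inp + mod, 0.0))

print(dict(zip(["E", "PV", "SOM", "VIP"], r.round(3))))
```

Rerunning with mod applied to only one interneuron type at a time is the kind of comparison the abstract's "inhibition or disinhibition alone" argument rests on.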


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Behavior, Animal/physiology; Neural Inhibition/physiology; Neurons/physiology; Visual Cortex/physiology; Animals; Mice, Transgenic; Somatostatin/metabolism; Vasoactive Intestinal Peptide/metabolism