Results 1 - 20 of 55
1.
Proc Natl Acad Sci U S A ; 121(27): e2311893121, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38913890

ABSTRACT

In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity, along with its asymmetry; the duration and adaptive nature of feedforward and feedback neuronal filters; the imprecision in spike generation under constant stimulation; and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
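The direct data-driven control (DD-DC) idea invoked in this abstract, controlling an unknown loop straight from recorded input/output trajectories with no explicit system-identification step, can be illustrated with a toy linear plant. This is a minimal sketch of the general DD-DC recipe (stacked data matrices plus least squares), not the paper's neuron model; all names and parameters are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown plant: x_{t+1} = a*x_t + b*u_t.  (a, b) are never used by the controller.
a, b = 0.9, 0.5

# Collect a short input/state trajectory: the "data" in data-driven control.
T = 20
u_d = rng.standard_normal(T)
x_d = np.zeros(T + 1)
for t in range(T):
    x_d[t + 1] = a * x_d[t] + b * u_d[t]

X  = x_d[:-1][None, :]   # states     x_0 .. x_{T-1}
U  = u_d[None, :]        # inputs     u_0 .. u_{T-1}
Xp = x_d[1:][None, :]    # successors x_1 .. x_T

def dd_control(x_now, x_target=0.0):
    """One-step control computed directly from data: find a combination g of
    recorded transitions with X g = x_now and Xp g = x_target, then u = U g."""
    A = np.vstack([X, Xp])                       # 2 x T, generically full row rank
    g, *_ = np.linalg.lstsq(A, np.array([x_now, x_target]), rcond=None)
    return float(U @ g)

x = 3.0
u = dd_control(x)
x_next = a * x + b * u        # apply the computed input to the true plant
print(x_next)                 # ~0: target reached without ever estimating (a, b)
```

The trick is that the successor row obeys Xp = a*X + b*U for the true plant, so any g satisfying both constraints yields a*x_now + b*u = x_target exactly.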


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Humans , Neuronal Plasticity/physiology , Action Potentials/physiology , Animals
3.
Curr Biol ; 33(21): 4611-4623.e4, 2023 11 06.
Article in English | MEDLINE | ID: mdl-37774707

ABSTRACT

For most model organisms in neuroscience, research into visual processing in the brain is difficult because of a lack of high-resolution maps that capture complex neuronal circuitry. The microinsect Megaphragma viggianii, because of its small size and non-trivial behavior, provides a unique opportunity for tractable whole-organism connectomics. We image its whole head using serial electron microscopy. We reconstruct its compound eye and analyze the optical properties of the ommatidia as well as the connectome of the first visual neuropil, the lamina. Compared with that of the fruit fly and the honeybee, the Megaphragma visual system is highly simplified: it has 29 ommatidia per eye and 6 lamina neuron types. We report features that are stereotypical across most ommatidia as well as features specialized to some. By identifying the "barebones" circuits critical for flying insects, our results will facilitate constructing computational models of visual processing in insects.


Subject(s)
Hymenoptera , Vision, Ocular , Animals , Neurons/physiology , Visual Perception , Neuropil , Drosophila
4.
Proc Natl Acad Sci U S A ; 120(29): e2117484120, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37428907

ABSTRACT

One major question in neuroscience is how to relate connectomes to neural activity, circuit function, and learning. We offer an answer in the peripheral olfactory circuit of the Drosophila larva, composed of olfactory receptor neurons (ORNs) connected through feedback loops with interconnected inhibitory local neurons (LNs). We combine structural and activity data and, using a holistic normative framework based on similarity-matching, we formulate biologically plausible mechanistic models of the circuit. In particular, we consider a linear circuit model, for which we derive an exact theoretical solution, and a nonnegative circuit model, which we examine through simulations. The latter largely predicts the ORN→LN synaptic weights found in the connectome and demonstrates that they reflect correlations in ORN activity patterns. Furthermore, this model accounts for the relationship between ORN→LN and LN-LN synaptic counts and the emergence of different LN types. Functionally, we propose that LNs encode soft cluster memberships of ORN activity, and partially whiten and normalize the stimulus representations in ORNs through inhibitory feedback. Such a synaptic organization could, in principle, autonomously arise through Hebbian plasticity and would allow the circuit to adapt to different environments in an unsupervised manner. We thus uncover a general and potent circuit motif that can learn and extract significant input features and render stimulus representations more efficient. Finally, our study provides a unified framework for relating structure, activity, function, and learning in neural circuits and supports the conjecture that similarity-matching shapes the transformation of neural representations.
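The whitening-by-inhibitory-feedback motif can be sketched numerically. In this toy version the lateral weights are set directly from the input covariance rather than learned by Hebbian plasticity as in the paper; it illustrates the circuit motif, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated inputs standing in for ORN activity patterns
n, T = 4, 10000
A = rng.standard_normal((n, n))
X = A @ rng.standard_normal((n, T))
C = X @ X.T / T                                 # empirical input covariance

# Lateral inhibition at equilibrium: y = x - M y  =>  y = (I + M)^{-1} x.
# Choosing I + M = C^{1/2} makes the equilibrium outputs exactly white.
w, V = np.linalg.eigh(C)
M = V @ np.diag(np.sqrt(w)) @ V.T - np.eye(n)   # inhibitory weight matrix

Y = np.linalg.solve(np.eye(n) + M, X)           # equilibrium of dy/dt = x - (I+M) y
print(np.round(Y @ Y.T / T, 3))                 # identity matrix: outputs whitened
```

The same fixed point can be reached by running the recurrent dynamics forward; solving the linear system just skips the transient.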


Subject(s)
Connectome , Olfactory Receptor Neurons , Animals , Drosophila , Olfactory Receptor Neurons/physiology , Smell/physiology , Larva
5.
Res Sq ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131789

ABSTRACT

Anatomically segregated apical and basal dendrites of pyramidal neurons receive functionally distinct inputs, but it is unknown if this results in compartment-level functional diversity during behavior. Here we imaged calcium signals from apical dendrites, soma, and basal dendrites of pyramidal neurons in area CA3 of mouse hippocampus during head-fixed navigation. To examine dendritic population activity, we developed computational tools to identify dendritic regions of interest and extract accurate fluorescence traces. We identified robust spatial tuning in apical and basal dendrites, similar to soma, though basal dendrites had reduced activity rates and place field widths. Across days, apical dendrites were more stable than soma or basal dendrites, resulting in better decoding of the animal's position. These population-level dendritic differences may reflect functionally distinct input streams leading to different dendritic computations in CA3. These tools will facilitate future studies of signal transformations between cellular compartments and their relation to behavior.
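The position decoding referenced above can be mimicked with synthetic place cells: Gaussian tuning curves tiling a linear track, plus a simple template-matching decoder. This is a hedged sketch with invented parameters (30 cells, 100 track bins), not the study's analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic place cells: Gaussian tuning curves tiling a 1-D track
n_cells, n_bins = 30, 100
track = np.linspace(0, 1, n_bins)
centers = np.linspace(0, 1, n_cells)
width = 0.08
tuning = np.exp(-(track[None, :] - centers[:, None]) ** 2 / (2 * width ** 2))

def decode(rates):
    """Template matching: return the track bin whose expected
    population activity best matches the observed rates."""
    return int(np.argmax(tuning.T @ rates))

true_pos = 50                                            # bin the animal occupies
rates = tuning[:, true_pos] + 0.05 * rng.standard_normal(n_cells)
est = decode(rates)
print(est)                                               # decoded bin, near 50
```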

6.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence , Neurosciences , Animals , Humans
7.
PLoS Comput Biol ; 19(2): e1010864, 2023 02.
Article in English | MEDLINE | ID: mdl-36745688

ABSTRACT

To adapt to their environments, animals learn associations between sensory stimuli and unconditioned stimuli. In invertebrates, olfactory associative learning primarily occurs in the mushroom body, which is segregated into separate compartments. Within each compartment, Kenyon cells (KCs) encoding sparse odor representations project onto mushroom body output neurons (MBONs) whose outputs guide behavior. Associated with each compartment is a dopamine neuron (DAN) that modulates plasticity of the KC-MBON synapses within the compartment. Interestingly, DAN-induced plasticity of the KC-MBON synapse is imbalanced in the sense that it only weakens the synapse and is temporally sparse. We propose a normative mechanistic model of the MBON as a linear discriminant analysis (LDA) classifier that predicts the presence of an unconditioned stimulus (class identity) given a KC odor representation (feature vector). Starting from a principled LDA objective function and under the assumption of temporally sparse DAN activity, we derive an online algorithm which maps onto the mushroom body compartment. Our model accounts for the imbalanced learning at the KC-MBON synapse and makes testable predictions that provide clear contrasts with existing models.


Subject(s)
Learning , Mushroom Bodies , Animals , Mushroom Bodies/physiology , Discriminant Analysis , Learning/physiology , Smell/physiology , Drosophila melanogaster/physiology , Odorants , Dopaminergic Neurons/physiology
8.
Nat Neurosci ; 26(2): 339-349, 2023 02.
Article in English | MEDLINE | ID: mdl-36635497

ABSTRACT

Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
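The central claim, that individual codes drift through a degenerate solution space while representational similarity stays fixed, can be demonstrated in a few lines: apply small random rotations to a readout (one simple choice of "noisy update" that stays inside the optimal space; the paper's network models are richer) and watch the Gram matrix remain invariant:

```python
import numpy as np

rng = np.random.default_rng(4)

d, T = 5, 200
X = rng.standard_normal((d, T))          # fixed stimulus ensemble
Q = np.eye(d)                            # current readout (one point in the optimal space)

def drift_step(Q, eps=0.05):
    """Noisy update confined to the degenerate solution space:
    compose with a small random rotation (Cayley transform of a skew matrix)."""
    A = rng.standard_normal((d, d))
    S = eps * (A - A.T) / 2                              # small skew-symmetric matrix
    R = np.linalg.solve(np.eye(d) + S, np.eye(d) - S)    # exactly orthogonal
    return R @ Q

Y0 = Q @ X
for _ in range(100):
    Q = drift_step(Q)
Y1 = Q @ X

# Single-neuron codes drift substantially...
print(np.linalg.norm(Y1 - Y0))
# ...but representational similarity (the Gram matrix) is preserved.
print(np.linalg.norm(Y1.T @ Y1 - Y0.T @ Y0))
```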


Subject(s)
Brain , Learning , Animals , Learning/physiology , Neurons/physiology , Hippocampus , Head , Models, Neurological
9.
Biol Cybern ; 116(5-6): 557-568, 2022 12.
Article in English | MEDLINE | ID: mdl-36070103

ABSTRACT

An important problem in neuroscience is to understand how brains extract relevant signals from mixtures of unknown sources, i.e., perform blind source separation. To model how the brain performs this task, we seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm. For biological plausibility, we require the network to satisfy the following three basic properties of neuronal circuits: (i) the network operates in the online setting; (ii) synaptic learning rules are local; and (iii) neuronal outputs are nonnegative. The closest prior work is that of Pehlevan et al. (Neural Comput 29:2925-2954, 2017), which considers nonnegative independent component analysis (NICA), a special case of blind source separation that assumes the mixture is a linear combination of uncorrelated, nonnegative sources. They derive an algorithm with a biologically plausible two-layer network implementation. In this work, we improve upon their result by deriving two algorithms for NICA, each with a biologically plausible single-layer network implementation. The first algorithm maps onto a network with indirect lateral connections mediated by interneurons. The second algorithm maps onto a network with direct lateral connections and multi-compartmental output neurons.


Subject(s)
Algorithms , Neural Networks, Computer , Neurons/physiology , Learning/physiology , Brain
10.
Neural Comput ; 34(4): 891-938, 2022 03 23.
Article in English | MEDLINE | ID: mdl-35026035

ABSTRACT

The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on data sets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
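The model-free variant can be caricatured with a running lag-1 autocorrelation estimate: for a piecewise AR(1) signal whose coefficient flips sign, the sign of the local autocorrelation labels the regime. A sketch with invented window and segment lengths, far simpler than the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(5)

# Piecewise AR(1): the coefficient alternates between +0.9 and -0.9
coefs = [0.9, -0.9, 0.9, -0.9]
seg_len = 500
x, labels = [0.0], []
for k, a in enumerate(coefs):
    for _ in range(seg_len):
        x.append(a * x[-1] + rng.standard_normal())
        labels.append(k % 2)
x = np.array(x[1:]); labels = np.array(labels)

# Model-free segmentation: running estimate of the lag-1 autocorrelation
win = 50
acf1 = np.array([x[t - win:t - 1] @ x[t - win + 1:t] / (x[t - win:t] @ x[t - win:t])
                 for t in range(win, len(x))])
pred = (acf1 < 0).astype(int)            # negative lag-1 ACF -> second regime
acc = (pred == labels[win:]).mean()
print(round(acc, 3))                     # high; errors cluster near the switch points
```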


Subject(s)
Algorithms , Brain , Image Processing, Computer-Assisted/methods , Learning , Time Factors
11.
Curr Opin Neurobiol ; 71: 77-83, 2021 12.
Article in English | MEDLINE | ID: mdl-34656052

ABSTRACT

As the study of the human brain is complicated by its sheer scale, complexity, and the impracticality of invasive experiments, neuroscience research has long relied on model organisms. The brains of macaque, mouse, zebrafish, fruit fly, nematode, and others have yielded many secrets that advanced our understanding of the human brain. Here, we propose that adding miniature insects to this collection would reduce costs and accelerate brain research. The smallest insects occupy a special place among miniature animals: despite body sizes comparable to those of unicellular organisms, they retain complex brains that include thousands of neurons. Their brains possess the advantages of those in insects, such as neuronal identifiability and connectome stereotypy, yet are smaller and hence easier to map and understand. Finally, the brains of miniature insects offer insights into the evolution of brain design.


Subject(s)
Brain , Connectome , Animals , Brain/physiology , Humans , Insecta , Mice , Neurons/physiology , Zebrafish
12.
Neural Comput ; 33(9): 2309-2352, 2021 08 19.
Article in English | MEDLINE | ID: mdl-34412114

ABSTRACT

Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
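For reference, the batch version of the computation these networks perform online is classical CCA: whiten each view, then take the SVD of the cross-covariance. Below is a sketch on two synthetic views sharing one latent source; the dimensions and noise level are ours, and this is textbook CCA, not the paper's online network:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two views driven by a shared 1-D latent source plus independent noise
T = 5000
s = rng.standard_normal(T)
X = np.outer(rng.standard_normal(4), s) + 0.5 * rng.standard_normal((4, T))
Y = np.outer(rng.standard_normal(3), s) + 0.5 * rng.standard_normal((3, T))

def cca_top(X, Y):
    """Classical CCA: whiten each view, then SVD the whitened cross-covariance."""
    X = X - X.mean(1, keepdims=True)
    Y = Y - Y.mean(1, keepdims=True)
    n = X.shape[1]
    def whiten(Z):
        w, V = np.linalg.eigh(Z @ Z.T / n)
        return V @ np.diag(w ** -0.5) @ V.T
    Wx, Wy = whiten(X), whiten(Y)
    U, sv, Vt = np.linalg.svd(Wx @ (X @ Y.T / n) @ Wy)
    return Wx.T @ U[:, 0], Wy.T @ Vt[0], sv[0]

a_vec, b_vec, rho = cca_top(X, Y)
print(round(rho, 2))      # top canonical correlation, close to 1
```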


Subject(s)
Canonical Correlation Analysis , Neural Networks, Computer , Algorithms , Neurons
13.
Front Comput Neurosci ; 14: 55, 2020.
Article in English | MEDLINE | ID: mdl-32694989

ABSTRACT

Normative models of neural computation offer simplified yet lucid mathematical descriptions of murky biological phenomena. Previously, online Principal Component Analysis (PCA) was used to model a network of single-compartment neurons accounting for weighted summation of upstream neural activity in the soma and Hebbian/anti-Hebbian synaptic learning rules. However, synaptic plasticity in biological neurons often depends on the integration of synaptic currents over a dendritic compartment rather than total current in the soma. Motivated by this observation, we model a pyramidal neuronal network using online Canonical Correlation Analysis (CCA). Given two related datasets represented by distal and proximal dendritic inputs, CCA projects them onto the subspace which maximizes the correlation between their projections. First, adopting a normative approach and starting from a single-channel CCA objective function, we derive an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron. To model networks of pyramidal neurons, we introduce a novel multi-channel CCA objective function, and derive from it an online gradient-based optimization algorithm whose steps can be interpreted as the operation of a pyramidal neuron network including its architecture, dynamics, and synaptic learning rules. Next, we model a neuron with more than two dendritic compartments by deriving its operation from a known objective function for multi-view CCA. Finally, we confirm the functionality of our networks via numerical simulations. Overall, our work presents a simplified but informative abstraction of learning in a pyramidal neuron network, and demonstrates how such networks can integrate multiple sources of inputs.

14.
Elife ; 8, 2019 01 17.
Article in English | MEDLINE | ID: mdl-30652683

ABSTRACT

Advances in fluorescence microscopy enable monitoring of larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
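Of the preprocessing steps listed here, rigid motion correction is the easiest to illustrate from scratch. The sketch below estimates a shift by phase correlation on a synthetic, circularly shifted frame; it is a generic technique for rigid shift estimation, not CaImAn's actual implementation or API:

```python
import numpy as np

rng = np.random.default_rng(7)

frame = rng.random((64, 64))                            # synthetic imaging frame
shifted = np.roll(frame, shift=(5, -3), axis=(0, 1))    # simulated rigid motion

# Phase correlation: the peak of the normalized cross-power spectrum gives the shift
F1, F2 = np.fft.fft2(frame), np.fft.fft2(shifted)
cross = F2 * np.conj(F1)
corr = np.fft.ifft2(cross / np.abs(cross)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = [int(d) - 64 if d > 32 else int(d) for d in (dy, dx)]   # unwrap
print(dy, dx)   # 5 -3: the applied shift, recovered
```

Real pipelines add subpixel refinement and a running template, but the core idea is this single FFT-domain correlation.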


Subject(s)
Brain/diagnostic imaging , Calcium/metabolism , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence , Pattern Recognition, Automated , Algorithms , Animals , Artifacts , Computational Biology , Data Analysis , Humans , Mice , Motion , Neurons/metabolism , Observer Variation , Photons , Reproducibility of Results , Software , Zebrafish
15.
Neural Comput ; 30(1): 84-124, 2018 01.
Article in English | MEDLINE | ID: mdl-28957017

ABSTRACT

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
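The offline similarity matching objective discussed here, minimizing ||X^T X - Y^T Y||_F^2 over k x T outputs Y, has a closed-form optimum that coincides with top-k PCA. A small numerical check of that equivalence (notation ours):

```python
import numpy as np

rng = np.random.default_rng(8)

n, k, T = 6, 2, 50
X = rng.standard_normal((n, T))

# Optimum of the similarity matching objective: truncate the eigendecomposition
# of the input Gram matrix at rank k.
G = X.T @ X
w, V = np.linalg.eigh(G)                        # ascending eigenvalues
Y = np.sqrt(w[-k:])[:, None] * V[:, -k:].T      # optimal k x T output

# Classical top-k PCA projection of X, via the SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pca_scores = s[:k, None] * Vt[:k]

# The two outputs have identical similarity structure (equal up to a rotation)
print(np.allclose(Y.T @ Y, pca_scores.T @ pca_scores))   # True
```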


Subject(s)
Learning/physiology , Models, Neurological , Neural Pathways/physiology , Neurons/physiology , Synapses/physiology , Algorithms , Game Theory , Humans
16.
Neural Comput ; 29(11): 2925-2954, 2017 11.
Article in English | MEDLINE | ID: mdl-28777718

ABSTRACT

Blind source separation-the extraction of independent sources from a mixture-is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative-for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the data set is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.


Subject(s)
Models, Biological , Neural Networks, Computer , Signal Processing, Computer-Assisted , Animals , Computer Simulation
17.
Elife ; 6, 2017 04 22.
Article in English | MEDLINE | ID: mdl-28432786

ABSTRACT

Analyses of computations in neural circuits often use simplified models because the actual neuronal implementation is not known. For example, a problem in vision, how the eye detects image motion, has long been analysed using the Hassenstein-Reichardt (HR) detector or Barlow-Levick (BL) models. These both simulate motion detection well, but the exact neuronal circuits undertaking these tasks remain elusive. We reconstructed a comprehensive connectome of the circuits of Drosophila's motion-sensing T4 cells using a novel EM technique. We uncover complex T4 inputs and reveal that putative excitatory inputs cluster at T4's dendrite shafts, while inhibitory inputs localize to the bases. Consistent with our previous study, we reveal that Mi1 and Tm3 cells provide most synaptic contacts onto T4. We are, however, unable to reproduce the spatial offset between these cells reported previously. Our comprehensive connectome reveals complex circuits that include candidate anatomical substrates for both HR and BL types of motion detectors.


Subject(s)
Connectome , Drosophila melanogaster/anatomy & histology , Drosophila melanogaster/physiology , Motion Perception , Visual Pathways/anatomy & histology , Visual Pathways/physiology , Animals , Models, Neurological
19.
Proc Natl Acad Sci U S A ; 112(44): 13711-6, 2015 Nov 03.
Article in English | MEDLINE | ID: mdl-26483464

ABSTRACT

We reconstructed the synaptic circuits of seven columns in the second neuropil, or medulla, behind the fly's compound eye. These neurons embody some of the most stereotyped circuits in one of the most miniaturized of animal brains. The reconstructions allow us, for the first time to our knowledge, to study variations between circuits in the medulla's neighboring columns. This variation in the number of synapses and the types of their synaptic partners has previously been little addressed because methods that visualize multiple circuits have not resolved detailed connections, and existing connectomic studies, which can see such connections, have not so far examined multiple reconstructions of the same circuit. Here, we address this omission by comparing the circuits common to all seven columns to assess variation in their connection strengths and the resultant rates of several different and distinct types of connection error. Error rates reveal that, overall, <1% of contacts are not part of a consensus circuit, and we classify those contacts that supplement (E+) or are missing from it (E-). Autapses, in which the same cell is both presynaptic and postsynaptic at the same synapse, are occasionally seen; two cells in particular, Dm9 and Mi1, form ≥ 20-fold more autapses than do other neurons. These results delimit the accuracy of developmental events that establish and normally maintain synaptic circuits with such precision, and thereby address the operation of such circuits. They also establish a precedent for error rates that will be required in the new science of connectomics.


Subject(s)
Drosophila melanogaster/physiology , Synapses/physiology , Vision, Ocular/physiology , Animals
20.
PLoS Comput Biol ; 11(8): e1004315, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26247884

ABSTRACT

Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy, known as predictive coding, that relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs.
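The dynamic-range argument is easy to verify for the simplest case: a one-tap linear predictor on an AR(1) input. Transmitting the residual instead of the raw signal shrinks the variance roughly tenfold for a coefficient of 0.95. This toy sketch omits the paper's rectification nonlinearity and fast adaptation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Strongly correlated AR(1) input signal
T, a = 20000, 0.95
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.standard_normal()

# Predictive coding with one-tap feedback inhibition: transmit the residual
# r_t = x_t - w * x_{t-1}, with w the least-squares predictor coefficient (~a)
w = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
r = x[1:] - w * x[:-1]

print(round(np.var(r) / np.var(x), 3))   # far below 1: reduced dynamic range
```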


Subject(s)
Feedback, Physiological/physiology , Models, Neurological , Neurons/physiology , Algorithms , Animals , Computational Biology , Finches , Signal Transduction