1.
Cell Rep ; 43(5): 114188, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38713584

ABSTRACT

Detecting novelty is ethologically useful for an organism's survival. Recent experiments characterize how different types of novelty over timescales from seconds to weeks are reflected in the activity of excitatory and inhibitory neuron types. Here, we introduce a learning mechanism, familiarity-modulated synapses (FMSs), consisting of multiplicative modulations dependent on presynaptic or pre/postsynaptic neuron activity. With FMSs, network responses that encode novelty emerge under unsupervised continual learning and minimal connectivity constraints. Implementing FMSs within an experimentally constrained model of a visual cortical circuit, we demonstrate the generalizability of FMSs by simultaneously fitting absolute, contextual, and omission novelty effects. Our model also reproduces functional diversity within cell subpopulations, leading to experimentally testable predictions about connectivity and synaptic dynamics that can produce both population-level novelty responses and heterogeneous individual neuron signals. Altogether, our findings demonstrate how simple plasticity mechanisms within a cortical circuit structure can produce qualitatively distinct and complex novelty responses.


Subject(s)
Models, Neurological , Neurons , Synapses , Synapses/physiology , Synapses/metabolism , Animals , Neurons/physiology , Neurons/metabolism , Neuronal Plasticity/physiology , Visual Cortex/physiology , Learning/physiology
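The FMS mechanism lends itself to a compact illustration: base weights stay fixed while per-synapse multiplicative factors depress with repeated presynaptic activity, so familiar inputs evoke weaker responses than novel ones. A minimal sketch, assuming a purely presynaptic update and an arbitrary modulation rate eta (not the paper's fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 50, 20
W = rng.normal(0, 1 / np.sqrt(n_pre), (n_post, n_pre))  # fixed base weights
m = np.ones(n_pre)   # multiplicative familiarity factors, one per presynaptic cell
eta = 0.2            # modulation rate (assumed value)

def step(x, m):
    """One stimulus presentation: compute response, then depress active synapses."""
    r = (W * m) @ x              # effective weights are W scaled by familiarity
    m = m - eta * m * x          # active inputs depress their factors
    return r, np.clip(m, 0.0, 1.0)

familiar = (rng.random(n_pre) < 0.3).astype(float)
novel = (rng.random(n_pre) < 0.3).astype(float)

for _ in range(20):              # repeated exposure makes the stimulus familiar
    r_fam, m = step(familiar, m)
r_nov, _ = step(novel, m)
print(np.linalg.norm(r_fam), np.linalg.norm(r_nov))  # novel drives larger response
```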
2.
bioRxiv ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38798582

ABSTRACT

Recurrent neural networks exhibit chaotic dynamics when the variance in their connection strengths exceeds a critical value. Recent work indicates connection variance also modulates learning strategies; networks learn "rich" representations when initialized with low coupling and "lazy" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in a measure that clinically discriminates levels of consciousness, the perturbational complexity index (PCIst). Networks with high values of PCIst exhibit stable dynamics and rich learning, suggesting that a consciousness prior may promote rich learning. The results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
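The ordered-chaotic boundary invoked here is conventionally diagnosed by the spectral radius of the connectivity: radii below about 1 give stable dynamics, above 1 chaos. A minimal sketch of how coupling gain moves a Watts-Strogatz network across that boundary (network sizes and gain values are illustrative assumptions, not the study's parameters):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, k, p = 200, 10, 0.1          # Watts-Strogatz: n nodes, k neighbors, rewire prob

def spectral_radius(gain):
    """Largest |eigenvalue| of a WS-masked Gaussian weight matrix."""
    mask = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=2))
    W = gain * mask * rng.normal(0, 1, (n, n)) / np.sqrt(k)  # variance gain^2 / k
    return np.abs(np.linalg.eigvals(W)).max()

for g in (0.5, 1.0, 2.0):
    print(g, spectral_radius(g))  # radius crosses ~1 near the critical gain
```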

3.
ArXiv ; 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-37873007

ABSTRACT

In theoretical neuroscience, recent work leverages deep learning tools to explore how a network's attributes critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity could exhibit a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights -- in particular their effective rank -- influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally-driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards richer learning. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.
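One standard way to quantify the effective rank mentioned here is the participation ratio of the squared singular values; whether the paper uses exactly this measure is an assumption of this sketch. It cleanly separates a generic Gaussian initialization from a low-rank one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512

def effective_rank(W):
    """Participation ratio of squared singular values: (sum s^2)^2 / sum s^4."""
    s = np.linalg.svd(W, compute_uv=False)
    return (s**2).sum()**2 / (s**4).sum()

W_full = rng.normal(0, 1 / np.sqrt(n), (n, n))         # standard random init
U, V = rng.normal(size=(n, 5)), rng.normal(size=(5, n))
W_low = (U @ V) / np.sqrt(5 * n)                       # rank-5 init

print(effective_rank(W_full))  # O(n) for an i.i.d. Gaussian matrix (about n/2)
print(effective_rank(W_low))   # small, near the imposed rank of 5
```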

4.
Nat Neurosci ; 27(1): 129-136, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37957319

ABSTRACT

Visual masking can reveal the timescale of perception, but the underlying circuit mechanisms are not understood. Here we describe a backward masking task in mice and humans in which the location of a stimulus is potently masked. Humans report reduced subjective visibility that tracks behavioral deficits. In mice, both masking and optogenetic silencing of visual cortex (V1) reduce performance over a similar time course but have distinct effects on response rates and accuracy. Activity in V1 is consistent with masked behavior when quantified over long, but not short, time windows. A dual accumulator model recapitulates both mouse and human behavior. The model and subjects' performance imply that the initial spikes in V1 can trigger a correct response, but that subsequent V1 activity degrades performance. Supporting this hypothesis, optogenetically suppressing mask-evoked activity in V1 fully restores accurate behavior. Together, these results demonstrate that mice, like humans, are susceptible to masking and that target and mask information is first confounded downstream of V1.


Subject(s)
Perceptual Masking , Visual Cortex , Humans , Mice , Animals , Perceptual Masking/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Visual Perception/physiology
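A dual accumulator of the kind the paper describes can be sketched as two noisy integrators racing to a bound, with mask evidence arriving one stimulus-onset asynchrony (SOA) after target evidence. All parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def trial(soa, drift=0.12, mask_drift=0.12, noise=0.3, bound=10.0, t_max=300):
    """Race between a target-side and a mask-side accumulator (1 step = 1 ms, assumed)."""
    a_target, a_mask = 0.0, 0.0
    for t in range(t_max):
        a_target += drift + noise * rng.normal()
        if t >= soa:                      # mask evidence arrives after the SOA
            a_mask += mask_drift + noise * rng.normal()
        if a_target >= bound:
            return True                   # correct response wins the race
        if a_mask >= bound:
            return False                  # mask-driven (incorrect) response
    return False                          # lapse: no bound reached

for soa in (17, 50, 100):
    acc = np.mean([trial(soa) for _ in range(500)])
    print(f"SOA {soa} ms: accuracy {acc:.2f}")  # accuracy recovers at long SOAs
```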
5.
Netw Neurosci ; 7(4): 1497-1512, 2023.
Article in English | MEDLINE | ID: mdl-38144695

ABSTRACT

The Allen Mouse Brain Connectivity Atlas consists of anterograde tracing experiments targeting diverse structures and classes of projecting neurons. Beyond regional anterograde tracing done in C57BL/6 wild-type mice, a large fraction of experiments are performed using transgenic Cre-lines. This allows access to cell-class-specific whole-brain connectivity information, with class defined by the transgenic lines. However, even though the number of experiments is large, it does not come close to covering all existing cell classes in every area where they exist. Here, we study how much we can fill in these gaps and estimate the cell-class-specific connectivity function, given the simplifying assumptions that nearby voxels have smoothly varying projections but that these projection tensors can change sharply depending on the region and class of the projecting cells. This paper describes the conversion of Cre-line tracer experiments into class-specific connectivity matrices representing the connection strengths between source and target structures. We introduce and validate a novel statistical model for the creation of connectivity matrices. We extend the Nadaraya-Watson kernel learning method that we previously used to fill in spatial gaps so that it also fills in gaps in cell-class connectivity information. To do this, we construct a "cell-class space" based on class-specific averaged regionalized projections and combine smoothing in 3D space as well as in this abstract space to share information between similar neuron classes. Using this method, we construct a set of connectivity matrices using multiple levels of resolution at which discontinuities in connectivity are assumed. We show that the connectivities obtained from this model display the expected cell-type- and structure-specific patterns. We also show that the wild-type connectivity matrix can be factored using a sparse set of factors, and we analyze the informativeness of this latent variable model.
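At its core, the Nadaraya-Watson step is a kernel-weighted average of observed projection vectors. A minimal sketch, with a Gaussian kernel and a single combined coordinate space (3D position concatenated with an abstract cell-class coordinate) standing in for the paper's separate spatial and class smoothing:

```python
import numpy as np

def nadaraya_watson(x_query, X_obs, Y_obs, bandwidth=1.0):
    """Kernel-weighted average of observed projection vectors.

    X_obs: (n, d) coordinates (e.g., 3D voxel position + cell-class coordinate).
    Y_obs: (n, t) projection strengths to t target structures.
    """
    d2 = ((X_obs - x_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))     # Gaussian kernel weights
    w = w / w.sum()
    return w @ Y_obs                          # smoothed projection estimate

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (100, 4))             # 3 spatial dims + 1 class dim (toy)
Y = np.abs(rng.normal(size=(100, 6)))        # toy projection strengths
print(nadaraya_watson(np.array([5, 5, 5, 0.5]), X, Y))
```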

6.
Neural Netw ; 168: 615-630, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37839332

ABSTRACT

Humans and other animals navigate different environments effortlessly, their brains rapidly and accurately generalizing across contexts. Despite recent progress in deep learning, this flexibility remains a challenge for many artificial systems. Here, we show how a bio-inspired network motif can explicitly address this issue. We do this using a dataset of MNIST digits of varying transparency, set on one of two backgrounds of different statistics that define two contexts: a pixel-wise noise or a more naturalistic background from the CIFAR-10 dataset. After learning digit classification when both contexts are shown sequentially, we find that both shallow and deep networks have sharply decreased performance when returning to the first background - an instance of the catastrophic forgetting phenomenon known from continual learning. To overcome this, we propose the bottleneck-switching network, or switching network for short. This is a bio-inspired architecture analogous to a well-studied network motif in the visual cortex, with additional "switching" units that are activated in the presence of a new background, assuming a priori a contextual signal to turn these units on or off. Intriguingly, only a few of these switching units are sufficient to enable the network to learn the new context without catastrophic forgetting, through inhibition of redundant background features. Further, the bottleneck-switching network can generalize to novel contexts similar to contexts it has learned. Importantly, we find that - again as in the underlying biological network motif - recurrently connecting the switching units to network layers is advantageous for context generalization.


Subject(s)
Brain , Neural Networks, Computer , Humans , Brain/physiology , Generalization, Psychological
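The switching motif reduces to a handful of context-gated units that inhibit background-driven features. A toy sketch under assumed sizes and a hard contextual on/off flag (the paper's trained, cortex-inspired architecture is richer than this):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, n_switch = 784, 128, 4

W = rng.normal(0, 0.05, (d_hidden, d_in))              # feature weights (toy)
S = np.abs(rng.normal(0, 0.5, (d_hidden, n_switch)))   # inhibitory switch weights >= 0

def forward(x, context_on):
    """Switch units, driven by a contextual flag, subtract from hidden activity."""
    s = np.ones(n_switch) if context_on else np.zeros(n_switch)
    return np.maximum(0.0, W @ x - S @ s)   # ReLU hidden layer with gated inhibition

x = rng.random(d_in)
print(forward(x, False).mean(), forward(x, True).mean())  # inhibition sparsifies activity
```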
7.
bioRxiv ; 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37645978

ABSTRACT

Since environments are constantly in flux, the brain's ability to identify novel stimuli that fall outside its own internal representation of the world is crucial for an organism's survival. Within the mammalian neocortex, inhibitory microcircuits are proposed to regulate activity in an experience-dependent manner, and different inhibitory neuron subtypes exhibit distinct novelty responses. Discerning the function of diverse neural circuits and their modulation by experience can be daunting unless one has a biologically plausible mechanism to detect and learn from novel experiences that is both understandable and flexible. Here we introduce a learning mechanism, familiarity-modulated synapses (FMSs), through which a network response that encodes novelty emerges from unsupervised synaptic modifications depending only on presynaptic or on both pre- and postsynaptic activity. FMSs stand apart from other familiarity mechanisms in their simplicity: they operate under continual learning, do not require specialized architecture, and can distinguish novelty rapidly without requiring feedback. Implementing FMSs within a model of a visual cortical circuit that includes multiple inhibitory populations, we simultaneously reproduce three distinct novelty effects recently observed in experimental data from visual cortical circuits in mice: absolute, contextual, and omission novelty. Additionally, our model produces a set of diverse physiological responses across cell subpopulations, allowing us to analyze how their connectivity and synaptic dynamics influence their distinct behavior, leading to predictions that can be tested in experiment. Altogether, our findings demonstrate how experimentally-constrained cortical circuit structure can give rise to qualitatively distinct novelty responses using simple plasticity mechanisms. The flexibility of FMSs opens the door to computationally and theoretically investigating how distinct synapse modulations can lead to a variety of experience-dependent responses in a simple, understandable, and biologically plausible setup.

8.
Neural Comput ; 35(4): 555-592, 2023 03 18.
Article in English | MEDLINE | ID: mdl-36827598

ABSTRACT

Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.


Subject(s)
Time Perception , Action Potentials/physiology , Models, Neurological , Neurons/physiology , Neural Networks, Computer
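The GLIFR model's key ingredients, a leaky voltage, slow after-spike currents recruited by firing, and a smooth rate nonlinearity that keeps everything differentiable, can be sketched as follows; the time constants and sigmoid surrogate are illustrative assumptions rather than the published parameterization:

```python
import numpy as np

def glifr_step(v, a, I_in, dt=1.0, tau_v=20.0, tau_a=100.0, k_a=-0.5,
               v_th=1.0, sigma=0.1):
    """One Euler step of a GLIFR-like rate neuron.

    v: membrane voltage; a: after-spike current (decays slowly, driven by firing).
    Returns updated (v, a) and the instantaneous firing rate.
    """
    rate = 1.0 / (1.0 + np.exp(-(v - v_th) / sigma))  # smooth spiking surrogate
    dv = (-v + I_in + a) / tau_v
    da = (-a + k_a * rate) / tau_a   # firing recruits a (here hyperpolarizing) current
    return v + dt * dv, a + dt * da, rate

v, a = 0.0, 0.0
for t in range(300):
    v, a, r = glifr_step(v, a, I_in=1.5)
    if t % 100 == 0:
        print(t, round(r, 3))  # rate adapts as the after-spike current builds up
```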
9.
Elife ; 12, 2023 02 23.
Article in English | MEDLINE | ID: mdl-36820526

ABSTRACT

In addition to long-timescale rewiring, synapses in the brain are subject to significant modulation that occurs at faster timescales and endows the brain with additional means of processing information. Despite this, models of the brain like recurrent neural networks (RNNs) often have their weights frozen after training, relying on an internal state stored in neuronal activity to hold task-relevant information. In this work, we study the computational potential and resulting dynamics of a network that relies solely on synaptic modulation during inference to process task-relevant information, the multi-plasticity network (MPN). Because the MPN has no recurrent connections, we can isolate the computational capabilities and dynamical behavior contributed by synaptic modulation alone. The generality of the MPN allows our results to apply to synaptic modulation mechanisms ranging from short-term synaptic plasticity (STSP) to slower modulations such as spike-timing-dependent plasticity (STDP). We thoroughly examine the neural population dynamics of the MPN trained on integration-based tasks and compare them to known RNN dynamics, finding the two to have fundamentally different attractor structure. These differences in dynamics allow the MPN to outperform its RNN counterparts on several neuroscience-relevant tests. Training the MPN across a battery of neuroscience tasks, we find that its computational capabilities in such settings are comparable to those of networks that compute with recurrent connections. Altogether, we believe this work demonstrates the possibilities of computing with synaptic modulations and highlights important motifs of these computations so that they can be identified in brain-like systems.


Subject(s)
Brain , Neuronal Plasticity , Synapses , Brain/physiology , Neural Networks, Computer
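The MPN's defining operation is a frozen weight matrix whose effective strength is scaled by a fast modulation state updated Hebbian-style during inference. A minimal sketch, with the update constants and exact functional form assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 20, 50
W = rng.normal(0, 1 / np.sqrt(d_in), (d_hid, d_in))  # frozen after training

def run_sequence(xs, eta=0.5, decay=0.9):
    """Process a sequence; task information lives in M, not in recurrent activity."""
    M = np.zeros_like(W)                              # synaptic modulation state
    for x in xs:
        h = np.tanh((W * (1.0 + M)) @ x)              # modulated effective weights
        M = decay * M + eta * np.outer(h, x)          # Hebbian pre/post update
    return h, M

xs = rng.normal(size=(10, d_in))
h, M = run_sequence(xs)
print(h.shape, np.abs(M).mean())  # a trained linear readout on h would give the answer
```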
10.
Nat Neurosci ; 26(2): 350-364, 2023 02.
Article in English | MEDLINE | ID: mdl-36550293

ABSTRACT

Identification of structural connections between neurons is a prerequisite to understanding brain function. Here we developed a pipeline to systematically map brain-wide monosynaptic input connections to genetically defined neuronal populations using an optimized rabies tracing system. We used mouse visual cortex as the exemplar system and revealed quantitative target-specific, layer-specific and cell-class-specific differences in its presynaptic connectomes. The retrograde connectivity indicates the presence of ventral and dorsal visual streams and further reveals topographically organized and continuously varying subnetworks mediated by different higher visual areas. The visual cortex hierarchy can be derived from intracortical feedforward and feedback pathways mediated by upper-layer and lower-layer input neurons. We also identify a new role for layer 6 neurons in mediating reciprocal interhemispheric connections. This study expands our knowledge of the visual system connectomes and demonstrates that the pipeline can be scaled up to dissect connectivity of different cell populations across the mouse brain.


Subject(s)
Connectome , Visual Cortex , Mice , Animals , Neurons/physiology , Brain/physiology , Visual Cortex/physiology , Visual Pathways
11.
Science ; 378(6626): eabq6740, 2022 12 23.
Article in English | MEDLINE | ID: mdl-36480599

ABSTRACT

Learning to predict rewards based on environmental cues is essential for survival. It is believed that animals learn to predict rewards by updating predictions whenever the outcome deviates from expectations, and that such reward prediction errors (RPEs) are signaled by the mesolimbic dopamine system-a key controller of learning. However, instead of learning prospective predictions from RPEs, animals can infer predictions by learning the retrospective cause of rewards. Hence, whether mesolimbic dopamine instead conveys a causal associative signal that sometimes resembles RPE remains unknown. We developed an algorithm for retrospective causal learning and found that mesolimbic dopamine release conveys causal associations but not RPE, thereby challenging the dominant theory of reward learning. Our results reshape the conceptual and biological framework for associative learning.


Subject(s)
Association Learning , Dopamine , Limbic System , Reward , Animals , Dopamine/metabolism , Limbic System/metabolism , Cues , Mice
12.
PLoS Comput Biol ; 18(11): e1010716, 2022 11.
Article in English | MEDLINE | ID: mdl-36441762

ABSTRACT

Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli which are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e., a random masking of nodes/weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g., by preventing overfitting.


Subject(s)
Neural Networks, Computer , Animals , Mice
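The dropout analogy can be caricatured numerically: resample a dropout mask each "session" so that the units carrying the code turn over, while a decoder fit on one session keeps working because inner products are preserved in expectation. This toy is a schematic of the claim, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_classes, keep = 500, 5, 0.7
mu = rng.normal(size=(n_classes, n_units))       # fixed class tuning (the "task")

def session(n_trials=200, noise=0.5):
    """One 'day': a fresh dropout mask reshapes which units carry the code."""
    mask = (rng.random(n_units) < keep) / keep   # inverted-dropout rescaling
    labels = rng.integers(0, n_classes, n_trials)
    X = (mu[labels] + noise * rng.normal(size=(n_trials, n_units))) * mask
    return X, labels, mask

X1, y1, m1 = session()
X2, y2, m2 = session()
# dot-product (prototype) decoder fit on day 1
means = np.stack([X1[y1 == c].mean(0) for c in range(n_classes)])
acc = (np.argmax(X2 @ means.T, 1) == y2).mean()
print("cross-day decoding:", acc)                       # stays high despite drift
print("unit turnover:", ((m1 > 0) != (m2 > 0)).mean())  # fraction of units changing
```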
13.
PLoS Comput Biol ; 18(9): e1010427, 2022 09.
Article in English | MEDLINE | ID: mdl-36067234

ABSTRACT

Convolutional neural networks trained on object recognition derive inspiration from the neural architecture of the visual system in mammals, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the deep hierarchical organization of primates, the visual system of the mouse has a shallower arrangement. Since mice and primates are both capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex, MouseNet. The architecture and structural parameters of the network are derived from experimental measurements, specifically the 100-micrometer-resolution interareal connectome, the estimates of numbers of neurons in each area and cortical layer, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. Using a well-studied image classification task as our working example, we demonstrate the computational capability of this mouse-sized network. Given its relatively small size, MouseNet achieves roughly two-thirds of the ImageNet performance of VGG16. In combination with the large-scale Allen Brain Observatory Visual Coding dataset, we use representational similarity analysis to quantify the extent to which MouseNet recapitulates the neural representation in mouse visual cortex. Importantly, we provide evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point. We demonstrate that the distributions of some physiological quantities are closer to the observed distributions in the mouse brain after task training. We encourage the use of the MouseNet architecture by making the code freely available.


Subject(s)
Neural Networks, Computer , Visual Cortex , Animals , Mammals , Mice , Neurons/physiology , Visual Cortex/physiology , Visual Perception
14.
Neural Comput ; 34(3): 541-594, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35016220

ABSTRACT

As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them; this is a particular case of a flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, achieving superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.


Subject(s)
Visual Cortex , Animals , Brain , Learning , Neurons/physiology , Photic Stimulation , Visual Cortex/physiology , Visual Perception/physiology
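The disinhibitory switch described here (context drives VIP, VIP inhibits SST, SST inhibits pyramidal cells) can be written as a steady-state caricature; the weights and drives below are assumed values chosen only to make the gating visible:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def circuit(ff_input, context_drive, w_sst_pyr=1.0, w_vip_sst=1.5, sst_drive=1.0):
    """Steady-state disinhibitory motif: context -> VIP -| SST -| Pyr."""
    vip = relu(context_drive)                # VIP cells driven by the contextual signal
    sst = relu(sst_drive - w_vip_sst * vip)  # VIP suppresses SST
    pyr = relu(ff_input - w_sst_pyr * sst)   # SST suppresses pyramidal output
    return pyr

print(circuit(ff_input=1.0, context_drive=0.0))  # context off: SST gates Pyr down
print(circuit(ff_input=1.0, context_drive=1.0))  # context on: disinhibition opens the gate
```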
15.
Proc Natl Acad Sci U S A ; 118(51), 2021 12 21.
Article in English | MEDLINE | ID: mdl-34916291

ABSTRACT

Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type-specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type-specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.


Subject(s)
Nerve Net/physiology , Neurons/physiology , Synapses/physiology , Computer Simulation , Learning/physiology , Ligands , Models, Neurological , Neural Networks, Computer , Neuronal Plasticity/genetics , Receptors, G-Protein-Coupled/genetics , Receptors, G-Protein-Coupled/metabolism , Spatio-Temporal Analysis , Synaptic Transmission
16.
PLoS Comput Biol ; 17(9): e1009246, 2021 09.
Article in English | MEDLINE | ID: mdl-34534203

ABSTRACT

The maintenance of short-term memories is critical for survival in a dynamically changing world. Previous studies suggest that this memory can be stored in the form of persistent neural activity or using a synaptic mechanism, such as short-term plasticity. Here, we compare the predictions of these two mechanisms to neural and behavioral measurements in a visual change detection task. Mice were trained to respond to changes in a repeated sequence of natural images while neural activity was recorded using two-photon calcium imaging. We also trained two types of artificial neural networks on the same change detection task as the mice. Following fixed pre-processing using a pretrained convolutional neural network, either a recurrent neural network (RNN) or a feedforward neural network with short-term synaptic depression (STPNet) was trained to the same level of performance as the mice. While both networks are able to learn the task, the STPNet model contains units whose activity is more similar to the in vivo data and produces errors more similar to those of the mice. When images are omitted, an unexpected perturbation that was absent during training, mice often do not respond to the omission but are more likely to respond to the subsequent image. Unlike the RNN model, STPNet produces a similar pattern of behavior. These results suggest that simple neural adaptation mechanisms may serve as an important bottom-up memory signal in this task, which can be used by downstream areas in the decision-making process.


Subject(s)
Adaptation, Physiological , Memory, Short-Term , Photic Stimulation , Visual Perception , Animals , Behavior, Animal , Computational Biology/methods , Decision Making , Mice , Neural Networks, Computer , Task Performance and Analysis
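STPNet's bottom-up memory signal rests on short-term synaptic depression. A standard resource-depletion model of such depression (Tsodyks-Markram style; the time constants and utilization below are assumptions, not STPNet's fitted values) reproduces the adapt-and-recover pattern described:

```python
import numpy as np

def depressing_synapse(rates, dt=0.1, tau_rec=1.5, U=0.5):
    """Resource x recovers with tau_rec and is consumed in proportion to drive."""
    x, out = 1.0, []
    for r in rates:
        dx = (1.0 - x) / tau_rec - U * x * r  # recovery minus utilization
        x = np.clip(x + dt * dx, 0.0, 1.0)
        out.append(U * x * r)                  # effective synaptic output
    return np.array(out)

# repeated image presentations -> adapting response; gaps let x recover
drive = np.tile([4.0] * 5 + [0.0] * 5, 8)
resp = depressing_synapse(drive)
print(resp[:5].round(3), resp[70:75].round(3))  # first vs late presentations
```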
17.
Front Synaptic Neurosci ; 13: 703621, 2021.
Article in English | MEDLINE | ID: mdl-34456706

ABSTRACT

Neuromodulation can profoundly impact the gain and polarity of postsynaptic changes in Hebbian synaptic plasticity. An emerging pattern observed in multiple central synapses is a pull-push type of control in which activation of receptors coupled to the G-protein Gs promotes long-term potentiation (LTP) at the expense of long-term depression (LTD), whereas receptors coupled to Gq promote LTD at the expense of LTP. Notably, coactivation of both Gs- and Gq-coupled receptors enhances the gain of both LTP and LTD. To account for these observations, we propose a simple kinetic model in which AMPA receptors (AMPARs) are trafficked between multiple subcompartments in and around the postsynaptic spine. In the model, AMPARs in the postsynaptic density compartment (PSD) are the primary contributors to synaptic conductance. During LTP induction, AMPARs are trafficked to the PSD primarily from a relatively small perisynaptic (peri-PSD) compartment. Gs-coupled receptors promote LTP by replenishing the peri-PSD compartment through increased AMPAR exocytosis from a pool of endocytic AMPARs. During LTD induction, AMPARs are trafficked in the reverse direction, from the PSD to the peri-PSD compartment, and Gq-coupled receptors promote LTD by clearing the peri-PSD compartment through increased AMPAR endocytosis. We claim that the model not only captures essential features of the pull-push neuromodulation of synaptic plasticity but is also consistent with other actions of neuromodulators observed in slice experiments and compatible with the current understanding of AMPAR trafficking.
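The proposed kinetic scheme amounts to a three-pool exchange (endocytic pool, peri-PSD, PSD) in which Gs tone scales exocytosis into the peri-PSD compartment and Gq tone scales endocytosis out of it. A minimal sketch with made-up rate constants:

```python
def simulate(gs=1.0, gq=1.0, T=200, dt=0.01,
             k_exo=0.05, k_endo=0.05, k_in=0.02, k_out=0.02):
    """Three AMPAR pools: endocytic (E), perisynaptic (P), PSD (D).

    Gs scales exocytosis into peri-PSD; Gq scales endocytosis out of it.
    Synaptic strength is read out as the PSD pool D.
    """
    E, P, D = 1.0, 0.2, 1.0
    for _ in range(int(T / dt)):
        exo = gs * k_exo * E          # Gs-coupled receptors boost exocytosis
        endo = gq * k_endo * P        # Gq-coupled receptors boost endocytosis
        to_psd, from_psd = k_in * P, k_out * D
        E += dt * (endo - exo)
        P += dt * (exo - endo + from_psd - to_psd)
        D += dt * (to_psd - from_psd)
    return D

print(simulate(gs=2.0, gq=1.0))  # Gs tone: larger peri-PSD supply -> LTP-biased
print(simulate(gs=1.0, gq=2.0))  # Gq tone: peri-PSD cleared -> LTD-biased
```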

18.
Neural Netw ; 141: 330-343, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33957382

ABSTRACT

Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that - even with well-constrained neural dynamics - there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods.


Subject(s)
Connectome , Neural Networks, Computer
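The ambiguity at issue is easy to verify numerically in the linear case: conjugating encoder and decoder weights by any invertible matrix leaves the input-output map unchanged, which is exactly why connectivity underdetermines function until regularization breaks the tie. A sketch, assuming linear activations for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 3
W_in = rng.normal(size=(h, d))    # encoder weights
W_out = rng.normal(size=(d, h))   # decoder weights

A = rng.normal(size=(h, h))       # arbitrary invertible change of basis
W_in2 = np.linalg.inv(A) @ W_in   # modified encoder...
W_out2 = W_out @ A                # ...undone exactly by the decoder

x = rng.normal(size=d)
print(np.allclose(W_out @ (W_in @ x), W_out2 @ (W_in2 @ x)))  # True: same function
# An L2 penalty on (W_in, W_out) breaks this tie, selecting a particular basis A.
```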
19.
Neuron ; 109(3): 545-559.e8, 2021 02 03.
Article in English | MEDLINE | ID: mdl-33290731

ABSTRACT

The evolutionarily conserved default mode network (DMN) is a distributed set of brain regions coactivated during resting states that is vulnerable to brain disorders. How disease affects the DMN is unknown, but detailed anatomical descriptions could provide clues. Mice offer an opportunity to investigate structural connectivity of the DMN across spatial scales with cell-type resolution. We co-registered maps from functional magnetic resonance imaging and axonal tracing experiments into the 3D Allen mouse brain reference atlas. We find that the mouse DMN consists of preferentially interconnected cortical regions. As a population, DMN layer 2/3 (L2/3) neurons project almost exclusively to other DMN regions, whereas L5 neurons project in and out of the DMN. In the retrosplenial cortex, a core DMN region, we identify two L5 projection types differentiated by in- or out-DMN targets, laminar position, and gene expression. These results provide a multi-scale description of the anatomical correlates of the mouse DMN.


Subject(s)
Brain/diagnostic imaging , Default Mode Network/diagnostic imaging , Nerve Net/diagnostic imaging , Neurons/physiology , Animals , Brain/cytology , Connectome , Default Mode Network/cytology , Magnetic Resonance Imaging , Mice , Nerve Net/cytology , Neurons/cytology
20.
PLoS Comput Biol ; 16(11): e1008386, 2020 11.
Article in English | MEDLINE | ID: mdl-33253147

ABSTRACT

Experimental studies in neuroscience are producing data at a rapidly increasing rate, providing exciting opportunities and formidable challenges to existing theoretical and modeling approaches. To turn massive datasets into predictive quantitative frameworks, the field needs software solutions for systematic integration of data into realistic, multiscale models. Here we describe the Brain Modeling ToolKit (BMTK), a software suite for building models and performing simulations at multiple levels of resolution, from biophysically detailed multi-compartmental, to point-neuron, to population-statistical approaches. Leveraging the SONATA file format and existing software such as NEURON, NEST, and others, BMTK offers a consistent user experience across multiple levels of resolution. It permits highly sophisticated simulations to be set up with little coding required, thus lowering entry barriers to new users. We illustrate successful applications of BMTK to large-scale simulations of a cortical area. BMTK is an open-source package provided as a resource supporting modeling-based discovery in the community.


Subject(s)
Brain Mapping/methods , Brain/physiology , Computational Biology , Software , Action Potentials , Biophysical Phenomena , Humans , Nerve Net
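For orientation, BMTK's builder interface assembles a network from node and edge declarations before a simulation backend (e.g., NEST via PointNet) is chosen. The snippet below follows the pattern in BMTK's public documentation, but exact parameter names vary across versions and model levels, so treat it as a sketch:

```python
from bmtk.builder.networks import NetworkBuilder

# Declare a toy point-neuron population and its recurrent connections.
net = NetworkBuilder('v1')
net.add_nodes(N=100, pop_name='exc', model_type='point_process',
              model_template='nest:iaf_psc_alpha')
net.add_edges(source={'pop_name': 'exc'}, target={'pop_name': 'exc'},
              connection_rule=5,            # 5 synapses per source-target pair
              syn_weight=2.0, delay=1.5,
              model_template='static_synapse')
net.build()
net.save_nodes(output_dir='network')        # writes SONATA-format network files
net.save_edges(output_dir='network')
```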