Results 1 - 20 of 84
1.
PLoS Comput Biol ; 20(5): e1012056, 2024 May.
Article in English | MEDLINE | ID: mdl-38781156

ABSTRACT

Responses to natural stimuli in area V4 (a mid-level area of the visual ventral stream) are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for the functional role of V4 in object classification. However, we currently do not know if and to what extent V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 (and V1 for comparison) by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks including semantic, geometric, 2D, and 3D types of tasks. We found that V4 was best predicted by semantic classification features and exhibited high task selectivity, while the choice of task was less consequential to V1 performance. Consistent with traditional characterizations of V4 function that show its high-dimensional tuning to various 2D and 3D stimulus directions, we found that diverse non-semantic tasks explained aspects of V4 function that are not captured by individual semantic tasks. Nevertheless, jointly considering the features of a pair of semantic classification tasks was sufficient to yield one of our top V4 models, solidifying V4's main functional role in semantic processing and suggesting that V4's selectivity to 2D or 3D stimulus properties found by electrophysiologists can result from semantic functional goals.
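The model-comparison logic this abstract rests on (fit a linear readout from each task's features to a neuron's responses, then rank tasks by held-out predictive performance) can be sketched on synthetic data. Everything below, including the feature matrices, the single "neuron", and the split sizes, is an illustrative stand-in, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: two "CNN feature" matrices for the same 200 images,
# and responses of one "neuron" that depend on the first feature set only.
n_img, n_feat = 200, 20
feats_semantic = rng.normal(size=(n_img, n_feat))   # hypothetical task-A features
feats_geometric = rng.normal(size=(n_img, n_feat))  # hypothetical task-B features
w_true = rng.normal(size=n_feat)
responses = feats_semantic @ w_true + 0.5 * rng.normal(size=n_img)

def heldout_corr(features, y, n_train=150):
    """Fit a linear readout on a training split; return test-set correlation."""
    Xtr, Xte = features[:n_train], features[n_train:]
    ytr, yte = y[:n_train], y[n_train:]
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return np.corrcoef(Xte @ w, yte)[0, 1]

corr_semantic = heldout_corr(feats_semantic, responses)
corr_geometric = heldout_corr(feats_geometric, responses)
# The feature set matching the neuron's "functional goal" predicts better.
```

In the study itself the readouts are fit from 23 task-trained CNNs to recorded V4 and V1 responses; held-out prediction performance plays the role of `heldout_corr` here.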


Subject(s)
Models, Neurological , Neural Networks, Computer , Semantics , Visual Cortex , Animals , Visual Cortex/physiology , Computational Biology , Photic Stimulation , Neurons/physiology , Macaca mulatta , Macaca
2.
ArXiv ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38560735

ABSTRACT

Identifying cell types and understanding their functional properties is crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization and cluster reassignment, akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types quickly and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron.
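The alternation described above can be illustrated with a deliberately simplified toy: if "neurons" are linear filters, the stimulus that best drives a cluster under a unit-norm budget reduces to the normalized mean filter, and the loop becomes a spherical-k-means-style EM. This sketches only the algorithmic skeleton; the actual method optimizes stimuli through deep predictive models of real neurons:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "neurons": linear units whose response to stimulus s is w @ s,
# drawn from two hidden functional types with different preferred directions.
d = 10
centers = rng.normal(size=(2, d))
labels_true = np.repeat([0, 1], 30)
W = centers[labels_true] + 0.3 * rng.normal(size=(60, d))

# EM-style alternation: given assignments, the most discriminative stimulus
# for a cluster of linear neurons (unit-norm budget) is the normalized mean
# weight vector; then each neuron is reassigned to the stimulus driving it most.
assign = rng.integers(0, 2, size=60)
for _ in range(10):
    stimuli = np.stack([
        W[assign == k].mean(axis=0) if np.any(assign == k) else rng.normal(size=d)
        for k in range(2)
    ])
    stimuli /= np.linalg.norm(stimuli, axis=1, keepdims=True)
    assign = np.argmax(W @ stimuli.T, axis=1)

# Recovered clusters should match the hidden types up to label permutation.
acc = max(np.mean(assign == labels_true), np.mean(assign != labels_true))
```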

3.
Article in English | MEDLINE | ID: mdl-38415197

ABSTRACT

Over the past two decades, Biomedical Engineering has emerged as a major discipline that bridges societal needs of human health care with the development of novel technologies. Every medical institution is now equipped, to varying degrees of sophistication, with the ability to monitor human health in both non-invasive and invasive modes. The multiple scales at which human physiology can be interrogated provide a profound perspective on health and disease. We are at the nexus of creating "avatars" (herein defined as an extension of "digital twins") of human patho/physiology to serve as paradigms for interrogation and potential intervention. Motivated by the emergence of these new capabilities, the IEEE Engineering in Medicine and Biology Society, the Department of Biomedical Engineering at Johns Hopkins University, and the Department of Bioengineering at the University of California, San Diego sponsored an interdisciplinary workshop to define the grand challenges that face biomedical engineering and the mechanisms to address these challenges. The Workshop identified five grand challenges with cross-cutting themes and provided a roadmap for new technologies, identified new training needs, and defined the types of interdisciplinary teams needed for addressing these challenges. The themes presented in this paper include: 1) accumedicine through creation of avatars of cells, tissues, organs, and the whole human; 2) development of smart and responsive devices for human function augmentation; 3) exocortical technologies to understand brain function and treat neuropathologies; 4) the development of approaches to harness the human immune system for health and wellness; and 5) new strategies to engineer genomes and cells.

4.
Methods Mol Biol ; 2752: 227-243, 2024.
Article in English | MEDLINE | ID: mdl-38194038

ABSTRACT

Cells exhibit diverse morphologic phenotypes, biophysical and functional properties, and gene expression patterns. Understanding how these features are interrelated at the level of single cells has been challenging due to the lack of techniques for multimodal profiling of individual cells. We recently developed Patch-seq, a technique that combines whole-cell patch clamp recording, immunohistochemistry, and single-cell RNA-sequencing (scRNA-seq) to comprehensively profile single cells. Here we present a detailed step-by-step protocol for obtaining high-quality morphological, electrophysiological, and transcriptomic data from single cells. Patch-seq enables researchers to explore the rich, multidimensional phenotypic variability among cells and to directly correlate gene expression with phenotype at the level of single cells.


Subject(s)
Gene Expression Profiling , Transcriptome , Biophysics , Patch-Clamp Techniques , Electrophysiology
5.
bioRxiv ; 2024 Jan 06.
Article in English | MEDLINE | ID: mdl-36747710

ABSTRACT

Mammalian cortex features a vast diversity of neuronal cell types, each with characteristic anatomical, molecular and functional properties. Synaptic connectivity powerfully shapes how each cell type participates in the cortical circuit, but mapping connectivity rules at the resolution of distinct cell types remains difficult. Here, we used millimeter-scale volumetric electron microscopy to investigate the connectivity of all inhibitory neurons across a densely-segmented neuronal population of 1352 cells spanning all layers of mouse visual cortex, producing a wiring diagram of inhibitory connections with more than 70,000 synapses. Taking a data-driven approach inspired by classical neuroanatomy, we classified inhibitory neurons based on the relative targeting of dendritic compartments and other inhibitory cells and developed a novel classification of excitatory neurons based on their morphological and synaptic input properties. The synaptic connectivity between inhibitory cells revealed a novel class of disinhibitory specialists that target basket cells, in addition to familiar subclasses. Analysis of the inhibitory connectivity onto excitatory neurons found widespread specificity, with many interneurons exhibiting differential targeting of certain subpopulations spatially intermingled with other potential targets. Inhibitory targeting was organized into "motif groups," diverse sets of cells that collectively target both perisomatic and dendritic compartments of the same excitatory targets. Collectively, our analysis identified new organizing principles for cortical inhibition and will serve as a foundation for linking modern multimodal neuronal atlases with the cortical wiring diagram.

6.
ArXiv ; 2023 May 31.
Article in English | MEDLINE | ID: mdl-37396602

ABSTRACT

Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. 
We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.

7.
bioRxiv ; 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37333280

ABSTRACT

Color is an important visual feature that informs behavior, and the retinal basis for color vision has been studied across various vertebrate species. While we know how color information is processed in visual brain areas of primates, we have limited understanding of how it is organized beyond the retina in other species, including most dichromatic mammals. In this study, we systematically characterized how color is represented in the primary visual cortex (V1) of mice. Using large-scale neuronal recordings and a luminance and color noise stimulus, we found that more than a third of neurons in mouse V1 are color-opponent in their receptive field center, while the receptive field surround predominantly captures luminance contrast. Furthermore, we found that color-opponency is especially pronounced in posterior V1 that encodes the sky, matching the statistics of mouse natural scenes. Using unsupervised clustering, we demonstrate that the asymmetry in color representations across cortex can be explained by an uneven distribution of green-On/UV-Off color-opponent response types that are represented in the upper visual field. This type of color-opponency in the receptive field center was not present at the level of the retinal output and, therefore, is likely computed in the cortex by integrating upstream visual signals. Finally, a simple model with natural scene-inspired parametric stimuli shows that green-On/UV-Off color-opponent response types may enhance the detection of "predatory"-like dark UV-objects in noisy daylight scenes. The results from this study highlight the relevance of color processing in the mouse visual system and contribute to our understanding of how color information is organized in the visual hierarchy across species. More broadly, they support the hypothesis that visual cortex combines upstream information towards computing neuronal selectivity to behaviorally-relevant sensory features.

8.
bioRxiv ; 2023 May 20.
Article in English | MEDLINE | ID: mdl-37292670

ABSTRACT

In recent years, most exciting inputs (MEIs) synthesized from encoding models of neuronal activity have become an established method to study tuning properties of biological and artificial visual systems. However, as we move up the visual hierarchy, the complexity of neuronal computations increases. Consequently, it becomes more challenging to model neuronal activity, requiring more complex models. In this study, we introduce a new attention readout for a convolutional data-driven core for neurons in macaque V4 that outperforms the state-of-the-art task-driven ResNet model in predicting neuronal responses. However, as the predictive network becomes deeper and more complex, synthesizing MEIs via straightforward gradient ascent (GA) can struggle to produce qualitatively good results and overfit to idiosyncrasies of a more complex model, potentially decreasing the MEI's model-to-brain transferability. To solve this problem, we propose a diffusion-based method for generating MEIs via Energy Guidance (EGG). We show that for models of macaque V4, EGG generates single neuron MEIs that generalize better across architectures than the state-of-the-art GA while preserving the within-architectures activation and requiring 4.7x less compute time. Furthermore, EGG diffusion can be used to generate other neurally exciting images, like most exciting natural images that are on par with a selection of highly activating natural images, or image reconstructions that generalize better across architectures. Finally, EGG is simple to implement, requires no retraining of the diffusion model, and can easily be generalized to provide other characterizations of the visual system, such as invariances. Thus EGG provides a general and flexible framework to study coding properties of the visual system in the context of natural images.

9.
bioRxiv ; 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36993218

ABSTRACT

A defining characteristic of intelligent systems, whether natural or artificial, is the ability to generalize and infer behaviorally relevant latent causes from high-dimensional sensory input, despite significant variations in the environment. To understand how brains achieve generalization, it is crucial to identify the features to which neurons respond selectively and invariantly. However, the high-dimensional nature of visual inputs, the non-linearity of information processing in the brain, and limited experimental time make it challenging to systematically characterize neuronal tuning and invariances, especially for natural stimuli. Here, we extended "inception loops" - a paradigm that iterates between large-scale recordings, neural predictive models, and in silico experiments followed by in vivo verification - to systematically characterize single neuron invariances in the mouse primary visual cortex. Using the predictive model we synthesized Diverse Exciting Inputs (DEIs), a set of inputs that differ substantially from each other while each driving a target neuron strongly, and verified these DEIs' efficacy in vivo. We discovered a novel bipartite invariance: one portion of the receptive field encoded phase-invariant texture-like patterns, while the other portion encoded a fixed spatial pattern. Our analysis revealed that the division between the fixed and invariant portions of the receptive fields aligns with object boundaries defined by spatial frequency differences present in highly activating natural images. These findings suggest that bipartite invariance might play a role in segmentation by detecting texture-defined object boundaries, independent of the phase of the texture. We also replicated these bipartite DEIs in the functional connectomics MICrONs data set, which opens the way towards a circuit-level mechanistic understanding of this novel type of invariance. 
Our study demonstrates the power of using a data-driven deep learning approach to systematically characterize neuronal invariances. By applying this method across the visual hierarchy, cell types, and sensory modalities, we can decipher how latent variables are robustly extracted from natural scenes, leading to a deeper understanding of generalization.

10.
bioRxiv ; 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36993321

ABSTRACT

A key role of sensory processing is integrating information across space. Neuronal responses in the visual system are influenced by both local features in the receptive field center and contextual information from the surround. While center-surround interactions have been extensively studied using simple stimuli like gratings, investigating these interactions with more complex, ecologically-relevant stimuli is challenging due to the high dimensionality of the stimulus space. We used large-scale neuronal recordings in mouse primary visual cortex to train convolutional neural network (CNN) models that accurately predicted center-surround interactions for natural stimuli. These models enabled us to synthesize surround stimuli that strongly suppressed or enhanced neuronal responses to the optimal center stimulus, as confirmed by in vivo experiments. In contrast to the common notion that congruent center and surround stimuli are suppressive, we found that excitatory surrounds appeared to complete spatial patterns in the center, while inhibitory surrounds disrupted them. We quantified this effect by demonstrating that CNN-optimized excitatory surround images have strong similarity in neuronal response space with surround images generated by extrapolating the statistical properties of the center, and with patches of natural scenes, which are known to exhibit high spatial correlations. Our findings cannot be explained by theories like redundancy reduction or predictive coding previously linked to contextual modulation in visual cortex. Instead, we demonstrated that a hierarchical probabilistic model incorporating Bayesian inference, and modulating neuronal responses based on prior knowledge of natural scene statistics, can explain our empirical results. 
We replicated these center-surround effects in the multi-area functional connectomics MICrONS dataset using natural movies as visual stimuli, which opens the way towards understanding circuit-level mechanisms, such as the contributions of lateral and feedback recurrent connections. Our data-driven modeling approach provides a new understanding of the role of contextual interactions in sensory processing and can be adapted across brain areas, sensory modalities, and species.

11.
bioRxiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-36993435

ABSTRACT

Understanding the brain's perception algorithm is a highly intricate problem, as the inherent complexity of sensory inputs and the brain's nonlinear processing make characterizing sensory representations difficult. Recent studies have shown that functional models-capable of predicting large-scale neuronal activity in response to arbitrary sensory input-can be powerful tools for characterizing neuronal representations by enabling high-throughput in silico experiments. However, accurately modeling responses to dynamic and ecologically relevant inputs like videos remains challenging, particularly when generalizing to new stimulus domains outside the training distribution. Inspired by recent breakthroughs in artificial intelligence, where foundation models-trained on vast quantities of data-have demonstrated remarkable capabilities and generalization, we developed a "foundation model" of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice. The model accurately predicted neuronal responses not only to natural videos but also to various new stimulus domains, such as coherent moving dots and noise patterns, underscoring its generalization abilities. The foundation model could also be adapted to new mice with minimal natural movie training data. We applied the foundation model to the MICrONS dataset: a study of the brain that integrates structure with function at unprecedented scale, containing nanometer-scale morphology, connectivity with >500,000,000 synapses, and function of >70,000 neurons within a ~1 mm³ volume spanning multiple areas of the mouse visual cortex. This accurate functional model of the MICrONS data opens the possibility for a systematic characterization of the relationship between circuit structure and function.
By precisely capturing the response properties of the visual cortex and generalizing to new stimulus domains and mice, foundation models can pave the way for a deeper understanding of visual computation.

12.
PLoS Comput Biol ; 19(3): e1010932, 2023 03.
Article in English | MEDLINE | ID: mdl-36972288

ABSTRACT

Machine learning models have difficulty generalizing to data outside of the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks or common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this simple hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all these robust models showed a greater preference for low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.
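The hybrid-image probe mentioned above is straightforward to sketch: combine the low frequencies of one image with the high frequencies of another, and check that a low-frequency-biased "observer" sees the first image. The images, the frequency cutoff, and the observer below are synthetic placeholders, not the paper's stimuli or models:

```python
import numpy as np

rng = np.random.default_rng(2)

def radial_mask(n, cutoff):
    """Boolean low-pass mask on the centered 2-D Fourier plane."""
    fy = np.fft.fftshift(np.fft.fftfreq(n))
    return np.hypot(*np.meshgrid(fy, fy)) <= cutoff

def split_bands(img, cutoff):
    """Return the (low-frequency, high-frequency) components of an image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    m = radial_mask(img.shape[0], cutoff)
    low = np.fft.ifft2(np.fft.ifftshift(F * m)).real
    return low, img - low

n, cutoff = 64, 0.08
img_a = rng.normal(size=(n, n))
img_b = rng.normal(size=(n, n))

low_a, _ = split_bands(img_a, cutoff)
_, high_b = split_bands(img_b, cutoff)
hybrid = low_a + high_b  # carries A's coarse structure and B's fine detail

# A low-frequency-biased "observer": compare blurred (low-passed) versions.
blur = lambda im: split_bands(im, cutoff)[0]
corr = lambda x, y: np.corrcoef(x.ravel(), y.ravel())[0, 1]
sees_a = corr(blur(hybrid), blur(img_a))  # high: observer matches image A
sees_b = corr(blur(hybrid), blur(img_b))  # near zero for unrelated images
```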


Subject(s)
Deep Learning , Neural Networks, Computer , Humans , Visual Perception , Machine Learning , Head
13.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence , Neurosciences , Animals , Humans
14.
Biol Psychiatry ; 94(6): 445-453, 2023 09 15.
Article in English | MEDLINE | ID: mdl-36736418

ABSTRACT

BACKGROUND: Disorders of mood and cognition are prevalent, disabling, and notoriously difficult to treat. Fueling this challenge in treatment is a significant gap in our understanding of their neurophysiological basis.
METHODS: We recorded high-density neural activity from intracranial electrodes implanted in depression-relevant prefrontal cortical regions in 3 human subjects with severe depression. Neural recordings were labeled with depression severity scores across a wide dynamic range using an adaptive assessment that allowed sampling with a temporal frequency greater than that possible with typical rating scales. We modeled these data using regularized regression techniques with region selection to decode depression severity from the prefrontal recordings.
RESULTS: Across prefrontal regions, we found that reduced depression severity is associated with decreased low-frequency neural activity and increased high-frequency activity. When constraining our model to decode using a single region, spectral changes in the anterior cingulate cortex best predicted depression severity in all 3 subjects. Relaxing this constraint revealed unique, individual-specific sets of spatiospectral features predictive of symptom severity, reflecting the heterogeneous nature of depression.
CONCLUSIONS: The ability to decode depression severity from neural activity increases our fundamental understanding of how depression manifests in the human brain and provides a target neural signature for personalized neuromodulation therapies.
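The decoding setup described in METHODS (regularized regression with a single-region constraint) can be sketched as follows. The region names, feature counts, and severity scores below are synthetic stand-ins for illustration only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: spectral-power features from three hypothetical regions,
# with only one region carrying information about a continuous severity score.
n, p = 120, 8
regions = {name: rng.normal(size=(n, p)) for name in ["ACC", "OFC", "dlPFC"]}
severity = regions["ACC"] @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def heldout_mse(X, y, n_train=80):
    """Fit on a training split, score on the held-out remainder."""
    w = ridge_fit(X[:n_train], y[:n_train])
    err = X[n_train:] @ w - y[n_train:]
    return np.mean(err ** 2)

# Single-region constraint: keep the one region whose ridge model best
# predicts held-out severity.
scores = {name: heldout_mse(X, severity) for name, X in regions.items()}
best_region = min(scores, key=scores.get)
```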


Subject(s)
Brain , Depression , Humans , Brain/physiology , Prefrontal Cortex , Brain Mapping/methods , Gyrus Cinguli
15.
Nat Mach Intell ; 4(12): 1185-1197, 2022.
Article in English | MEDLINE | ID: mdl-36567959

ABSTRACT

Incrementally learning new information from a non-stationary stream of data, referred to as 'continual learning', is a key feature of natural intelligence, but a challenging problem for deep neural networks. In recent years, numerous deep learning methods for continual learning have been proposed, but comparing their performances is difficult due to the lack of a common framework. To help address this, we describe three fundamental types, or 'scenarios', of continual learning: task-incremental, domain-incremental and class-incremental learning. Each of these scenarios has its own set of challenges. To illustrate this, we provide a comprehensive empirical comparison of currently used continual learning strategies, by performing the Split MNIST and Split CIFAR-100 protocols according to each scenario. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of the effectiveness of different strategies. The proposed categorization aims to structure the continual learning field, by forming a key foundation for clearly defining benchmark problems.
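The three scenarios can be made concrete with a Split-MNIST-style stream in which task t holds the digit pair (2t, 2t+1), a common construction assumed here for illustration. What changes between scenarios is the prediction target, not the data:

```python
# For the same Split-MNIST-style stream (task t contains digits 2t and 2t+1),
# the three continual-learning scenarios differ only in what must be predicted.

def targets(digit):
    task = digit // 2   # which of the five tasks the example came from
    within = digit % 2  # first or second class within its task
    return {
        "task_incremental":   (task, within),  # task identity given at test time
        "domain_incremental": within,          # same 2-way output, task unknown
        "class_incremental":  digit,           # full 10-way decision, task unknown
    }

t = targets(5)  # digit "5" lives in task 2 as its second class
```

Class-incremental learning is the hardest of the three because the model must both solve the task and infer which task an example came from.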

16.
Elife ; 11, 2022 11 16.
Article in English | MEDLINE | ID: mdl-36382887

ABSTRACT

Learning from experience depends at least in part on changes in neuronal connections. We present the largest map of connectivity to date between cortical neurons of a defined type (layer 2/3 [L2/3] pyramidal cells in mouse primary visual cortex), which was enabled by automated analysis of serial section electron microscopy images with improved handling of image defects (250 × 140 × 90 µm3 volume). We used the map to identify constraints on the learning algorithms employed by the cortex. Previous cortical studies modeled a continuum of synapse sizes by a log-normal distribution. A continuum is consistent with most neural network models of learning, in which synaptic strength is a continuously graded analog variable. Here, we show that synapse size, when restricted to synapses between L2/3 pyramidal cells, is well modeled by the sum of a binary variable and an analog variable drawn from a log-normal distribution. Two synapses sharing the same presynaptic and postsynaptic cells are known to be correlated in size. We show that the binary variables of the two synapses are highly correlated, while the analog variables are not. Binary variation could be the outcome of a Hebbian or other synaptic plasticity rule depending on activity signals that are relatively uniform across neuronal arbors, while analog variation may be dominated by other influences such as spontaneous dynamical fluctuations. We discuss the implications for the longstanding hypothesis that activity-dependent plasticity switches synapses between bistable states.
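The proposed size model (a binary variable plus an independent log-normal analog variable, with only the binary part shared between two synapses of the same connection) is easy to simulate. The binary amplitude 1.5 and log-normal sigma 0.4 below are arbitrary illustrative choices, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Pairs of synapses sharing the same pre- and postsynaptic cell:
# size = binary component (shared within a pair) + independent log-normal part.
n_pairs = 20000
binary = rng.random(n_pairs) < 0.5                # one latent state per connection
b = np.repeat(binary[:, None], 2, axis=1) * 1.5   # both synapses inherit it
analog = rng.lognormal(mean=0.0, sigma=0.4, size=(n_pairs, 2))
size = b + analog

# The binary components of a pair are perfectly correlated by construction,
# the analog components are not, and the total sizes inherit an intermediate
# correlation from the shared binary part.
r_binary = np.corrcoef(b[:, 0], b[:, 1])[0, 1]
r_analog = np.corrcoef(analog[:, 0], analog[:, 1])[0, 1]
r_size = np.corrcoef(size[:, 0], size[:, 1])[0, 1]
```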


Subject(s)
Pyramidal Cells , Synapses , Mice , Animals , Pyramidal Cells/physiology , Synapses/physiology , Neuronal Plasticity/physiology , Microscopy, Electron
17.
Nat Commun ; 13(1): 6389, 2022 10 27.
Article in English | MEDLINE | ID: mdl-36302912

ABSTRACT

Neocortical feedback is critical for attention, prediction, and learning. To mechanistically understand its function requires deciphering its cell-type wiring. Recent studies revealed that feedback from primary motor to primary somatosensory areas in mice is disinhibitory, targeting vasoactive intestinal peptide-expressing interneurons, in addition to pyramidal cells. It is unknown whether this circuit motif represents a general cortico-cortical feedback organizing principle. Here we show that in contrast to this wiring rule, feedback from the higher-order lateromedial visual area to primary visual cortex preferentially activates somatostatin-expressing interneurons. Functionally, both feedback circuits temporally sharpen feed-forward excitation, eliciting a transient increase, followed by a prolonged decrease, in pyramidal cell activity under sustained feed-forward input. However, under feed-forward transient input, the primary motor to primary somatosensory cortex feedback facilitates bursting while lateromedial area to primary visual cortex feedback increases time precision. Our findings argue for multiple cortico-cortical feedback motifs implementing different dynamic non-linear operations.


Subject(s)
Interneurons , Pyramidal Cells , Mice , Animals , Feedback , Interneurons/physiology , Vasoactive Intestinal Peptide
18.
Nature ; 610(7930): 128-134, 2022 10.
Article in English | MEDLINE | ID: mdl-36171291

ABSTRACT

To increase computational flexibility, the processing of sensory inputs changes with behavioural context. In the visual system, active behavioural states characterized by motor activity and pupil dilation enhance sensory responses, but typically leave the preferred stimuli of neurons unchanged. Here we find that behavioural state also modulates stimulus selectivity in the mouse visual cortex in the context of coloured natural scenes. Using population imaging in behaving mice, pharmacology and deep neural network modelling, we identified a rapid shift in colour selectivity towards ultraviolet stimuli during an active behavioural state. This was exclusively caused by state-dependent pupil dilation, which resulted in a dynamic switch from rod to cone photoreceptors, thereby extending their role beyond night and day vision. The change in tuning facilitated the decoding of ethological stimuli, such as aerial predators against the twilight sky. For decades, studies in neuroscience and cognitive science have used pupil dilation as an indirect measure of brain state. Our data suggest that, in addition, state-dependent pupil dilation itself tunes visual representations to behavioural demands by differentially recruiting rods and cones on fast timescales.


Subject(s)
Color , Pupil , Reflex, Pupillary , Vision, Ocular , Visual Cortex , Animals , Darkness , Deep Learning , Mice , Photic Stimulation , Pupil/physiology , Pupil/radiation effects , Reflex, Pupillary/physiology , Retinal Cone Photoreceptor Cells/drug effects , Retinal Cone Photoreceptor Cells/physiology , Retinal Rod Photoreceptor Cells/drug effects , Retinal Rod Photoreceptor Cells/physiology , Time Factors , Ultraviolet Rays , Vision, Ocular/physiology , Visual Cortex/physiology
19.
Cell ; 185(18): 3408-3425.e29, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35985322

ABSTRACT

Genetically encoded voltage indicators are emerging tools for monitoring voltage dynamics with cell-type specificity. However, current indicators enable a narrow range of applications due to poor performance under two-photon microscopy, a method of choice for deep-tissue recording. To improve indicators, we developed a multiparameter high-throughput platform to optimize voltage indicators for two-photon microscopy. Using this system, we identified JEDI-2P, an indicator that is faster, brighter, and more sensitive and photostable than its predecessors. We demonstrate that JEDI-2P can report light-evoked responses in axonal termini of Drosophila interneurons and the dendrites and somata of amacrine cells of isolated mouse retina. JEDI-2P can also optically record the voltage dynamics of individual cortical neurons in awake behaving mice for more than 30 min using both resonant-scanning and ULoVE random-access microscopy. Finally, ULoVE recording of JEDI-2P can robustly detect spikes at depths exceeding 400 µm and report voltage correlations in pairs of neurons.


Subject(s)
Microscopy , Neurons , Animals , Interneurons , Mice , Microscopy/methods , Neurons/physiology , Photons , Wakefulness
20.
Front Artif Intell ; 5: 890016, 2022.
Article in English | MEDLINE | ID: mdl-35903397

ABSTRACT

Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
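The invariance idea above (the network's output should not change under modulations of non-essential input frequencies) can be sketched with a toy "network" that reads only a fixed low-frequency band. The mask here is hand-picked and radial rather than learned, and the network is a single linear template, both illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 32
# A toy "network" whose decision uses only a fixed set of low frequencies:
# its logit is a linear template applied to the low-passed input.
fy = np.fft.fftshift(np.fft.fftfreq(n))
mask = np.hypot(*np.meshgrid(fy, fy)) <= 0.15  # frequencies the network relies on

def low_pass(img):
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

template = rng.normal(size=(n, n))
logit = lambda img: np.sum(template * low_pass(img))

img = rng.normal(size=(n, n))

# Modulate (here: rescale by random factors) only the frequencies outside the
# mask. The image changes substantially, but the network's output does not,
# so the mask captures the frequencies essential to this network.
F = np.fft.fftshift(np.fft.fft2(img))
F_mod = np.where(mask, F, F * rng.normal(size=(n, n)))
img_mod = np.fft.ifft2(np.fft.ifftshift(F_mod)).real

delta = abs(logit(img) - logit(img_mod))
pixel_change = np.max(np.abs(img - img_mod))
```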
