Results 1 - 20 of 156
1.
Nat Neurosci ; 27(6): 1137-1147, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38755272

ABSTRACT

In the perception of color, wavelengths of light reflected off objects are transformed into the derived quantities of brightness, saturation and hue. Neurons responding selectively to hue have been reported in primate cortex, but it is unknown how their narrow tuning in color space is produced by upstream circuit mechanisms. We report the discovery of neurons in the Drosophila optic lobe with hue-selective properties, which enables circuit-level analysis of color processing. From our analysis of an electron microscopy volume of a whole Drosophila brain, we construct a connectomics-constrained circuit model that accounts for this hue selectivity. Our model predicts that recurrent connections in the circuit are critical for generating hue selectivity. Experiments using genetic manipulations to perturb recurrence in adult flies confirm this prediction. Our findings reveal a circuit basis for hue selectivity in color vision.
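
As a rough sketch of what a "connectomics-constrained circuit model" can mean in practice, the snippet below builds a firing-rate network whose weights are fixed by connectome synapse counts and transmitter signs and compares its response with and without recurrence. The cell-type count, synapse counts, and stimulus are illustrative assumptions, not the authors' model or data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical connectome-derived quantities (illustrative assumptions only):
    # synapse counts between n cell types and a transmitter sign per presynaptic type.
    n = 6
    counts = rng.poisson(5, size=(n, n))      # synapses from type j onto type i
    signs = rng.choice([-1.0, 1.0], size=n)   # +1 excitatory, -1 inhibitory
    W = counts * signs[None, :]
    W = 0.8 * W / np.linalg.norm(W, 2)        # rescale so the dynamics stay stable

    def steady_state(W, drive, steps=4000, dt=0.05, tau=1.0):
        """Leaky rate dynamics: tau dr/dt = -r + relu(W r + drive)."""
        r = np.zeros(W.shape[0])
        for _ in range(steps):
            r += (dt / tau) * (-r + np.maximum(0.0, W @ r + drive))
        return r

    # Feedforward photoreceptor-like drive favoring one input channel (assumed).
    drive = np.array([1.0, 0.2, 0.0, 0.0, 0.0, 0.0])

    with_recurrence = steady_state(W, drive)
    feedforward_only = steady_state(np.zeros_like(W), drive)   # recurrence removed

    print("with recurrence  :", np.round(with_recurrence, 3))
    print("feedforward only :", np.round(feedforward_only, 3))

Comparing the two conditions is the in-silico analogue of the recurrence perturbation described above; in the actual study the weights come from the reconstructed wiring diagram and the readout is hue tuning rather than a toy drive.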


Subject(s)
Drosophila , Animals , Color Perception/physiology , Visual Pathways/physiology , Neurons/physiology , Optic Lobe, Nonmammalian/physiology , Photic Stimulation/methods , Color Vision/physiology , Connectome , Nerve Net/physiology
2.
PLoS Comput Biol ; 20(4): e1011954, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38662797

ABSTRACT

Relational cognition, the ability to infer relationships that generalize to novel combinations of objects, is fundamental to human and animal intelligence. Despite this importance, it remains unclear how relational cognition is implemented in the brain due in part to a lack of hypotheses and predictions at the levels of collective neural activity and behavior. Here we discovered, analyzed, and experimentally tested neural networks (NNs) that perform transitive inference (TI), a classic relational task (if A > B and B > C, then A > C). We found NNs that (i) generalized perfectly, despite lacking overt transitive structure prior to training, (ii) generalized when the task required working memory (WM), a capacity thought to be essential to inference in the brain, (iii) emergently expressed behaviors long observed in living subjects, in addition to a novel order-dependent behavior, and (iv) expressed different task solutions yielding alternative behavioral and neural predictions. Further, in a large-scale experiment, we found that human subjects performing WM-based TI showed behavior inconsistent with a class of NNs that characteristically expressed an intuitive task solution. These findings provide neural insights into a classical relational ability, with wider implications for how the brain realizes relational cognition.


Subject(s)
Brain , Cognition , Memory, Short-Term , Neural Networks, Computer , Humans , Cognition/physiology , Brain/physiology , Memory, Short-Term/physiology , Models, Neurological , Computational Biology , Male , Adult , Female , Young Adult , Nerve Net/physiology , Task Performance and Analysis
3.
Nature ; 626(8000): 808-818, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38326612

ABSTRACT

Neuronal signals that are relevant for spatial navigation have been described in many species [1-10]. However, a circuit-level understanding of how such signals interact to guide navigational behaviour is lacking. Here we characterize a neuronal circuit in the Drosophila central complex that compares internally generated estimates of the heading and goal angles of the fly, both of which are encoded in world-centred (allocentric) coordinates, to generate a body-centred (egocentric) steering signal. Past work has suggested that the activity of EPG neurons represents the fly's moment-to-moment angular orientation, or heading angle, during navigation [2,11]. An animal's moment-to-moment heading angle, however, is not always aligned with its goal angle, that is, the allocentric direction in which it wishes to progress forward. We describe FC2 cells [12], a second set of neurons in the Drosophila brain with activity that correlates with the fly's goal angle. Focal optogenetic activation of FC2 neurons induces flies to orient along experimenter-defined directions as they walk forward. EPG and FC2 neurons connect monosynaptically to a third neuronal class, PFL3 cells [12,13]. We found that individual PFL3 cells show conjunctive, spike-rate tuning to both the heading angle and the goal angle during goal-directed navigation. Informed by the anatomy and physiology of these three cell classes, we develop a model that explains how this circuit compares allocentric heading and goal angles to build an egocentric steering signal in the PFL3 output terminals. Quantitative analyses and optogenetic manipulations of PFL3 activity support the model. Finally, using a new navigational memory task, we show that flies expressing disruptors of synaptic transmission in subsets of PFL3 cells have a reduced ability to orient along arbitrary goal directions, with an effect size in quantitative accordance with the prediction of our model. The biological circuit described here reveals how two population-level allocentric signals are compared in the brain to produce an egocentric output signal that is appropriate for motor control.
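
The comparison this circuit performs can be caricatured in a few lines: two PFL3-like populations receive the heading bump shifted in opposite directions plus the goal bump, each neuron applies a nonlinearity, and the difference between the two population sums changes sign with the angular error, vanishing when heading matches goal. The phase offset, population size, and nonlinearity below are illustrative assumptions, not the fitted model.

    import numpy as np

    n = 16                                            # neurons per population
    phases = np.linspace(0, 2 * np.pi, n, endpoint=False)
    offset = np.deg2rad(60.0)                         # assumed anatomical phase shift

    def bump(theta):
        """Sinusoidal population encoding of an angle."""
        return np.cos(phases - theta)

    def steering(heading, goal, k=3.0):
        """Normalized difference between two oppositely phase-shifted populations."""
        right = np.sum(np.exp(k * (bump(heading - offset) + bump(goal))))
        left = np.sum(np.exp(k * (bump(heading + offset) + bump(goal))))
        return (left - right) / (left + right)

    for err_deg in [-90, -30, 0, 30, 90]:
        err = np.deg2rad(err_deg)
        print(f"goal - heading = {err_deg:4d} deg -> steering {steering(0.0, err):+.2f}")

In this toy the command is zero when the animal is already heading toward its goal and grows with the angular error, qualitatively the sin(goal - heading)-like signal needed to steer back toward the goal.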


Subject(s)
Brain , Drosophila melanogaster , Goals , Head , Neural Pathways , Orientation, Spatial , Spatial Navigation , Animals , Action Potentials , Brain/cytology , Brain/physiology , Drosophila melanogaster/cytology , Drosophila melanogaster/physiology , Head/physiology , Locomotion , Neurons/metabolism , Optogenetics , Orientation, Spatial/physiology , Space Perception/physiology , Spatial Memory/physiology , Spatial Navigation/physiology , Synaptic Transmission
4.
bioRxiv ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-37662223

ABSTRACT

Humans and animals routinely infer relations between different items or events and generalize these relations to novel combinations of items. This allows them to respond appropriately to radically novel circumstances and is fundamental to advanced cognition. However, how learning systems (including the brain) can implement the necessary inductive biases has been unclear. Here we investigated transitive inference (TI), a classic relational task paradigm in which subjects must learn a relation (A > B and B > C) and generalize it to new combinations of items (A > C). Through mathematical analysis, we found that a broad range of biologically relevant learning models (e.g. gradient flow or ridge regression) perform TI successfully and recapitulate signature behavioral patterns long observed in living subjects. First, we found that models with item-wise additive representations automatically encode transitive relations. Second, for more general representations, a single scalar "conjunctivity factor" determines model behavior on TI and, further, the principle of norm minimization (a standard statistical inductive bias) enables models with fixed, partly conjunctive representations to generalize transitively. Finally, neural networks in the "rich regime," which enables representation learning and has been found to improve generalization, unexpectedly show poor generalization and anomalous behavior. We find that such networks implement a form of norm minimization (over hidden weights) that yields a local encoding mechanism lacking transitivity. Our findings show how minimal statistical learning principles give rise to a classical relational inductive bias (transitivity), explain empirically observed behaviors, and establish a formal approach to understanding the neural basis of relational abstraction.
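
The claim about item-wise additive representations and norm minimization can be checked directly with a little linear algebra. In the sketch below (an assumed 7-item hierarchy, not the paper's analysis), each pair is represented as the concatenation of two one-hot item codes, a near-minimum-norm ridge readout is fit only on adjacent premise pairs, and generalization is tested on all non-adjacent pairs.

    import numpy as np

    items = list("ABCDEFG")                 # assumed hierarchy A > B > ... > G
    n = len(items)

    def pair_rep(i, j):
        """Item-wise additive representation: concatenated one-hot codes."""
        x = np.zeros(2 * n)
        x[i] = 1.0
        x[n + j] = 1.0
        return x

    # Training set: adjacent premise pairs in both orders; label +1 if the first
    # item ranks higher than the second, -1 otherwise.
    X, y = [], []
    for i in range(n - 1):
        X.append(pair_rep(i, i + 1)); y.append(+1.0)
        X.append(pair_rep(i + 1, i)); y.append(-1.0)
    X, y = np.array(X), np.array(y)

    # Ridge regression; a tiny penalty approximates the minimum-norm solution.
    lam = 1e-6
    w = np.linalg.solve(X.T @ X + lam * np.eye(2 * n), X.T @ y)

    # Test transitive generalization on all pairs never seen during training.
    correct = total = 0
    for i in range(n):
        for j in range(n):
            if abs(i - j) <= 1:
                continue
            pred = np.sign(w @ pair_rep(i, j))
            correct += int(pred == (1.0 if i < j else -1.0))
            total += 1
    print(f"transitive generalization: {correct}/{total} novel pairs correct")

The minimum-norm readout assigns each item a rank-like score, so the decision margin also grows with how far apart the two items are in the hierarchy, one of the classic behavioral signatures referred to above.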

5.
bioRxiv ; 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38077032

ABSTRACT

A typical neuron signals to downstream cells when it is depolarized and firing sodium spikes. Some neurons, however, also fire calcium spikes when hyperpolarized. The function of such bidirectional signaling remains unclear in most circuits. Here we show how a neuron class that participates in vector computation in the fly central complex employs hyperpolarization-elicited calcium spikes to invert two-dimensional mathematical vectors. When cells switch from firing sodium to calcium spikes, this leads to a ~180° realignment between the vector encoded in the neuronal population and the fly's internal heading signal, thus inverting the vector. We show that the calcium spikes rely on the T-type calcium channel Ca-α1T, and argue, via analytical and experimental approaches, that these spikes enable vector computations in portions of angular space that would otherwise be inaccessible. These results reveal a seamless interaction between molecular, cellular and circuit properties for implementing vector math in the brain.
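
The geometric point, that shifting a sinusoidal population pattern by roughly 180° inverts the encoded two-dimensional vector, can be checked in a few lines (a worked illustration of the phasor arithmetic, not the biophysical model).

    import numpy as np

    phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)    # preferred angles

    def encode(length, angle):
        """Sinusoidal ("phasor") population code for a 2-D vector."""
        return length * np.cos(phases - angle)

    def decode(activity):
        """Read the vector back out as the population's first Fourier component."""
        z = activity @ np.exp(1j * phases) * (2 / len(phases))
        return np.abs(z), np.rad2deg(np.angle(z))

    original = encode(1.5, np.deg2rad(40))
    shifted = np.roll(original, len(phases) // 2)      # shift the pattern by 180 deg

    L, a = decode(original)
    print(f"original: length {L:.2f}, angle {a:6.1f} deg")
    L, a = decode(shifted)
    print(f"shifted : length {L:.2f}, angle {a:6.1f} deg")   # same length, opposite direction

Switching from sodium to calcium spiking in the recorded neurons plays the role of the half-ring shift here: the population keeps the vector's length but points it the opposite way.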

6.
Nat Commun ; 14(1): 5572, 2023 09 11.
Article in English | MEDLINE | ID: mdl-37696814

ABSTRACT

What are the spatial and temporal scales of brainwide neuronal activity? We used swept, confocally-aligned planar excitation (SCAPE) microscopy to image all cells in a large volume of the brain of adult Drosophila with high spatiotemporal resolution while flies engaged in a variety of spontaneous behaviors. This revealed neural representations of behavior on multiple spatial and temporal scales. The activity of most neurons correlated (or anticorrelated) with running and flailing over timescales that ranged from seconds to a minute. Grooming elicited a weaker global response. Significant residual activity not directly correlated with behavior was high dimensional and reflected the activity of small clusters of spatially organized neurons that may correspond to genetically defined cell types. These clusters participate in the global dynamics, indicating that neural activity reflects a combination of local and broadly distributed components. This suggests that microcircuits with highly specified functions are provided with knowledge of the larger context in which they operate.


Subject(s)
Brain , Neurons , Animals , Drosophila , Grooming , Knowledge
7.
Phys Rev Lett ; 131(11): 118401, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37774280

ABSTRACT

Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks-in particular, that they can generate chaotic activity-however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
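
For orientation, the classic model referred to here is the rate network dx_i/dt = -x_i + Σ_j J_ij φ(x_j) with i.i.d. couplings of variance g²/N. The paper computes cross covariances analytically through the two-site cavity method; the sketch below only estimates the equal-time covariance matrix from a direct simulation and summarizes its effective dimension with a participation ratio (parameters are illustrative).

    import numpy as np

    rng = np.random.default_rng(1)
    N, g, dt, T_burn, T = 400, 2.0, 0.05, 2000, 8000

    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))    # i.i.d. couplings, variance g^2/N
    x = rng.normal(size=N)

    records = []
    for t in range(T_burn + T):
        x += dt * (-x + J @ np.tanh(x))                 # dx/dt = -x + J phi(x), phi = tanh
        if t >= T_burn:
            records.append(x.copy())
    X = np.array(records)                               # time x neurons

    # Equal-time cross-covariance matrix and its eigenvalue spectrum.
    C = np.cov(X.T)
    evals = np.linalg.eigvalsh(C)

    # Effective dimension of the chaotic activity (participation ratio).
    pr = evals.sum() ** 2 / (evals ** 2).sum()
    print(f"participation ratio: {pr:.1f} of N = {N} ({100 * pr / N:.1f}%)")

The participation ratio scales with N yet stays a modest fraction of it, the "extensive but fractionally low" dimension described above; the cavity calculation gives this quantity, and the full cross-covariance functions, without simulation.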

8.
bioRxiv ; 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37502934

ABSTRACT

A universal principle of sensory perception is the progressive transformation of sensory information from broad, non-specific signals to stimulus-selective signals that form the basis of perception. To perceive color, our brains must transform the wavelengths of light reflected off objects into the derived quantities of brightness, saturation and hue. Neurons responding selectively to hue have been reported in primate cortex, but it is unknown how their narrow tuning in color space is produced by upstream circuit mechanisms. To enable circuit-level analysis of color perception, we here report the discovery of neurons in the Drosophila optic lobe with hue-selective properties. Using the connectivity graph of the fly brain, we construct a connectomics-constrained circuit model that accounts for this hue selectivity. Unexpectedly, our model predicts that recurrent connections in the circuit are critical for hue selectivity. Experiments using genetic manipulations to perturb recurrence in adult flies confirm this prediction. Our findings reveal the circuit basis for hue selectivity in color vision.

9.
Curr Biol ; 33(13): 2657-2667.e4, 2023 07 10.
Article in English | MEDLINE | ID: mdl-37311457

ABSTRACT

In addition to the action potentials used for axonal signaling, many neurons generate dendritic "spikes" associated with synaptic plasticity. However, in order to control both plasticity and signaling, synaptic inputs must be able to differentially modulate the firing of these two spike types. Here, we investigate this issue in the electrosensory lobe (ELL) of weakly electric mormyrid fish, where separate control over axonal and dendritic spikes is essential for the transmission of learned predictive signals from inhibitory interneurons to the output stage of the circuit. Through a combination of experimental and modeling studies, we uncover a novel mechanism by which sensory input selectively modulates the rate of dendritic spiking by adjusting the amplitude of backpropagating axonal action potentials. Interestingly, this mechanism does not require spatially segregated synaptic inputs or dendritic compartmentalization but relies instead on an electrotonically distant spike initiation site in the axon-a common biophysical feature of neurons.


Subject(s)
Electric Fish , Neurons , Animals , Neurons/physiology , Action Potentials/physiology , Electric Fish/physiology , Axons , Cerebellum , Dendrites/physiology , Neuronal Plasticity/physiology
10.
Elife ; 12, 2023 03 16.
Article in English | MEDLINE | ID: mdl-36928104

ABSTRACT

The predictive nature of the hippocampus is thought to be useful for memory-guided cognitive behaviors. Inspired by the reinforcement learning literature, this notion has been formalized as a predictive map called the successor representation (SR). The SR captures a number of observations about hippocampal activity. However, the algorithm does not provide a neural mechanism for how such representations arise. Here, we show that the dynamics of a recurrent neural network naturally calculate the SR when the synaptic weights match the transition probability matrix. Interestingly, the predictive horizon can be flexibly modulated simply by changing the network gain. We derive simple, biologically plausible learning rules to learn the SR in a recurrent network. We test our model with realistic inputs and match hippocampal data recorded during random foraging. Taken together, our results suggest that the SR is more accessible in neural circuits than previously thought and can support a broad range of cognitive functions.
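
The central identity is easy to verify numerically: the successor representation M = Σ_k γ^k T^k equals (I - γT)^(-1), and a linear recurrent network whose weights encode the transition probabilities, run to steady state with gain γ, reproduces its rows exactly. The environment below is an assumed ring-shaped random walk, not the paper's learning rules or place-cell inputs.

    import numpy as np

    # Toy environment: a random walk on a ring of 8 states (assumed, for illustration).
    n, gamma = 8, 0.9
    T = np.zeros((n, n))
    for s in range(n):
        T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

    # Successor representation: discounted sum of expected future state occupancies.
    M_direct = np.linalg.inv(np.eye(n) - gamma * T)

    def steady_state(s, steps=500):
        """Linear recurrent network r <- gamma * T^T r + input, one-hot input at state s."""
        inp, r = np.eye(n)[s], np.zeros(n)
        for _ in range(steps):
            r = gamma * T.T @ r + inp
        return r

    M_network = np.array([steady_state(s) for s in range(n)])
    print("max |M_network - M_direct| =", np.abs(M_network - M_direct).max())

The network gain plays the role of γ and therefore sets the predictive horizon, the flexibility highlighted in the abstract; the study additionally derives biologically plausible plasticity rules that learn these weights from experience.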


Memories are an important part of how we think, understand the world around us, and plan out future actions. In the brain, memories are thought to be stored in a region called the hippocampus. When memories are formed, neurons store events that occur around the same time together. This might explain why, in the brains of animals, the activity associated with retrieving memories is often not just a snapshot of what happened at a specific moment; it can also include information about what the animal might experience next. This can have a clear utility if animals use memories to predict what they might experience next and plan out future actions. Mathematically, this notion of predictiveness can be summarized by an algorithm known as the successor representation. This algorithm describes what the activity of neurons in the hippocampus looks like when retrieving memories and making predictions based on them. However, even though the successor representation can computationally reproduce the activity seen in the hippocampus when it is making predictions, it is unclear what biological mechanisms underpin this computation in the brain. Fang et al. approached this problem by trying to build a model that could generate the same activity patterns computed by the successor representation using only biological mechanisms known to exist in the hippocampus. First, they used computational methods to design a network of neurons that had the biological properties of neural networks in the hippocampus. They then used the network to simulate neural activity. The results show that the activity of the network they designed was able to exactly match the successor representation. Additionally, the data resulting from the simulated activity in the network fit experimental observations of hippocampal activity in tufted titmice. One advantage of the network designed by Fang et al. is that it can generate predictions in flexible ways. That is, it can make both short- and long-term predictions from what an individual is experiencing at the moment. This flexibility means that the network can be used to simulate how the hippocampus learns in a variety of cognitive tasks. Additionally, the network is robust to different conditions. Given that the brain has to be able to store memories in many different situations, this is a promising indication that this network may be a reasonable model of how the brain learns. The results of Fang et al. lay the groundwork for connecting biological mechanisms in the hippocampus at the cellular level to cognitive effects, an essential step to understanding the hippocampus, as well as its role in health and disease. For instance, their network may provide a concrete approach to studying how disruptions to the ways neurons make and break connections can impair memory formation. More generally, better models of the biological mechanisms involved in making computations in the hippocampus can help scientists better understand and test out theories about how memories are formed and stored in the brain.


Subject(s)
Learning , Neural Networks, Computer , Hippocampus , Reinforcement, Psychology , Algorithms
11.
Neuron ; 111(5): 631-649.e10, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36630961

ABSTRACT

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
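
The claim that reliable, continuously valued factors can coexist with seemingly stochastic spiking can be illustrated without the paper's training framework: drive many Poisson units from a small set of smooth latent factors, then recover the factors from the spikes by smoothing and linear projection. Everything below (factor shapes, unit count, rates) is an assumed toy setup, not the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two smooth latent factors driving 200 Poisson-spiking units (toy setup).
    dt, T = 0.001, 2000                                  # 1 ms bins, 2 s
    t = np.arange(T) * dt
    factors = np.stack([np.sin(2 * np.pi * 2 * t),       # 2 Hz factor
                        np.cos(2 * np.pi * 5 * t)])      # 5 Hz factor
    loadings = rng.normal(size=(200, 2))
    rates = np.exp(2.0 + 0.5 * loadings @ factors)       # Hz, always positive
    spikes = rng.poisson(rates * dt)                     # variable spike counts per bin

    # Recover the factors: smooth each spike train, then fit a linear readout.
    kern = np.exp(-0.5 * (np.arange(-50, 51) / 15.0) ** 2)
    kern /= kern.sum()
    smoothed = np.array([np.convolve(s, kern, mode="same") for s in spikes])
    coef, *_ = np.linalg.lstsq(smoothed.T, factors.T, rcond=None)
    recovered = (smoothed.T @ coef).T

    corr = [np.corrcoef(factors[k], recovered[k])[0, 1] for k in range(2)]
    print("factor reconstruction correlations:", np.round(corr, 3))

With a few hundred units the recovered factors track the true ones closely even though individual spike trains are sparse and irregular; the framework above builds this relationship between spikes and factors into trainable network models rather than assuming it post hoc.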


Subject(s)
Neural Networks, Computer , Neurons , Action Potentials/physiology , Neurons/physiology , Nerve Net/physiology , Models, Neurological
12.
PLoS Comput Biol ; 18(12): e1010759, 2022 12.
Article in English | MEDLINE | ID: mdl-36516226

ABSTRACT

Feedforward network models performing classification tasks rely on highly convergent output units that collect the information passed on by preceding layers. Although convergent output-unit-like neurons may exist in some biological neural circuits, notably the cerebellar cortex, neocortical circuits do not exhibit any obvious candidates for this role; instead, they are highly recurrent. We investigate whether a sparsely connected recurrent neural network (RNN) can perform classification in a distributed manner without ever bringing all of the relevant information to a single convergence site. Our model is based on a sparse RNN that performs classification dynamically. Specifically, the interconnections of the RNN are trained to resonantly amplify the magnitude of responses to some external inputs but not others. The amplified and non-amplified responses then form the basis for binary classification. Furthermore, the network acts as an evidence accumulator and maintains its decision even after the input is turned off. Despite highly sparse connectivity, learned recurrent connections allow input information to flow to every neuron of the RNN, providing the basis for distributed computation. In this arrangement, the minimum number of synapses per neuron required to reach maximum memory capacity scales only logarithmically with network size. The model is robust to various types of noise, works with different activation and loss functions and with both backpropagation- and Hebbian-based learning rules. The RNN can also be constructed with a split excitation-inhibition architecture with little reduction in performance.
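
A toy illustration of the amplification idea (not the trained sparse network described above): give a recurrent weight matrix a single mode close to the instability line, so inputs that overlap this mode are strongly amplified while orthogonal inputs are not, and use the response magnitude as the classification signal.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # One near-resonant direction (a hand-built stand-in for the trained connectivity).
    u = rng.normal(size=n)
    u /= np.linalg.norm(u)
    W = 0.95 * np.outer(u, u)                  # eigenvalue 0.95 along u, 0 elsewhere

    def response_norm(pattern, steps=100):
        """Drive the linear RNN x <- W x + pattern and report the final magnitude."""
        x = np.zeros(n)
        for _ in range(steps):
            x = W @ x + pattern
        return np.linalg.norm(x)

    amplified = u                                          # input overlapping the mode
    other = rng.normal(size=n)
    other -= (other @ u) * u                               # input orthogonal to the mode
    other /= np.linalg.norm(other)

    print("resonant input    :", round(response_norm(amplified), 1))   # ~1/(1-0.95) = 20
    print("non-resonant input:", round(response_norm(other), 1))       # ~1

The networks in the study reach a comparable separation with sparse, trained connectivity, maintain the decision after the input is removed, and do so without any single readout neuron collecting all of the evidence.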


Subject(s)
Learning , Neural Networks, Computer , Learning/physiology , Neurons/physiology , Synapses/physiology
13.
PLoS Comput Biol ; 18(12): e1010590, 2022 12.
Article in English | MEDLINE | ID: mdl-36469504

ABSTRACT

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.


Subject(s)
Models, Neurological , Nerve Net , Action Potentials/physiology , Nerve Net/physiology , Neurons/physiology , Learning
14.
Nat Neurosci ; 25(11): 1492-1504, 2022 11.
Article in English | MEDLINE | ID: mdl-36216998

ABSTRACT

Voluntary movement requires communication from cortex to the spinal cord, where a dedicated pool of motor units (MUs) activates each muscle. The canonical description of MU function rests upon two foundational tenets. First, cortex cannot control MUs independently but supplies each pool with a common drive. Second, MUs are recruited in a rigid fashion that largely accords with Henneman's size principle. Although this paradigm has considerable empirical support, a direct test requires simultaneous observations of many MUs across diverse force profiles. In this study, we developed an isometric task that allowed stable MU recordings, in a rhesus macaque, even during rapidly changing forces. Patterns of MU activity were surprisingly behavior-dependent and could be accurately described only by assuming multiple drives. Consistent with flexible descending control, microstimulation of neighboring cortical sites recruited different MUs. Furthermore, the cortical population response displayed sufficient degrees of freedom to potentially exert fine-grained control. Thus, MU activity is flexibly controlled to meet task demands, and cortex may contribute to this ability.


Subject(s)
Motor Neurons , Spinal Cord , Animals , Motor Neurons/physiology , Macaca mulatta , Muscle, Skeletal/physiology , Electromyography , Muscle Contraction/physiology
16.
PLoS Comput Biol ; 18(2): e1008836, 2022 02.
Article in English | MEDLINE | ID: mdl-35139071

ABSTRACT

Cortical circuits generate excitatory currents that must be cancelled by strong inhibition to assure stability. The resulting excitatory-inhibitory (E-I) balance can generate spontaneous irregular activity but, in standard balanced E-I models, this requires that an extremely strong feedforward bias current be included along with the recurrent excitation and inhibition. The absence of experimental evidence for such large bias currents inspired us to examine an alternative regime that exhibits asynchronous activity without requiring unrealistically large feedforward input. In these networks, irregular spontaneous activity is supported by a continually changing sparse set of neurons. To support this activity, synaptic strengths must be drawn from high-variance distributions. Unlike standard balanced networks, these sparse balance networks exhibit robust nonlinear responses to uniform inputs and non-Gaussian input statistics. Interestingly, the speed, not the size, of synaptic fluctuations dictates the degree of sparsity in the model. In addition to simulations, we provide a mean-field analysis to illustrate the properties of these networks.


Subject(s)
Cerebral Cortex , Models, Neurological , Nerve Net , Neurons , Synaptic Potentials/physiology , Animals , Cerebral Cortex/cytology , Cerebral Cortex/physiology , Computational Biology , Nerve Net/cytology , Nerve Net/physiology , Neurons/cytology , Neurons/physiology
17.
Neuron ; 110(3): 544-557.e8, 2022 02 02.
Article in English | MEDLINE | ID: mdl-34861149

ABSTRACT

Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task in which a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning and demonstrates an effective application of machine learning for neuroscience discovery.
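
The anti-Hebbian component can be sketched in isolation (the meta-learned addressing matrix is not reproduced here): each stored pattern depresses the weights along its own outer product, so a quadratic readout becomes more negative for previously seen patterns than for novel ones.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_seen, n_novel, eta = 200, 50, 50, 0.01

    seen = rng.choice([-1.0, 1.0], size=(n_seen, n))       # previously presented patterns
    novel = rng.choice([-1.0, 1.0], size=(n_novel, n))     # never-seen patterns

    W = np.zeros((n, n))
    for x in seen:
        W -= eta * np.outer(x, x)                          # anti-Hebbian update

    def familiarity(x):
        """Quadratic readout; more negative for patterns that shaped the weights."""
        return x @ W @ x

    f_seen = np.array([familiarity(x) for x in seen])
    f_novel = np.array([familiarity(x) for x in novel])
    threshold = (f_seen.mean() + f_novel.mean()) / 2
    accuracy = ((f_seen < threshold).sum() + (f_novel >= threshold).sum()) / (n_seen + n_novel)

    print(f"seen  : {f_seen.mean():7.1f} +/- {f_seen.std():.1f}")
    print(f"novel : {f_novel.mean():7.1f} +/- {f_novel.std():.1f}")
    print(f"familiar/novel discrimination with a simple threshold: {accuracy:.2f}")

The model in the study augments this ingredient with a meta-learned addressing matrix that selects an index into the synaptic memory for storage and retrieval, which is what lets it operate continuously and generalize to intervals it was not trained on.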


Subject(s)
Neuronal Plasticity , Recognition, Psychology , Longitudinal Studies , Machine Learning , Neuronal Plasticity/physiology , Neurons/physiology , Recognition, Psychology/physiology
18.
Nature ; 601(7891): 92-97, 2022 01.
Article in English | MEDLINE | ID: mdl-34912112

ABSTRACT

Many behavioural tasks require the manipulation of mathematical vectors, but, outside of computational models [1-7], it is not known how brains perform vector operations. Here we show how the Drosophila central complex, a region implicated in goal-directed navigation [7-10], performs vector arithmetic. First, we describe a neural signal in the fan-shaped body that explicitly tracks the allocentric travelling angle of a fly, that is, the travelling angle in reference to external cues. Past work has identified neurons in Drosophila [8,11-13] and mammals [14] that track the heading angle of an animal referenced to external cues (for example, head direction cells), but this new signal illuminates how the sense of space is properly updated when travelling and heading angles differ (for example, when walking sideways). We then characterize a neuronal circuit that performs an egocentric-to-allocentric (that is, body-centred to world-centred) coordinate transformation and vector addition to compute the allocentric travelling direction. This circuit operates by mapping two-dimensional vectors onto sinusoidal patterns of activity across distinct neuronal populations, with the amplitude of the sinusoid representing the length of the vector and its phase representing the angle of the vector. The principles of this circuit may generalize to other brains and to domains beyond navigation where vector operations or reference-frame transformations are required.
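
The encoding scheme in the last sentences has a compact core: a two-dimensional vector of length r and angle θ is written across neurons with preferred phases φ_i as a_i = r cos(φ_i - θ), and adding two such activity patterns neuron by neuron gives the pattern of the summed vector. A small numerical check (illustrative values, not recorded data):

    import numpy as np

    phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)     # preferred phases

    def encode(length, angle):
        """Vector -> sinusoidal activity: amplitude = length, phase = angle."""
        return length * np.cos(phases - angle)

    def decode(activity):
        """Activity -> vector, via the population's first Fourier component."""
        z = activity @ np.exp(1j * phases) * (2 / len(phases))
        return np.abs(z), np.angle(z)

    v1 = (1.0, np.deg2rad(30))        # e.g. a heading-related vector (assumed values)
    v2 = (0.6, np.deg2rad(120))       # e.g. a body-frame translation vector

    length, angle = decode(encode(*v1) + encode(*v2))   # neuron-by-neuron addition
    direct = v1[0] * np.exp(1j * v1[1]) + v2[0] * np.exp(1j * v2[1])

    print(f"population sum : {length:.3f}, {np.rad2deg(angle):.1f} deg")
    print(f"direct sum     : {np.abs(direct):.3f}, {np.rad2deg(np.angle(direct)):.1f} deg")

In this scheme the egocentric-to-allocentric rotation amounts to shifting the phase of one population's sinusoid before the patterns are added, which rotates the vector that population encodes.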


Subject(s)
Brain/physiology , Cues , Drosophila melanogaster/physiology , Mathematics , Models, Neurological , Spatial Memory/physiology , Spatial Navigation/physiology , Animals , Brain/cytology , Drosophila melanogaster/cytology , Female , Flight, Animal , Goals , Head/physiology , Neurons/physiology , Space Perception/physiology , Walking
19.
Curr Opin Neurobiol ; 70: 137-144, 2021 10.
Article in English | MEDLINE | ID: mdl-34801787

ABSTRACT

Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement, and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures, and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, population activities, and behavior.


Subject(s)
Machine Learning , Neural Networks, Computer , Brain/physiology , Cognition , Neurons/physiology
20.
Curr Biol ; 31(23): 5249-5260.e5, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34670114

ABSTRACT

Sensory systems flexibly adapt their processing properties across a wide range of environmental and behavioral conditions. Such variable processing complicates attempts to extract a mechanistic understanding of sensory computations. This is evident in the highly constrained, canonical Drosophila motion detection circuit, where the core computation underlying direction selectivity is still debated despite extensive studies. Here we measured the filtering properties of neural inputs to the OFF motion-detecting T5 cell in Drosophila. We report state- and stimulus-dependent changes in the shape of these signals, which become more biphasic under specific conditions. Summing these inputs within the framework of a connectome-constrained model of the circuit demonstrates that these shapes are sufficient to explain T5 responses to various motion stimuli. Thus, our stimulus- and state-dependent measurements reconcile motion computation with the anatomy of the circuit. These findings provide a clear example of how a basic circuit supports flexible sensory computation.


Subject(s)
Motion Perception , Animals , Drosophila/physiology , Motion , Motion Perception/physiology , Visual Pathways/physiology