Results 1 - 18 of 18
1.
bioRxiv ; 2023 May 12.
Article in English | MEDLINE | ID: mdl-37214812

ABSTRACT

When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column's receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN that, however, lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency.
We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.
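The SSN mechanism described above can be summarized compactly. The following schematic equations are a standard formulation of the model; the notation here is an assumption for illustration, not quoted from the paper:

```latex
\tau_i \frac{dr_i}{dt} = -r_i
  + k \Big\lfloor \textstyle\sum_j W_{ij} r_j + h_i(c) \Big\rfloor_+^{\,n},
  \qquad n > 1 .
```

Linearizing about the contrast-dependent fixed point with net inputs $u_i(c)$ gives the Jacobian $J(c) = T^{-1}\big({-\mathbb{1}} + \Phi(c)\,W\big)$, where $T = \mathrm{diag}(\tau_i)$ and the gain matrix is $\Phi(c) = \mathrm{diag}\big(n k\, u_i(c)^{\,n-1}\big)$. A complex-conjugate eigenvalue pair of $J(c)$ produces a damped resonance near $f_\gamma \approx \mathrm{Im}\,\lambda(c)/2\pi$; because the power-law transfer function does not saturate, the gains in $\Phi(c)$ grow with the driven rates, which is the sense in which the resonance frequency can rise with contrast.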

2.
Neuron ; 109(21): 3373-3391, 2021 11 03.
Article in English | MEDLINE | ID: mdl-34464597

ABSTRACT

Many studies have shown that the excitation and inhibition received by cortical neurons remain roughly balanced across many conditions. A key question for understanding the dynamical regime of cortex is the nature of this balancing. Theorists have shown that network dynamics can yield systematic cancellation of most of a neuron's excitatory input by inhibition. We review a wide range of evidence pointing to this cancellation occurring in a regime in which the balance is loose, meaning that the net input remaining after cancellation of excitation and inhibition is comparable in size with the factors that cancel, rather than tight, meaning that the net input is very small relative to the canceling factors. This choice of regime has important implications for cortical functional responses, as we describe: loose balance, but not tight balance, can yield many nonlinear population behaviors seen in sensory cortical neurons, allow the presence of correlated variability, and yield decrease of that variability with increasing external stimulus drive as observed across multiple cortical areas.


Subject(s)
Cerebral Cortex , Models, Neurological , Cerebral Cortex/physiology , Neurons/physiology
3.
Elife ; 10, 2021 05 04.
Article in English | MEDLINE | ID: mdl-33942713

ABSTRACT

For many organisms, searching for relevant targets such as food or mates entails active, strategic sampling of the environment. Finding odorous targets may be the most ancient search problem that motile organisms evolved to solve. While chemosensory navigation has been well characterized in microorganisms and invertebrates, spatial olfaction in vertebrates is poorly understood. We have established an olfactory search assay in which freely moving mice navigate noisy concentration gradients of airborne odor. Mice solve this task using concentration gradient cues and do not require stereo olfaction for performance. During task performance, respiration and nose movement are synchronized with tens of milliseconds precision. This synchrony is present during trials and largely absent during inter-trial intervals, suggesting that sniff-synchronized nose movement is a strategic behavioral state rather than simply a constant accompaniment to fast breathing. To reveal the spatiotemporal structure of these active sensing movements, we used machine learning methods to parse motion trajectories into elementary movement motifs. Motifs fall into two clusters, which correspond to investigation and approach states. Investigation motifs lock precisely to sniffing, such that the individual motifs preferentially occur at specific phases of the sniff cycle. The allocentric structure of investigation and approach indicates an advantage to sampling both sides of the sharpest part of the odor gradient, consistent with a serial-sniff strategy for gradient sensing. This work clarifies sensorimotor strategies for mouse olfactory search and guides ongoing work into the underlying neural mechanisms.


Subject(s)
Movement , Odorants , Smell/physiology , Animals , Cues , Female , Food , Male , Mice , Mice, Inbred C57BL , Respiration , Task Performance and Analysis
4.
J Neurosci ; 40(18): 3564-3575, 2020 04 29.
Article in English | MEDLINE | ID: mdl-32220950

ABSTRACT

Sensory systems integrate multiple stimulus features to generate coherent percepts. Spectral surround suppression, the phenomenon by which sound-evoked responses of auditory neurons are suppressed by stimuli outside their receptive field, is an example of this integration taking place in the auditory system. While this form of global integration is commonly observed in auditory cortical neurons, and potentially used by the nervous system to separate signals from noise, the mechanisms that underlie this suppression of activity are not well understood. We evaluated the contributions to spectral surround suppression of the two most common inhibitory cell types in the cortex, parvalbumin-expressing (PV+) and somatostatin-expressing (SOM+) interneurons, in mice of both sexes. We found that inactivating SOM+ cells, but not PV+ cells, significantly reduces sustained spectral surround suppression in excitatory cells, indicating a dominant causal role for SOM+ cells in the integration of information across multiple frequencies. The similarity of these results to those from other sensory cortices provides evidence of common mechanisms across the cerebral cortex for generating global percepts from separate features.

SIGNIFICANCE STATEMENT

To generate coherent percepts, sensory systems integrate simultaneously occurring features of a stimulus, yet the mechanisms by which this integration occurs are not fully understood. Our results show that neurochemically distinct neuronal subtypes in the primary auditory cortex have different contributions to the integration of different frequency components of an acoustic stimulus. Together with findings from other sensory cortices, our results provide evidence of a common mechanism for cortical computations used for global integration of stimulus features.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/metabolism , Interneurons/metabolism , Somatostatin/biosynthesis , Action Potentials/physiology , Animals , Auditory Cortex/cytology , Electrodes, Implanted , Female , Gene Expression , Male , Mice , Mice, Transgenic , Somatostatin/genetics
5.
Nat Commun ; 10(1): 1466, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30931937

ABSTRACT

Behavior deviating from our normative expectations often appears irrational. For example, even though behavior following the so-called matching law can maximize reward in a stationary foraging task, actual behavior commonly deviates from matching. Such behavioral deviations are interpreted as a failure of the subject; here we instead suggest that they reflect an adaptive strategy, suited to uncertain, non-stationary environments. To test this idea, we analyzed the behavior of primates performing a dynamic foraging task. In such a non-stationary environment, learning on both fast and slow timescales is beneficial: fast learning lets the animal react to sudden changes, at the price of large fluctuations (variance) in its estimates of task-relevant variables, whereas slow learning reduces those fluctuations but introduces a bias that causes systematic behavioral deviations. Our behavioral analysis shows that the animals solved this bias-variance tradeoff by combining learning on both fast and slow timescales, suggesting that learning on multiple timescales can be a biologically plausible mechanism for optimizing decisions under uncertainty.
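The bias-variance tradeoff between learning timescales can be illustrated with a fully deterministic toy sketch (all numbers here are illustrative assumptions, not the study's model): two exponential-moving-average learners track a reward probability that steps from 0.2 to 0.8, with a zig-zag observation "noise".

```python
import numpy as np

T = 1000
p = np.where(np.arange(T) < 500, 0.2, 0.8)                   # true reward prob, steps at t=500
obs = p + 0.3 * np.where(np.arange(T) % 2 == 0, 1.0, -1.0)   # deterministic alternating noise

def ema(alpha):
    """Exponential moving average: one learning timescale ~ 1/alpha."""
    est, out = obs[0], np.empty(T)
    for t in range(T):
        est += alpha * (obs[t] - est)
        out[t] = est
    return out

fast, slow = ema(0.30), ema(0.01)   # fast: low bias, high variance; slow: the reverse
combo = 0.5 * fast + 0.5 * slow     # crude combination of the two timescales
```

After the step, the fast learner re-converges within tens of trials while the slow learner lags far behind; in the stationary stretch, the slow learner's estimate fluctuates far less. Any combination of the two inherits some of each advantage, which is the tradeoff the abstract describes.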


Subject(s)
Appetitive Behavior/physiology , Learning/physiology , Reward , Uncertainty , Animals , Behavior, Animal , Macaca mulatta , Male , Models, Theoretical , Time Factors
6.
PLoS Comput Biol ; 15(4): e1006816, 2019 04.
Article in English | MEDLINE | ID: mdl-31002660

ABSTRACT

Tuning curves characterizing the response selectivities of biological neurons can exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or partially random connectivity also give rise to diverse tuning curves. Empirical tuning curve distributions can thus be utilized to make model-based inferences about the statistics of single-cell parameters and network connectivity. However, a general framework for such an inference or fitting procedure is lacking. We address this problem by proposing to view mechanistic network models as implicit generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable. Recent advances in machine learning provide ways for fitting implicit generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GANs) provide one such framework which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic circuit models in theoretical neuroscience to datasets of tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, such as the biophysical parameters of, or synaptic connections between, particular recorded cells. Instead, it directly learns generalizable model parameters characterizing the network's statistical structure such as the statistics of strength and spatial range of connections between different cell types. Another strength of this approach is that it fits the joint high-dimensional distribution of tuning curves, instead of matching a few summary statistics picked a priori by the user, resulting in a more accurate inference of circuit properties. 
More generally, this framework opens the door to direct model-based inference of circuit structure from data beyond single-cell tuning curves, such as simultaneous population recordings.


Subject(s)
Models, Neurological , Models, Statistical , Nerve Net/physiology , Neurons/physiology , Algorithms , Animals , Computational Biology/methods , Databases, Factual , Machine Learning , Neural Networks, Computer
7.
Neuron ; 98(4): 846-860.e5, 2018 05 16.
Article in English | MEDLINE | ID: mdl-29772203

ABSTRACT

Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states ("attractors") or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic "stabilized supralinear network"), best explains these modulations. Given the supralinear input/output functions of cortical neurons, increased stimulus drive strengthens effective network connectivity. This shifts the balance from interactions that amplify variability to suppressive inhibitory feedback, quenching correlated variability around more strongly driven steady states. Comparing to previously published and original data analyses, we show that this mechanism, unlike previous proposals, uniquely accounts for the spatial patterns and fast temporal dynamics of variability suppression. Specifying the cortical operating regime is key to understanding the computations underlying perception.


Subject(s)
Neurons/physiology , Visual Cortex/physiology , Animals , Macaca , Neural Inhibition/physiology , Neural Networks, Computer , Nonlinear Dynamics , Occipital Lobe/cytology , Occipital Lobe/physiology , Visual Cortex/cytology
8.
eNeuro ; 5(6), 2018.
Article in English | MEDLINE | ID: mdl-30627641

ABSTRACT

Sampling regulates stimulus intensity and temporal dynamics at the sense organ. Despite variations in sampling behavior, animals must make veridical perceptual judgments about external stimuli. In olfaction, odor sampling varies with respiration, which influences neural responses at the olfactory periphery. Nevertheless, rats were able to perform fine odor intensity judgments despite variations in sniff kinetics. To identify the features of neural activity supporting stable intensity perception, in awake mice we measured responses of mitral/tufted (MT) cells to different odors and concentrations across a range of sniff frequencies. Amplitude and latency of the MT cells' responses vary with sniff duration. A fluid dynamics (FD) model based on odor concentration kinetics in the intranasal cavity can account for this variability. Eliminating sniff waveform dependence of MT cell responses using the FD model allows for significantly better decoding of concentration. This suggests potential schemes for sniff waveform invariant odor concentration coding.


Subject(s)
Action Potentials/physiology , Conditioning, Psychological/physiology , Odorants , Sensory Receptor Cells/physiology , Smell/physiology , Animals , Body Weight/physiology , Drinking/physiology , Electrophysiology , Male , Mice , Mice, Inbred C57BL , Mice, Transgenic , Models, Neurological , Olfactory Bulb/cytology , Olfactory Pathways/physiology , Rats , Rats, Long-Evans , Reaction Time/physiology , Reward
9.
Article in English | MEDLINE | ID: mdl-25679669

ABSTRACT

Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N×N matrices of the form A=M+LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N→∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L^(-1)(z1 - M)R^(-1) (for z∈Ω) that vanish as N→∞. 
When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ pseudospectrum of M.
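The closing point, that for non-normal M the perturbed spectrum is governed by the pseudospectrum rather than by the eigenvalues, is easy to see numerically. A toy illustration (a nilpotent shift matrix; not the paper's general formulas): all eigenvalues of M are exactly zero, yet a perturbation of norm ~1e-6 moves them onto a circle of order-one radius, roughly sigma^(1/N).

```python
import numpy as np

N = 50
M = np.diag(np.ones(N - 1), 1)       # nilpotent shift matrix: eigenvalues all exactly 0
rng = np.random.default_rng(0)
J = rng.standard_normal((N, N)) / np.sqrt(N)   # iid noise, scaled so ||J|| = O(1)
sigma = 1e-6

ev_clean = np.linalg.eigvals(M)
ev_noisy = np.linalg.eigvals(M + sigma * J)

# despite the tiny perturbation, the spectrum spreads to radius ~ sigma**(1/N) ~ 0.76
radius = np.abs(ev_noisy).max()
```

This is the qualitative content of the pseudospectrum correspondence: a region where the clean eigenvalues are absent can still lie inside the support of the perturbed spectrum whenever some singular value of (z1 - M) is small there.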


Subject(s)
Models, Theoretical , Stochastic Processes
10.
Neural Comput ; 25(8): 1994-2037, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23663149

ABSTRACT

We study a rate-model neural network composed of excitatory and inhibitory neurons in which neuronal input-output functions are power laws with a power greater than 1, as observed in primary visual cortex. This supralinear input-output function leads to supralinear summation of network responses to multiple inputs for weak inputs. We show that for stronger inputs, which would drive the excitatory subnetwork to instability, the network will dynamically stabilize provided feedback inhibition is sufficiently strong. For a wide range of network and stimulus parameters, this dynamic stabilization yields a transition from supralinear to sublinear summation of network responses to multiple inputs. We compare this to the dynamic stabilization in the balanced network, which yields only linear behavior. We more exhaustively analyze the two-dimensional case of one excitatory and one inhibitory population. We show that in this case, dynamic stabilization will occur whenever the determinant of the weight matrix is positive and the inhibitory time constant is sufficiently small, and analyze the conditions for supersaturation, or decrease of firing rates with increasing stimulus contrast (which represents increasing input firing rates). In work to be presented elsewhere, we have found that this transition from supralinear to sublinear summation can explain a wide variety of nonlinearities in cerebral cortical processing.
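A minimal numerical sketch of the two-population model's signature behavior, using parameter values of the kind used in later SSN work (the specific numbers are assumptions for illustration): summation is supralinear at weak input and becomes comparatively sublinear at strong input, once feedback inhibition stabilizes the network.

```python
import numpy as np

k, n = 0.04, 2.0                      # supralinear power-law gain, n > 1
W = np.array([[1.25, -0.65],          # [E<-E, E<-I]
              [1.20, -0.50]])         # [I<-E, I<-I]
tau = np.array([0.020, 0.010])        # time constants (s); inhibition is faster

def steady_rates(c, T=0.5, dt=1e-4):
    """Forward-Euler integration of the rate equations to the driven fixed point."""
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        u = W @ r + c                 # net input to each population
        r = r + dt * (-r + k * np.maximum(u, 0.0) ** n) / tau
    return r

rE = {c: steady_rates(c)[0] for c in (1.0, 2.0, 25.0, 50.0)}
weak_ratio = rE[2.0] / rE[1.0]        # ~4: near-feedforward, r ~ k c^2 (supralinear)
strong_ratio = rE[50.0] / rE[25.0]    # markedly smaller: inhibition-stabilized regime
```

Doubling a weak input roughly quadruples the excitatory rate, while doubling a strong input yields a much smaller relative increase, which is the supralinear-to-sublinear transition the abstract describes.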


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Visual Cortex/cytology , Feedback, Physiological , Humans , Neural Inhibition/physiology , Nonlinear Dynamics , Visual Cortex/physiology
11.
J Comput Neurosci ; 33(1): 97-121, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22203465

ABSTRACT

Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.
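The role of the shared-noise term can be sketched in isolation (stimulus and spike-history filters omitted for brevity; all numbers are illustrative assumptions): two conditionally Poisson cells driven by a common Gaussian input develop correlated spike counts without any direct coupling, while a matched pair with private noise does not.

```python
import numpy as np

rng = np.random.default_rng(1)
T, b, g = 5000, np.log(0.1), 1.2      # bins, log baseline rate, common-noise gain

z = rng.standard_normal(T)            # shared (common) noise trace
lam = np.exp(b + g * z)               # conditional intensity, same drive to both cells
y1, y2 = rng.poisson(lam), rng.poisson(lam)   # conditionally independent spiking

# control pair: private, independent noise traces
lam_a = np.exp(b + g * rng.standard_normal(T))
lam_b = np.exp(b + g * rng.standard_normal(T))
ya, yb = rng.poisson(lam_a), rng.poisson(lam_b)

c_shared = np.corrcoef(y1, y2)[0, 1]   # substantially positive
c_private = np.corrcoef(ya, yb)[0, 1]  # ~0
```

This is why a common-noise term can stand in for the direct-coupling filters of earlier models: the shared latent input alone reproduces synchronized firing at the count level.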


Subject(s)
Action Potentials/physiology , Models, Neurological , Nerve Net/physiology , Retina/cytology , Retinal Ganglion Cells/physiology , Animals , Computer Simulation , In Vitro Techniques , Macaca mulatta , Photic Stimulation , Visual Pathways/physiology
12.
J Neurophysiol ; 106(2): 1038-53, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21511704

ABSTRACT

Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models that describe how a stimulating agent (such as an injected electrical current or a laser light interacting with caged neurotransmitters or photosensitive ion channels) affects the spiking activity of neurons. Based on these models, we solve the reverse problem of finding the best time-dependent modulation of the input, subject to hardware limitations as well as physiologically inspired safety measures, that causes the neuron to emit a spike train that with highest probability will be close to a target spike train. We adopt fast convex constrained optimization methods to solve this problem. Our methods can potentially be implemented in real time and may also be generalized to the case of many cells, suitable for neural prosthesis applications. With the use of biologically sensible parameters and constraints, our method finds stimulation patterns that generate very precise spike trains in simulated experiments. We also tested the intracellular current injection method on pyramidal cells in mouse cortical slices, quantifying the dependence of spiking reliability and timing precision on constraints imposed on the applied currents.


Subject(s)
Action Potentials , Models, Neurological , Neural Networks, Computer , Neurons , Action Potentials/physiology , Neurons/physiology , Time Factors
13.
J Neurosci ; 31(10): 3828-42, 2011 Mar 09.
Article in English | MEDLINE | ID: mdl-21389238

ABSTRACT

Birdsong comprises rich spectral and temporal organization, which might be used for vocal perception. To quantify how this structure could be used, we have reconstructed birdsong spectrograms by combining the spike trains of zebra finch auditory midbrain neurons with information about the correlations present in song. We calculated maximum a posteriori estimates of song spectrograms using a generalized linear model of neuronal responses and a series of prior distributions, each carrying different amounts of statistical information about zebra finch song. We found that spike trains from a population of mesencephalicus lateral dorsalis (MLd) neurons combined with an uncorrelated Gaussian prior can estimate the amplitude envelope of song spectrograms. The same set of responses can be combined with Gaussian priors that have correlations matched to those found across multiple zebra finch songs to yield song spectrograms similar to those presented to the animal. The fidelity of spectrogram reconstructions from MLd responses relies more heavily on prior knowledge of spectral correlations than temporal correlations. However, the best reconstructions combine MLd responses with both spectral and temporal correlations.


Subject(s)
Mesencephalon/physiology , Neurons/physiology , Sound Spectrography/methods , Vocalization, Animal/physiology , Acoustic Stimulation , Action Potentials/physiology , Animals , Electrophysiology , Finches , Signal Processing, Computer-Assisted
14.
Neural Comput ; 23(1): 1-45, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20964538

ABSTRACT

One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
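Item (1), MAP decoding under a concave log-likelihood, reduces to a smooth convex optimization. A self-contained toy version (instantaneous Poisson encoders and a standard-normal prior; every parameter choice here is an assumption for illustration, not the paper's model):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, K = 200, 20                         # time bins, neurons
x = rng.standard_normal(T)             # true stimulus, drawn from the N(0,1) prior
a = rng.normal(0.0, 1.0, K)            # per-neuron stimulus weights
b = 0.5                                # shared log-baseline offset

def lam(xv):
    return 0.1 * np.exp(a[:, None] * xv[None, :] + b)   # K x T Poisson rates

y = rng.poisson(lam(x))                # observed spike counts

def neg_log_post(xv):                  # convex: Poisson log-likelihood is concave in x
    L = lam(xv)
    return -(y * np.log(L) - L).sum() + 0.5 * np.sum(xv ** 2)

def grad(xv):
    L = lam(xv)
    return -(a[:, None] * (y - L)).sum(axis=0) + xv

x_map = minimize(neg_log_post, np.zeros(T), jac=grad, method="L-BFGS-B").x
corr = np.corrcoef(x_map, x)[0, 1]     # MAP estimate tracks the true stimulus
```

Because the objective is convex, a generic quasi-Newton solver finds the global MAP estimate; the efficiency of the methods in the paper comes from exploiting exactly this concavity at much larger scale.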


Subject(s)
Action Potentials/physiology , Models, Neurological , Nerve Net/physiology , Sensory Receptor Cells/physiology , Signal Processing, Computer-Assisted , Animals , Humans
15.
Neural Comput ; 23(1): 46-96, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20964539

ABSTRACT

Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. 
For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.
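The MAP-versus-posterior-mean gap for priors with sharp edges can be seen in a one-dimensional toy (not the paper's samplers or models): for a density proportional to exp(-x) on x >= 0, the mode sits at the edge, x = 0, while the mean is 1; a plain random-walk Metropolis chain recovers the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):                          # density ~ exp(-x) on x >= 0 (sharp edge at 0)
    return -x if x >= 0 else -np.inf

x, samples = 1.0, []
for _ in range(60_000):                # random-walk Metropolis
    prop = x + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_p(prop) - log_p(x):
        x = prop                       # proposals below the edge are always rejected
    samples.append(x)

post_mean = np.mean(samples[10_000:])  # close to 1, far from the MAP at 0
```

When the posterior is this asymmetric, the posterior mean minimizes mean-squared error while the MAP estimate sits at the boundary, which is the qualitative effect the abstract reports for edge-and-corner priors.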


Subject(s)
Action Potentials/physiology , Markov Chains , Monte Carlo Method , Nerve Net/physiology , Neurons/physiology , Algorithms , Animals , Bayes Theorem , Humans , Neural Networks, Computer , Retinal Ganglion Cells/physiology , Signal Processing, Computer-Assisted
16.
Adv Neural Inf Process Syst ; 24: 738-746, 2011 Dec.
Article in English | MEDLINE | ID: mdl-28781497

ABSTRACT

Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals 'unbelievable.' This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some learned mean values, can achieve the unbelievable marginals.

17.
J Comput Neurosci ; 29(1-2): 107-126, 2010 Aug.
Article in English | MEDLINE | ID: mdl-19649698

ABSTRACT

State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
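The "bandedness" point above is concrete: with a first-difference smoothing penalty, the posterior-mode update solves a tridiagonal linear system, which a banded solver handles in O(T) time. A sketch (the smoothing problem and its parameters are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_banded

rng = np.random.default_rng(0)
T, lam = 200, 5.0                      # time bins, smoothness penalty weight
y = rng.standard_normal(T).cumsum() + 0.3 * rng.standard_normal(T)  # noisy trace

# Gaussian smoother: minimize ||x - y||^2 + lam * ||D1 x||^2
# => (I + lam * D1^T D1) x = y, a tridiagonal (banded) system
main = 1.0 + lam * np.r_[1.0, 2.0 * np.ones(T - 2), 1.0]
off = -lam * np.ones(T - 1)

ab = np.zeros((3, T))                  # banded storage expected by solve_banded
ab[0, 1:], ab[1, :], ab[2, :-1] = off, main, off
x_banded = solve_banded((1, 1), ab, y)       # O(T) banded solve

A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
x_dense = np.linalg.solve(A, y)              # O(T^3) dense reference
```

The two solutions agree to machine precision; the banded route is what makes these direct-optimization smoothers scale linearly in the length of the recording.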


Subject(s)
Computer Simulation , Models, Neurological , Models, Statistical , Neurons/physiology , Action Potentials/physiology , Animals , Retinal Ganglion Cells/physiology , Synapses/physiology
18.
J Opt Soc Am A Opt Image Sci Vis ; 26(11): B25-42, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19884914

ABSTRACT

A major open problem in systems neuroscience is to understand the relationship between behavior and the detailed spiking properties of neural populations. We assess how faithfully velocity information can be decoded from a population of spiking model retinal neurons whose spatiotemporal receptive fields and ensemble spike train dynamics are closely matched to real data. We describe how to compute the optimal Bayesian estimate of image velocity given the population spike train response and show that, in the case of global translation of an image with known intensity profile, on average the spike train ensemble signals speed with a fractional standard deviation of about 2% across a specific set of stimulus conditions. We further show how to compute the Bayesian velocity estimate in the case where we only have some a priori information about the (naturalistic) spatial correlation structure of the image but do not know the image explicitly. As expected, the performance of the Bayesian decoder is shown to be less accurate with decreasing prior image information. There turns out to be a close mathematical connection between a biologically plausible "motion energy" method for decoding the velocity and the Bayesian decoder in the case that the image is not known. Simulations using the motion energy method and the Bayesian decoder with unknown image reveal that they result in fractional standard deviations of 10% and 6%, respectively, across the same set of stimulus conditions. Estimation performance is rather insensitive to the details of the precise receptive field location, correlated activity between cells, and spike timing.


Subject(s)
Motion Perception/physiology , Retina/physiology , Bayes Theorem , Computer Simulation , Humans , Models, Biological , Models, Neurological , Models, Statistical , Psychophysics , Retina/anatomy & histology , Retinal Ganglion Cells/cytology , Vision, Ocular