Results 1 - 19 of 19
1.
PLoS Comput Biol ; 17(11): e1009517, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34843452

ABSTRACT

Making good decisions requires updating beliefs according to new evidence. This is a dynamical process that is prone to biases: in some cases, beliefs become entrenched and resistant to new evidence (leading to primacy effects), while in other cases, beliefs fade over time and rely primarily on later evidence (leading to recency effects). How and why either type of bias dominates in a given context is an important open question. Here, we study this question in classic perceptual decision-making tasks, where, puzzlingly, previous empirical studies differ in the kinds of biases they observe, ranging from primacy to recency, despite seemingly equivalent tasks. We present a new model, based on hierarchical approximate inference and derived from normative principles, that not only explains both primacy and recency effects in existing studies, but also predicts how the type of bias should depend on the statistics of stimuli in a given task. We verify this prediction in a novel visual discrimination task with human observers, finding that each observer's temporal bias changed as a result of changing the key stimulus statistics identified by our model. The key dynamic that leads to a primacy bias in our model is an overweighting of new sensory information that agrees with the observer's existing belief, a type of 'confirmation bias'. By fitting an extended drift-diffusion model to our data, we rule out an alternative explanation for primacy effects due to bounded integration. Taken together, our results resolve a major discrepancy among existing perceptual decision-making studies, and suggest that a key source of bias in human decision-making is approximate hierarchical inference.
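The bounded-integration alternative mentioned in the abstract is easy to illustrate in simulation. The sketch below is illustrative only; the stimulus strength, bound, and trial counts are arbitrary choices, not the authors' fitted extended drift-diffusion model. It shows that an absorbing bound by itself produces a primacy-like psychophysical kernel.

```python
# Minimal sketch, assuming a simple bounded accumulator (not the paper's model):
# once an absorbing bound is hit, later evidence frames no longer affect the choice.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames, bound = 20_000, 10, 2.5
frames = rng.normal(0.1, 1.0, size=(n_trials, n_frames))   # weak signal + noise per frame

choices = np.zeros(n_trials)
for t in range(n_trials):
    dv = 0.0
    for k in range(n_frames):
        dv += frames[t, k]
        if abs(dv) >= bound:            # absorbing bound: remaining frames are ignored
            break
    choices[t] = 1.0 if dv > 0 else 0.0

# "Psychophysical kernel": correlation of each frame's evidence with the final choice.
kernel = [round(np.corrcoef(frames[:, k], choices)[0, 1], 3) for k in range(n_frames)]
print(kernel)   # weights fall off across frames, i.e. a primacy-like bias
```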


Subject(s)
Bias , Decision Making , Perception , Humans , Models, Psychological
2.
J Neurosci ; 39(20): 3867-3881, 2019 May 15.
Article in English | MEDLINE | ID: mdl-30833509

ABSTRACT

Sensory information is encoded by populations of cortical neurons. Yet, it is unknown how this information is used for even simple perceptual choices such as discriminating orientation. To determine the computation underlying this perceptual choice, we took advantage of the robust visual adaptation in mouse primary visual cortex (V1). We first designed a stimulus paradigm in which we could vary the degree of neuronal adaptation measured in V1 during an orientation discrimination task. We then determined how adaptation affects task performance for mice of both sexes and tested which neuronal computations are most consistent with the behavioral results given the adapted population responses in V1. Despite increasing the reliability of the population representation of orientation among neurons, and improving the ability of a variety of optimal decoders to discriminate target from distractor orientations, adaptation increases animals' behavioral thresholds. Decoding the animals' choice from neuronal activity revealed that this unexpected effect on behavior could be explained by an overreliance of the perceptual choice circuit on target-preferring neurons and a failure to appropriately discount the activity of neurons that prefer the distractor. Consistent with this all-positive computation, we find that animals' task performance is susceptible to subtle perturbations of distractor orientation and optogenetic suppression of neuronal activity in V1. This suggests that to solve this task the circuit has adopted a suboptimal and task-specific computation that discards important task-related information.

SIGNIFICANCE STATEMENT: A major goal in systems neuroscience is to understand how sensory signals are used to guide behavior. This requires determining what information in sensory cortical areas is used, and how it is combined, by downstream perceptual choice circuits. Here we demonstrate that when performing a go/no-go orientation discrimination task, mice suboptimally integrate signals from orientation-tuned visual cortical neurons. While they appropriately positively weight target-preferring neurons, they fail to negatively weight distractor-preferring neurons. We propose that this all-positive computation may be adopted because of its simple learning rules and faster processing, and may be a common approach to perceptual decision-making when task conditions allow.
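A toy simulation can make the "all-positive" readout idea concrete. The spike counts and two single-neuron "pools" below are invented for illustration, not taken from the recorded V1 data; the point is simply that ignoring distractor-preferring neurons, rather than discounting them, degrades discrimination.

```python
# Illustrative sketch, assuming Poisson spike counts and made-up mean rates:
# a readout that drops distractor-preferring neurons does worse than one that
# subtracts them.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 5000
# mean spike counts of [target-preferring pool, distractor-preferring pool]
counts_on_target = np.array([20.0, 8.0])
counts_on_distractor = np.array([10.0, 18.0])

resp_t = rng.poisson(counts_on_target, size=(n_trials, 2))
resp_d = rng.poisson(counts_on_distractor, size=(n_trials, 2))

def accuracy(weights, criterion):
    hits = (resp_t @ weights > criterion).mean()            # "go" on target trials
    correct_rejects = (resp_d @ weights <= criterion).mean()  # "no-go" on distractor trials
    return 0.5 * (hits + correct_rejects)

signed = np.array([1.0, -1.0])        # discounts distractor-preferring neurons
all_positive = np.array([1.0, 0.0])   # ignores them instead of discounting them
print("signed readout      :", round(accuracy(signed, 0.0), 3))
print("all-positive readout:", round(accuracy(all_positive, 15.0), 3))
```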


Subject(s)
Adaptation, Physiological , Choice Behavior/physiology , Discrimination, Psychological/physiology , Neurons/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Animals , Female , Male , Mice, Inbred C57BL , Models, Neurological , Optogenetics , Psychomotor Performance/physiology
3.
Elife ; 8: 2019 Jan 8.
Article in English | MEDLINE | ID: mdl-30620334

ABSTRACT

Most behaviors, such as making tea, are not stereotypical but have an obvious structure. However, analytical methods to objectively extract structure from non-stereotyped behaviors are immature. In this study, we analyze the locomotion of fruit flies and show that this non-stereotyped behavior is well described by a Hierarchical Hidden Markov Model (HHMM). The HHMM shows that a fly's locomotion can be decomposed into a few locomotor features, and that odors modulate locomotion by altering the time a fly spends performing different locomotor features. Importantly, although all flies in our dataset use the same set of locomotor features, individual flies vary considerably in how often they employ a given locomotor feature, and in how this usage is modulated by odor. This variation is so large that the behavior of individual flies is best understood as being grouped into at least three to five distinct clusters, rather than as variations around an average fly.
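As a rough illustration of segmenting locomotion into discrete states, the sketch below fits a flat (non-hierarchical) Gaussian HMM with the third-party hmmlearn package to invented trajectory features. It is a simplified stand-in for the paper's hierarchical HMM, and all data and feature names are made up.

```python
# Simplified sketch: a flat Gaussian HMM on toy locomotion features,
# standing in for the paper's hierarchical HMM.
import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party package, assumed installed

rng = np.random.default_rng(2)
# made-up per-frame trajectory features: [forward speed, angular velocity]
speed = np.abs(rng.normal(2.0, 1.0, 1000))
turn = rng.normal(0.0, 0.5, 1000)
X = np.column_stack([speed, turn])

model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)                              # frame-by-frame state labels
occupancy = np.bincount(states, minlength=4) / len(states)
print("fraction of time in each putative locomotor state:", np.round(occupancy, 2))
# odor modulation would then be assessed by comparing occupancies across conditions
```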


Subject(s)
Drosophila melanogaster/physiology , Locomotion/physiology , Odorants , Statistics as Topic , Animals , Behavior, Animal , Markov Chains , Models, Biological
4.
Nat Neurosci ; 21(10): 1442-1451, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30224803

ABSTRACT

Actions are guided by a Bayesian-like interaction between priors based on experience and current sensory evidence. Here we unveil a complete neural implementation of Bayesian-like behavior, including adaptation of a prior. We recorded the spiking of single neurons in the smooth eye-movement region of the frontal eye fields (FEFSEM), a region that is causally involved in smooth-pursuit eye movements. Monkeys tracked moving targets in contexts that set different priors for target speed. Before the onset of target motion, preparatory activity encodes and adapts in parallel with the behavioral adaptation of the prior. During the initiation of pursuit, FEFSEM output encodes a maximum a posteriori estimate of target speed based on a reliability-weighted combination of the prior and sensory evidence. FEFSEM responses during pursuit are sufficient both to adapt a prior that may be stored in FEFSEM and, through known downstream pathways, to cause Bayesian-like behavior in pursuit.
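The reliability-weighted combination described here has a simple closed form when the prior and sensory likelihood are approximated as Gaussians. The numbers below are illustrative, not values from the recordings.

```python
# Sketch of a reliability-weighted (Bayesian) estimate, assuming Gaussian
# prior and likelihood with made-up means and standard deviations.
import numpy as np

prior_mean, prior_sd = 10.0, 4.0      # deg/s: contextual prior over target speed
sense_mean, sense_sd = 20.0, 6.0      # deg/s: noisy sensory estimate on this trial

w_prior = 1.0 / prior_sd ** 2         # weights are reliabilities (inverse variances)
w_sense = 1.0 / sense_sd ** 2
map_speed = (w_prior * prior_mean + w_sense * sense_mean) / (w_prior + w_sense)
posterior_sd = np.sqrt(1.0 / (w_prior + w_sense))

print(f"MAP speed estimate: {map_speed:.1f} deg/s (posterior sd {posterior_sd:.1f})")
# a narrower (more reliable) prior pulls the estimate further toward prior_mean
```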


Subject(s)
Bayes Theorem , Eye Movements/physiology , Frontal Lobe/cytology , Motion Perception/physiology , Neurons/physiology , Action Potentials/physiology , Adaptation, Physiological , Animals , Macaca mulatta , Male , Models, Neurological , Photic Stimulation , Visual Fields
5.
Neuroimage ; 162: 138-150, 2017 Nov 15.
Article in English | MEDLINE | ID: mdl-28882633

ABSTRACT

Real-life decision-making often involves combining multiple probabilistic sources of information under finite time and cognitive resources. To mitigate these pressures, people "satisfice", foregoing a full evaluation of all available evidence to focus on a subset of cues that allow for fast and "good-enough" decisions. Although this form of decision-making likely mediates many of our everyday choices, very little is known about the way in which the neural encoding of cue information changes when we satisfice under time pressure. Here, we combined human functional magnetic resonance imaging (fMRI) with a probabilistic classification task to characterize neural substrates of multi-cue decision-making under low (1500 ms) and high (500 ms) time pressure. Using variational Bayesian inference, we analyzed participants' choices to track and quantify cue usage under each experimental condition, which was then applied to model the fMRI data. Under low time pressure, participants performed near-optimally, appropriately integrating all available cues to guide choices. Both cortical (prefrontal and parietal cortex) and subcortical (hippocampal and striatal) regions encoded individual cue weights, and activity linearly tracked trial-by-trial variations in the amount of evidence and decision uncertainty. Under increased time pressure, participants adaptively shifted to using a satisficing strategy by discounting the least informative cue in their decision process. This strategic change in decision-making was associated with an increased involvement of the dopaminergic midbrain, striatum, thalamus, and cerebellum in representing and integrating cue values. We conclude that satisficing the probabilistic inference process under time pressure leads to a cortical-to-subcortical shift in the neural drivers of decisions.
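The accuracy cost of the satisficing strategy can be illustrated with a small simulation. The cue weights and generative assumptions below are invented for the example; they only show that dropping the least informative of several probabilistic cues sacrifices little accuracy.

```python
# Illustrative simulation, assuming four binary cues with made-up diagnostic weights:
# dropping the weakest cue ("satisficing") barely reduces classification accuracy.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 50_000
cue_weights = np.array([1.2, 0.9, 0.6, 0.2])    # assumed log-odds carried by each cue

category = rng.integers(0, 2, n_trials)                      # true category, 0 or 1
p_agree = 1.0 / (1.0 + np.exp(-cue_weights))                  # P(cue points to true category)
agree = rng.random((n_trials, 4)) < p_agree
cues = np.where(agree, category[:, None], 1 - category[:, None])

def accuracy(use_cues):
    # ideal-observer evidence: sum of signed cue weights over the cues that are used
    evidence = ((2 * cues[:, use_cues] - 1) * cue_weights[use_cues]).sum(axis=1)
    return ((evidence > 0).astype(int) == category).mean()

print("integrate all four cues:", round(accuracy([0, 1, 2, 3]), 3))
print("drop the weakest cue   :", round(accuracy([0, 1, 2]), 3))
```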


Subject(s)
Brain/physiology , Decision Making/physiology , Adolescent , Adult , Bayes Theorem , Brain Mapping , Choice Behavior/physiology , Cues , Female , Humans , Magnetic Resonance Imaging , Male , Time Factors , Young Adult
6.
PLoS Comput Biol ; 13(8): e1005645, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28827790

ABSTRACT

Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world (contrast and luminance for vision, pitch and intensity for sound) and assemble a stimulus set that systematically varies along these dimensions. Subsequent analysis of neural responses to these stimuli typically focuses on regression models, with experimenter-controlled features as predictors and spike counts or firing rates as responses. Unfortunately, this approach requires knowledge in advance about the relevant features coded by a given population of neurons. For domains as complex as social interaction or natural movement, however, the relevant feature space is poorly understood, and an arbitrary a priori choice of features may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we are modeling neural activity as driven by multiple simultaneous stimulus features rather than intrinsic neural dynamics. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to cluster neural responses and demonstrate successful recovery of features corresponding to monkeys and faces in the image set.


Subject(s)
Action Potentials/physiology , Models, Neurological , Neurons/physiology , Algorithms , Animals , Bayes Theorem , Cluster Analysis , Computational Biology , Macaca , Photic Stimulation
7.
J Exp Psychol Learn Mem Cogn ; 42(12): 1937-1956, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27253846

ABSTRACT

Much of our real-life decision making is bounded by uncertain information, limitations in cognitive resources, and a lack of time to allocate to the decision process. It is thought that humans overcome these limitations through satisficing, fast but "good-enough" heuristic decision making that prioritizes some sources of information (cues) while ignoring others. However, the decision-making strategies we adopt under uncertainty and time pressure, for example during emergencies that demand split-second choices, are presently unknown. To characterize these decision strategies quantitatively, the present study examined how people solve a novel multicue probabilistic classification task under varying time pressure, by tracking shifts in decision strategies using variational Bayesian inference. We found that under low time pressure, participants correctly weighted and integrated all available cues to arrive at near-optimal decisions. With increasingly demanding, subsecond time pressures, however, participants systematically discounted a subset of the cue information by dropping the least informative cue(s) from their decision-making process. Thus, the human cognitive apparatus copes with uncertainty and severe time pressure by adopting a "drop-the-worst" cue decision-making strategy that minimizes cognitive time and effort investment while preserving the consideration of the most diagnostic cue information, thus maintaining "good-enough" accuracy. This advance in our understanding of satisficing strategies could form the basis of predicting human choices in high-time-pressure scenarios.
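One simple way to track cue usage from choices, as a plain stand-in for the variational Bayesian analysis used in the study, is to regress choices on the cues; a near-zero recovered weight marks a dropped cue. The simulated "participant" and the weights below are invented for illustration.

```python
# Sketch: recovering cue weights from choices with logistic regression,
# standing in for the study's variational Bayesian strategy analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials = 2000
cues = rng.integers(0, 2, size=(n_trials, 4)) * 2 - 1       # four binary cues coded +/-1

true_weights = np.array([1.5, 1.0, 0.7, 0.0])               # last cue effectively dropped
p_choose_A = 1.0 / (1.0 + np.exp(-(cues @ true_weights)))
choices = (rng.random(n_trials) < p_choose_A).astype(int)

# large C means (almost) no regularization
model = LogisticRegression(C=1e6).fit(cues, choices)
print("recovered cue weights:", np.round(model.coef_[0], 2))
# a recovered weight near zero flags a cue the simulated decision maker ignored
```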


Subject(s)
Cues , Decision Making , Adult , Aged , Analysis of Variance , Bayes Theorem , Female , Humans , Judgment , Logistic Models , Male , Middle Aged , Probability , Psychological Tests , Reaction Time , Stress, Psychological , Time Factors , Young Adult
8.
Proc Natl Acad Sci U S A ; 110(50): 20332-7, 2013 Dec 10.
Article in English | MEDLINE | ID: mdl-24272938

ABSTRACT

Categorization is a cornerstone of perception and cognition. Computationally, categorization amounts to applying decision boundaries in the space of stimulus features. We designed a visual categorization task in which optimal performance requires observers to incorporate trial-to-trial knowledge of the level of sensory uncertainty when setting their decision boundaries. We found that humans and monkeys did adjust their decision boundaries from trial to trial as the level of sensory noise varied, with some subjects performing near optimally. We constructed a neural network that implements uncertainty-based, near-optimal adjustment of decision boundaries. Divisive normalization emerges automatically as a key neural operation in this network. Our results offer an integrated computational and mechanistic framework for categorization under uncertainty.
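The uncertainty-dependent boundary adjustment has a compact closed form when the two categories are Gaussian distributions of the stimulus feature. The category widths and noise levels below are assumed for the sketch, not the task's actual parameters.

```python
# Sketch of uncertainty-dependent decision boundaries, assuming a narrow and a
# broad Gaussian category (illustrative parameters, equal priors).
import numpy as np

sd_cat1, sd_cat2 = 3.0, 12.0            # deg: spread of orientations in each category

def boundary(sensory_sd):
    """|measurement| below this value favors the narrow category.

    Obtained by equating the two Gaussian likelihoods of the measurement,
    each with variance (category variance + sensory noise variance).
    """
    v1 = sd_cat1 ** 2 + sensory_sd ** 2
    v2 = sd_cat2 ** 2 + sensory_sd ** 2
    return np.sqrt(np.log(v2 / v1) * (v1 * v2) / (v2 - v1))

for noise in (1.0, 4.0, 8.0):
    print(f"sensory sd {noise:>4.1f} deg -> boundary at +/- {boundary(noise):.1f} deg")
# the optimal boundary widens as sensory noise grows, which is the trial-to-trial
# adjustment the study tests behaviorally
```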


Subject(s)
Concept Formation/physiology , Decision Making/physiology , Haplorhini/physiology , Models, Neurological , Nerve Net , Visual Perception/physiology , Animals , Bayes Theorem , Humans , Likelihood Functions , Species Specificity
9.
Nat Neurosci ; 16(9): 1170-8, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23955561

ABSTRACT

There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.


Subject(s)
Brain/physiology , Models, Neurological , Neurons/physiology , Probability , Animals , Brain/cytology , Humans , Learning , Perception , Sensation
10.
Neuron ; 74(1): 30-9, 2012 Apr 12.
Article in English | MEDLINE | ID: mdl-22500627

ABSTRACT

Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable.


Subject(s)
Behavior , Behavioral Research , Neurons/physiology , Perceptual Masking/physiology , Sensation/physiology , Animals , Brain/cytology , Brain/physiology , Field Dependence-Independence , Humans , Models, Neurological , Models, Psychological , Reproducibility of Results , Uncertainty
11.
J Neurosci ; 31(43): 15310-9, 2011 Oct 26.
Article in English | MEDLINE | ID: mdl-22031877

ABSTRACT

A wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. This computation comes up in seemingly unrelated tasks, including causal reasoning, odor recognition, motor control, visual tracking, coordinate transformations, visual search, decision making, and object recognition, to name just a few. The question we address here is: how could neural circuits implement such marginalizations? We show that when spike trains exhibit a particular type of statistics (associated with constant Fano factors and gain-invariant tuning curves, as is often reported in vivo), some of the more common marginalizations can be achieved with networks that implement a quadratic nonlinearity and divisive normalization, the latter being a type of nonlinear lateral inhibition that has been widely reported in neural circuits. Previous studies have implicated divisive normalization in contrast gain control and attentional modulation. Our results raise the possibility that it is involved in yet another, highly critical, computation: near optimal marginalization in a remarkably wide range of tasks.
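For readers unfamiliar with the operation named here, the sketch below implements the elementary quadratic-plus-divisive-normalization step on a toy population response; it is not the paper's full marginalization network, and all numbers are arbitrary.

```python
# Toy version of a quadratic nonlinearity with divisive normalization
# (the building block named in the abstract), on a made-up population drive.
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2):
    """Each unit's drive raised to the power n, divided by the population's
    summed powered drive plus a semisaturation constant (nonlinear lateral
    inhibition)."""
    powered = drive ** n
    return powered / (sigma ** n + powered.sum())

population_drive = np.array([1.0, 3.0, 9.0, 3.0, 1.0])   # toy tuning-curve-shaped input
print(np.round(divisive_normalization(population_drive), 3))
```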


Subject(s)
Models, Neurological , Nerve Net/physiology , Neurons/physiology , Normal Distribution , Action Potentials/physiology , Computer Simulation , Humans , Probability
12.
Nat Neurosci ; 14(6): 783-90, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21552276

ABSTRACT

The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance.
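The reliability-weighted, nonlinear combination rule for this kind of detection task can be written down directly for the homogeneous-distractor case. All values below (orientations, per-item noise levels) are assumed for illustration.

```python
# Sketch of reliability-weighted target detection among distractors
# (homogeneous-distractor case, equal priors, at most one target; toy numbers).
import numpy as np

rng = np.random.default_rng(5)
target_ori, distractor_ori = 10.0, 0.0          # deg
sigmas = np.array([2.0, 2.0, 6.0, 6.0])         # per-item sensory noise (reliability)

def decision_variable(x):
    """Mean likelihood ratio across items; > 1 means 'target present' is more likely."""
    lr = np.exp((np.square(x - distractor_ori) - np.square(x - target_ori))
                / (2 * sigmas ** 2))
    return lr.mean()

# one simulated target-present display: the target sits at item 0
true_oris = np.array([target_ori, distractor_ori, distractor_ori, distractor_ori])
x = true_oris + rng.normal(0, sigmas)
dv = decision_variable(x)
print("decision variable:", round(dv, 2), "-> report", "present" if dv > 1 else "absent")
# less reliable items (larger sigma) contribute flatter likelihood ratios, so they
# are automatically down-weighted
```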


Subject(s)
Discrimination, Psychological , Models, Neurological , Visual Perception , Humans , Observation/methods , Photic Stimulation/methods , Reproducibility of Results
13.
Nat Neurosci ; 14(5): 642-8, 2011 May.
Article in English | MEDLINE | ID: mdl-21460833

ABSTRACT

Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.


Subject(s)
Discrimination, Psychological/physiology , Learning/physiology , Models, Neurological , Orientation/physiology , Visual Cortex/physiology , Visual Perception/physiology , Animals , Cerebral Cortex/physiology , Computer Simulation , Humans , Neural Networks, Computer , Neural Pathways/physiology , Neurons/physiology , Noise , Probability , Psychophysics , Thalamus/physiology , Visual Cortex/cytology , Visual Fields/physiology
14.
Neural Comput ; 21(11): 2991-3009, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19635018

ABSTRACT

A feedforward spiking network represents a nonlinear transformation that maps a set of input spikes to a set of output spikes. This mapping transforms the joint probability distribution of incoming spikes into a joint distribution of output spikes. We present an algorithm for synaptic adaptation that aims to maximize the entropy of this output distribution, thereby creating a model for the joint distribution of the incoming point processes. The learning rule that is derived depends on the precise pre- and postsynaptic spike timings. When trained on correlated spike trains, the network learns to extract independent spike trains, thereby uncovering the underlying statistical structure and creating a more efficient representation of the incoming spike trains.


Subject(s)
Neural Networks, Computer , Neurons/physiology , Algorithms , Electrophysiology , Entropy , Excitatory Postsynaptic Potentials/physiology , Feedback , Information Theory , Models, Neurological , Neuronal Plasticity/physiology , Nonlinear Dynamics , Poisson Distribution
15.
Neuron ; 60(6): 1142-52, 2008 Dec 26.
Article in English | MEDLINE | ID: mdl-19109917

ABSTRACT

When making a decision, one must first accumulate evidence, often over time, and then select the appropriate action. Here, we present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal's performance. We present experimental evidence consistent with this prediction and discuss other predictions applicable to more general settings.
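The claim that Poisson-like spiking makes evidence accumulation linear in neural activity can be checked in a few lines: the log-likelihood ratio of a spike-count vector is a weighted sum of the counts. The rates, population size, and durations below are arbitrary.

```python
# Sketch: with Poisson spiking, accumulating evidence reduces to linearly
# summing spike counts (toy two-alternative example with made-up rates).
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_steps, dt = 50, 20, 0.05
rates_A = rng.uniform(5, 25, n_neurons)            # Hz under hypothesis A
rates_B = rng.uniform(5, 25, n_neurons)            # Hz under hypothesis B

spikes = rng.poisson(rates_A * dt, size=(n_steps, n_neurons))  # world is actually A

# Poisson log-likelihood ratio per step: counts enter linearly, with weights
# log(rate_A / rate_B); the rate-difference term does not depend on the counts.
weights = np.log(rates_A / rates_B)
llr_per_step = spikes @ weights - dt * (rates_A - rates_B).sum()
cumulative_llr = np.cumsum(llr_per_step)
print("log LR after steps 1, 6, 11, 16:", np.round(cumulative_llr[::5], 1))
# a positive, growing log LR correctly favors hypothesis A
```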


Subject(s)
Bayes Theorem , Decision Making/physiology , Models, Neurological , Neurons/physiology , Action Potentials/physiology , Animals , Computer Simulation , Haplorhini , Humans , Motion Perception/physiology , Neural Networks, Computer , Nonlinear Dynamics , Photic Stimulation , Reaction Time , Time Factors
16.
Curr Opin Neurobiol ; 18(2): 217-22, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18678253

ABSTRACT

Systems neuroscience traditionally conceptualizes a population of spiking neurons as merely encoding the value of a stimulus. Yet, psychophysics has revealed that people take into account stimulus uncertainty when performing sensory or motor computations and do so in a nearly Bayes-optimal way. This suggests that neural populations do not encode just a single value but an entire probability distribution over the stimulus. Several such probabilistic codes have been proposed, including one that utilizes the structure of neural variability to enable simple neural implementations of probabilistic computations such as optimal cue integration. This approach provides a quantitative link between Bayes-optimal behaviors and specific neural operations. It allows for novel ways to evaluate probabilistic codes and for predictions for physiological population recordings.


Subject(s)
Bayes Theorem , Choice Behavior/physiology , Nerve Net/physiology , Neurons/physiology , Algorithms , Cues , Humans , Models, Neurological , Models, Statistical
17.
Neural Comput ; 19(5): 1344-61, 2007 May.
Article in English | MEDLINE | ID: mdl-17381269

ABSTRACT

From first principles, we derive a quadratic nonlinear, first-order dynamical system capable of performing exact Bayes-Markov inferences for a wide class of biologically plausible stimulus-dependent patterns of activity while simultaneously providing an online estimate of model performance. This is accomplished by constructing a dynamical system that has solutions proportional to the probability distribution over the stimulus space, but with a constant of proportionality adjusted to provide a local estimate of the probability of the recent observations of stimulus-dependent activity, given model parameters. Next, we transform this exact equation to generate nonlinear equations for the exact evolution of log likelihood and log-likelihood ratios and show that when the input has low amplitude, linear rate models for both the likelihood and the log-likelihood functions follow naturally from these equations. We use these four explicit representations of the probability distribution to argue that, in contrast to the arguments of previous work, the dynamical system for the exact evolution of the likelihood (as opposed to the log likelihood or log-likelihood ratios) not only can be mapped onto a biologically plausible network but is also more consistent with physiological observations.
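The discrete-time counterpart of the Bayes-Markov inference described here is the familiar hidden-Markov filtering recursion: predict with the Markov dynamics, then reweight by the observation likelihood. The toy parameters below are invented; the paper itself derives the continuous-time dynamical-system version.

```python
# Discrete-time sketch of Bayes-Markov filtering (toy two-state HMM with
# Poisson observations; invented parameters, not the paper's derivation).
import numpy as np

transition = np.array([[0.95, 0.05],      # stimulus state transition probabilities
                       [0.05, 0.95]])
emission_rates = np.array([5.0, 20.0])    # Hz: observation (spike) rate in each state
dt = 0.02

rng = np.random.default_rng(7)
posterior = np.array([0.5, 0.5])
state = 0
for step in range(200):
    if rng.random() < transition[state, 1 - state]:
        state = 1 - state                                  # latent state evolves
    spikes = rng.poisson(emission_rates[state] * dt)
    predicted = transition.T @ posterior                   # predict (Markov dynamics)
    likelihood = np.exp(-emission_rates * dt) * (emission_rates * dt) ** spikes
    posterior = predicted * likelihood                     # update (observation)
    posterior /= posterior.sum()

print("final posterior over the two states:", np.round(posterior, 2), "| true state:", state)
```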


Subject(s)
Markov Chains , Neural Networks, Computer , Neurons/physiology , Animals , Computer Simulation , Models, Neurological , Nonlinear Dynamics
18.
J Physiol Paris ; 100(1-3): 125-32, 2006.
Article in English | MEDLINE | ID: mdl-17067787

ABSTRACT

Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
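As background for the encoding step the model is built on, the sketch below runs plain matching pursuit on a random unit-norm dictionary. The paper's algorithm is a biologically plausible variant with learned receptive fields, so this is only the generic operation with made-up data.

```python
# Generic matching-pursuit sketch (random dictionary standing in for learned
# receptive fields): greedily explain the input with a few dictionary atoms.
import numpy as np

rng = np.random.default_rng(8)
dictionary = rng.normal(size=(16, 64))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)   # unit-norm atoms

# synthetic input built from two atoms plus a little noise
signal = 3.0 * dictionary[2] - 2.0 * dictionary[9] + 0.05 * rng.normal(size=64)
residual, code = signal.copy(), np.zeros(16)

for _ in range(4):                                # greedily explain the residual
    scores = dictionary @ residual
    best = int(np.argmax(np.abs(scores)))
    code[best] += scores[best]
    residual -= scores[best] * dictionary[best]

print("active atoms:", np.nonzero(np.abs(code) > 0.1)[0])
print("residual norm:", round(float(np.linalg.norm(residual)), 3))
```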


Subject(s)
Feedback , Learning/physiology , Models, Neurological , Visual Cortex/physiology , Visual Fields , Algorithms , Animals , Computer Simulation , Forecasting , Humans , Neural Networks, Computer , Photic Stimulation
19.
Nat Neurosci ; 9(11): 1432-8, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17057707

ABSTRACT

Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes' rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.
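The central result, that Poisson-like variability turns optimal cue combination into a linear sum of population activities, can be verified numerically. The tuning widths, gains, and grid below are arbitrary choices; the sketch compares the posterior decoded from the summed activity with the product of the single-cue posteriors.

```python
# Sketch of a probabilistic population code: with independent Poisson variability
# and shared Gaussian tuning, adding two population responses gives the same
# posterior as multiplying their individual posteriors (toy parameters).
import numpy as np

rng = np.random.default_rng(9)
s_grid = np.linspace(-20, 20, 201)
prefs = np.linspace(-20, 20, 32)

def tuning(gain):
    # rows = stimulus values, columns = neurons (Gaussian tuning, width 6 deg)
    return gain * np.exp(-0.5 * ((s_grid[:, None] - prefs[None, :]) / 6.0) ** 2)

def posterior(counts, gain):
    log_post = counts @ np.log(tuning(gain)).T - tuning(gain).sum(axis=1)
    log_post -= log_post.max()
    p = np.exp(log_post)
    return p / p.sum()

true_s = 5.0
idx = np.argmin(np.abs(s_grid - true_s))
r1 = rng.poisson(tuning(8.0)[idx])        # cue 1: higher gain (more reliable)
r2 = rng.poisson(tuning(4.0)[idx])        # cue 2: lower gain (less reliable)

p_combined = posterior(r1 + r2, 12.0)     # decode the linear sum of activities
p_product = posterior(r1, 8.0) * posterior(r2, 4.0)
p_product /= p_product.sum()              # optimal combination of the two cues
print("max difference between the two posteriors:",
      float(np.abs(p_combined - p_product).max()))   # numerically zero
```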


Subject(s)
Bayes Theorem , Cerebral Cortex/physiology , Models, Neurological , Models, Statistical , Nerve Net/physiology , Algorithms , Humans , Nerve Net/cytology , Normal Distribution , Poisson Distribution