Results 1 - 20 of 48
1.
Elife ; 12, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36881019

ABSTRACT

The ability to associate sensory stimuli with abstract classes is critical for survival. How are these associations implemented in brain circuits? And what governs how neural activity evolves during abstract knowledge acquisition? To investigate these questions, we consider a circuit model that learns to map sensory input to abstract classes via gradient-descent synaptic plasticity. We focus on typical neuroscience tasks (simple, and context-dependent, categorization), and study how both synaptic connectivity and neural activity evolve during learning. To make contact with the current generation of experiments, we analyze activity via standard measures such as selectivity, correlations, and tuning symmetry. We find that the model is able to recapitulate experimental observations, including seemingly disparate ones. We determine how, in the model, the behaviour of these measures depends on details of the circuit and the task. These dependencies make experimentally testable predictions about the circuitry supporting abstract knowledge acquisition in the brain.
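For readers who want a concrete picture, here is a minimal sketch of the kind of model described in the abstract: a small network trained by gradient-descent synaptic plasticity on a two-class categorization task, followed by a simple selectivity measure computed on the hidden layer. The network size, loss function, and selectivity index are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch: a two-layer rate network learns a two-class categorization
# task via gradient descent; we then compute a simple selectivity index per
# hidden unit. Details (network size, loss, selectivity measure) are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_samples = 50, 100, 500

# Two stimulus classes: Gaussian clouds around two random prototypes.
prototypes = rng.normal(size=(2, n_in))
labels = rng.integers(0, 2, size=n_samples)
x = prototypes[labels] + 0.5 * rng.normal(size=(n_samples, n_in))
y = 2.0 * labels - 1.0  # targets +1 / -1

# Random initial weights; train with plain gradient descent on squared error.
W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hid, n_in))
w_out = rng.normal(scale=1.0 / np.sqrt(n_hid), size=n_hid)
lr = 0.05
for epoch in range(200):
    h = np.tanh(x @ W.T)                  # hidden-layer activity
    y_hat = h @ w_out                     # linear readout
    err = y_hat - y
    grad_out = h.T @ err / n_samples
    grad_W = ((err[:, None] * w_out) * (1 - h**2)).T @ x / n_samples
    w_out -= lr * grad_out
    W -= lr * grad_W

# Selectivity index per hidden unit: class difference over pooled std.
h = np.tanh(x @ W.T)
mu0, mu1 = h[labels == 0].mean(0), h[labels == 1].mean(0)
pooled_sd = np.sqrt(0.5 * (h[labels == 0].var(0) + h[labels == 1].var(0))) + 1e-9
selectivity = np.abs(mu1 - mu0) / pooled_sd
print("mean selectivity after learning:", selectivity.mean())
```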


Subject(s)
Learning , Neurosciences , Brain , Knowledge , Neuronal Plasticity
2.
Proc Natl Acad Sci U S A ; 119(11): e2100600119, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35263217

ABSTRACT

Significance: In this work, we explore the hypothesis that biological neural networks optimize their architecture, through evolution, for learning. We study early olfactory circuits of mammals and insects, which have relatively similar structure but a huge diversity in size. We approximate these circuits as three-layer networks and estimate, analytically, the scaling of the optimal hidden-layer size with input-layer size. We find that both longevity and information in the genome constrain the hidden-layer size, so a range of allometric scalings is possible. However, the experimentally observed allometric scalings in mammals and insects are consistent with biologically plausible values. This analysis should pave the way for a deeper understanding of both biological and artificial networks.


Subject(s)
Insecta , Learning , Mammals , Models, Neurological , Olfactory Pathways , Animals , Biological Evolution , Cell Count , Learning/physiology , Mushroom Bodies/cytology , Neural Networks, Computer , Neurons/cytology , Olfactory Pathways/cytology , Olfactory Pathways/growth & development , Piriform Cortex/cytology
3.
PLoS Comput Biol ; 18(1): e1009808, 2022 01.
Article in English | MEDLINE | ID: mdl-35100264

ABSTRACT

Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in studies of sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose an algorithm that provably reaches the MAP (maximum a posteriori) inference solution, but does so using sparse connectivity. Our algorithm is inspired by the circuit of the mouse olfactory bulb, but our approach is general enough to apply to other modalities. In addition, it should be possible to extend it to nonlinear encoding models.
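As a point of reference, the sketch below solves MAP inference in a linear encoding model with a sparse (Laplace) prior using standard ISTA iterations. Note that this baseline uses dense, all-to-all connectivity; the sparse-connectivity algorithm that is the paper's contribution is not reproduced here, and all sizes and constants are arbitrary.

```python
# Sketch of MAP inference in a linear encoding model with a sparse (Laplace)
# prior, solved with ISTA. This uses dense (all-to-all) connectivity, i.e. the
# standard formulation the abstract contrasts with; the sparse-connectivity
# algorithm itself is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_vars, n_active = 80, 200, 5

A = rng.normal(scale=1.0 / np.sqrt(n_neurons), size=(n_neurons, n_vars))
x_true = np.zeros(n_vars)
x_true[rng.choice(n_vars, n_active, replace=False)] = rng.normal(size=n_active)
r = A @ x_true + 0.01 * rng.normal(size=n_neurons)   # noisy neural responses

lam = 0.02                                    # sparsity (prior) strength
step = 1.0 / np.linalg.norm(A, 2) ** 2        # ISTA step size
x = np.zeros(n_vars)
for _ in range(500):
    grad = A.T @ (A @ x - r)                  # gradient of the likelihood term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

recovered = set(np.flatnonzero(np.abs(x) > 1e-3))
print("true support recovered:", set(np.flatnonzero(x_true)) <= recovered)
```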


Subject(s)
Algorithms , Sensory Receptor Cells/physiology , Action Potentials/physiology , Animals , Mice , Nonlinear Dynamics
4.
Nat Neurosci ; 24(4): 565-571, 2021 04.
Article in English | MEDLINE | ID: mdl-33707754

ABSTRACT

Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
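A toy version of the first hypothesis, assuming a Gaussian posterior per weight and a simple Kalman-style update: the learning rate applied to the mean grows with the weight's variance, and that same variance sets the variability of the simulated postsynaptic potential (the second hypothesis). This is an illustrative sketch under those assumptions, not the paper's exact learning rule.

```python
# Sketch of the "synapses keep error bars" idea: each weight is summarized by a
# Gaussian posterior (mean, variance). The effective learning rate on the mean
# grows with the variance, so uncertain weights learn faster. The specific
# update below is a simple Kalman-style rule chosen for illustration.
import numpy as np

rng = np.random.default_rng(2)
w_true = 0.8            # "true" weight the synapse should infer
mu, var = 0.0, 1.0      # posterior mean and variance over the weight
obs_noise = 0.5 ** 2    # variance of each noisy observation of w_true
drift = 1e-3            # slow diffusion of the true weight keeps var > 0

for t in range(200):
    obs = w_true + np.sqrt(obs_noise) * rng.normal()
    gain = var / (var + obs_noise)      # learning rate scales with uncertainty
    mu = mu + gain * (obs - mu)
    var = (1.0 - gain) * var + drift

    # Second hypothesis in the abstract: PSP variability reports uncertainty.
    psp = mu + np.sqrt(var) * rng.normal()

print(f"estimate {mu:.3f} +/- {np.sqrt(var):.3f} (true {w_true})")
```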


Subject(s)
Brain/physiology , Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Algorithms , Animals , Bayes Theorem , Humans
5.
Nat Commun ; 11(1): 3845, 2020 07 31.
Article in English | MEDLINE | ID: mdl-32737295

ABSTRACT

Many experimental studies suggest that animals can rapidly learn to identify odors and predict the rewards associated with them. However, the underlying plasticity mechanism remains elusive. In particular, it is not clear how olfactory circuits achieve rapid, data-efficient learning with local synaptic plasticity. Here, we formulate olfactory learning as a Bayesian optimization process, then map the learning rules into a computational model of the mammalian olfactory circuit. The model is capable of odor identification from a small number of observations, while reproducing cellular plasticity commonly observed during development. We extend the framework to reward-based learning, and show that the circuit is able to rapidly learn odor-reward associations with a plausible neural architecture. These results deepen our theoretical understanding of unsupervised learning in the mammalian brain.


Subject(s)
Conditioning, Classical/physiology , Nerve Net , Neuronal Plasticity/physiology , Olfactory Pathways/physiology , Olfactory Perception/physiology , Smell/physiology , Animals , Bayes Theorem , Computer Simulation , Mammals , Neurons/cytology , Neurons/physiology , Odorants/analysis , Olfactory Bulb/physiology , Reward
6.
Trends Neurosci ; 43(6): 363-372, 2020 06.
Article in English | MEDLINE | ID: mdl-32459990

ABSTRACT

More often than not, action potentials fail to trigger neurotransmitter release. And even when neurotransmitter is released, the resulting change in synaptic conductance is highly variable. Given the energetic cost of generating and propagating action potentials, and the importance of information transmission across synapses, this seems both wasteful and inefficient. However, synaptic noise arising from variable transmission can improve, in certain restricted conditions, information transmission. Under broader conditions, it can improve information transmission per release, a quantity that is relevant given the energetic constraints on computing in the brain. Here we discuss the role, both positive and negative, synaptic noise plays in information transmission and computation in the brain.


Subject(s)
Synapses , Synaptic Transmission , Action Potentials , Humans , Neurotransmitter Agents
7.
Neuron ; 105(1): 165-179.e8, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31753580

ABSTRACT

Inhibitory neurons, which play a critical role in decision-making models, are often simplified as a single pool of non-selective neurons lacking connection specificity. This assumption is supported by observations in the primary visual cortex: inhibitory neurons are broadly tuned in vivo and show non-specific connectivity in slice. The selectivity of excitatory and inhibitory neurons within decision circuits and, hence, the validity of decision-making models are unknown. We simultaneously measured excitatory and inhibitory neurons in the posterior parietal cortex of mice judging multisensory stimuli. Surprisingly, excitatory and inhibitory neurons were equally selective for the animal's choice, both at the single-cell and population level. Further, both cell types exhibited similar changes in selectivity and temporal dynamics during learning, paralleling behavioral improvements. These observations, combined with modeling, argue against circuit architectures assuming non-selective inhibitory neurons. Instead, they argue for selective subnetworks of inhibitory and excitatory neurons that are shaped by experience to support expert decision-making.


Subject(s)
Decision Making/physiology , Learning/physiology , Nerve Net/physiology , Neurons/physiology , Animals , Glutamate Decarboxylase/genetics , Mice , Mice, Transgenic , Models, Neurological , Neural Inhibition/physiology , Parietal Lobe/physiology
8.
Nat Neurosci ; 22(11): 1761-1770, 2019 11.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subject(s)
Artificial Intelligence , Deep Learning , Neural Networks, Computer , Animals , Brain/physiology , Humans
9.
Intensive Care Med ; 45(9): 1190-1199, 2019 09.
Article in English | MEDLINE | ID: mdl-31297547

ABSTRACT

PURPOSE: Severe immune dysregulation is common in patients admitted to the intensive care unit (ICU) and is associated with adverse outcomes. Erythropoietin-stimulating agents (ESAs) have immune-modulating and anti-apoptotic effects. However, their safety and efficacy in critically ill patients remain uncertain. We evaluated whether ESAs, administered to critically unwell adult patients admitted to the ICU, reduced mortality at hospital discharge. METHODS: The search strategy was conducted according to a predetermined protocol and included OVID MEDLINE, OVID EMBASE and the Cochrane Central Register of Controlled Trials from inception until 20 May 2019. Publications were eligible for inclusion if they were randomized controlled trials (RCTs) that included adult patients admitted to an ICU, identified and reported a group receiving ESA therapy compared to a group not receiving ESA therapy, and reported mortality. There were no language restrictions. RESULTS: The systematic review included 21 studies with 5452 participants. In-hospital mortality, reported in 16 studies of which only one was at low risk of bias, was lower in the ESA group (276 of 2187 patients, 12.6%) than in the comparator group (339 of 2204 patients, 15.4%) [relative risk (RR) 0.82, 95% CI 0.71-0.94, P = 0.006, I2 = 0.0%]. The RRs of serious adverse events and thromboembolic events for the ESA versus comparator groups were similar: 1.11 (95% CI 0.94-1.31, P = 0.228, I2 = 66%) and 1.22 (95% CI 0.95-1.58, P = 0.086, I2 = 47%), respectively. CONCLUSIONS: In heterogeneous populations of critically ill adults, evidence from RCTs of mainly low or unclear quality suggests that ESA therapy may decrease mortality.
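The headline relative risk can be sanity-checked from the numbers quoted above by pooling the events into a single 2x2 table. This crude calculation reproduces the point estimate; the published confidence interval comes from a study-level meta-analysis and so differs slightly from the one computed here.

```python
# Quick check of the pooled relative risk quoted above, treating the 16 studies
# as one pooled 2x2 table. This is a crude illustration; the review itself
# meta-analyses study-level estimates, so the confidence interval will not
# match the published one exactly.
import math

esa_deaths, esa_n = 276, 2187
ctl_deaths, ctl_n = 339, 2204

rr = (esa_deaths / esa_n) / (ctl_deaths / ctl_n)
se_log_rr = math.sqrt(1/esa_deaths - 1/esa_n + 1/ctl_deaths - 1/ctl_n)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, crude 95% CI {lo:.2f}-{hi:.2f}")   # ~0.82 (0.71-0.95)
```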


Subject(s)
Critical Illness/therapy , Hematinics/adverse effects , Hematinics/pharmacology , Hematinics/therapeutic use , Humans , Intensive Care Units/organization & administration , Patient Safety
11.
Nat Hum Behav ; 1(11): 810-818, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29152591

ABSTRACT

Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.
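As a concrete illustration of the two quantities contrasted above, the sketch below computes, for a decision about the mean of a noisy sequence, both the perceived probability of being correct and the perceived uncertainty in the estimated mean. The Gaussian generative model and flat prior are assumptions made for the sketch, not the task's exact design.

```python
# Sketch of the two confidence-related quantities discussed above for a
# decision about the mean of a sequence: (i) the perceived probability of being
# correct and (ii) the perceived uncertainty in the estimated mean.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
true_mu, noise_sd, n_obs = 0.3, 1.0, 10
samples = true_mu + noise_sd * rng.normal(size=n_obs)

# Posterior over the mean under a flat prior: N(sample mean, noise_sd^2 / n).
post_mean = samples.mean()
post_sd = noise_sd / np.sqrt(n_obs)

choice = 1 if post_mean > 0 else -1                   # "is the mean positive?"
p_correct = norm.cdf(choice * post_mean / post_sd)    # prob. the choice is right
uncertainty = post_sd                                 # spread of the estimate
print(f"choice={choice:+d}, P(correct)={p_correct:.2f}, uncertainty={uncertainty:.2f}")
```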

12.
PLoS Comput Biol ; 13(4): e1005497, 2017 04.
Article in English | MEDLINE | ID: mdl-28419098

ABSTRACT

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can, in some cases, optimize robustness against noise.


Subject(s)
Models, Neurological , Nerve Net/physiology , Sensory Receptor Cells/physiology , Computational Biology , Computer Simulation
13.
Neuron ; 93(3): 491-507, 2017 Feb 08.
Article in English | MEDLINE | ID: mdl-28182905

ABSTRACT

The two basic processes underlying perceptual decisions (how neural responses encode stimuli, and how they inform behavioral choices) have mainly been studied separately. Thus, although many spatiotemporal features of neural population activity, or "neural codes," have been shown to carry sensory information, it is often unknown whether the brain uses these features for perception. To address this issue, we propose a new framework centered on redefining the neural code as the set of neural features that carry sensory information used by the animal to drive appropriate behavior; that is, the features that lie at the intersection of sensory and choice information. We show how this framework leads to a new statistical analysis of neural activity recorded during behavior that can identify such neural codes, and we discuss how to combine intersection-based analysis of neural recordings with intervention on neural activity to determine definitively whether specific neural activity features are involved in a task.


Subject(s)
Behavior, Animal/physiology , Brain/physiology , Optogenetics , Perception/physiology , Statistics as Topic , Animals , Choice Behavior
14.
Nat Neurosci ; 20(1): 98-106, 2017 01.
Article in English | MEDLINE | ID: mdl-27918530

ABSTRACT

The olfactory system faces a hard problem: on the basis of noisy information from olfactory receptor neurons (the neurons that transduce chemicals to neural activity), it must figure out which odors are present in the world. Odors almost never occur in isolation, and different odors excite overlapping populations of olfactory receptor neurons, so the central challenge of the olfactory system is to demix its input. Because of noise and the large number of possible odors, demixing is fundamentally a probabilistic inference task. We propose that the early olfactory system uses approximate Bayesian inference to solve it. The computations involve a dynamical loop between the olfactory bulb and the piriform cortex, with cortex explaining incoming activity from the olfactory receptor neurons in terms of a mixture of odors. The model is compatible with known anatomy and physiology, including pattern decorrelation, and it performs better than other models at demixing odors.


Subject(s)
Odorants , Olfactory Bulb/physiology , Olfactory Pathways/physiology , Olfactory Receptor Neurons/physiology , Piriform Cortex/physiology , Animals , Bayes Theorem , Mice , Neurons/physiology
15.
Nat Neurosci ; 20(1): 6-8, 2016 12 27.
Article in English | MEDLINE | ID: mdl-28025982
16.
PLoS Comput Biol ; 12(12): e1005110, 2016 12.
Article in English | MEDLINE | ID: mdl-27997544

ABSTRACT

Zipf's law, which states that the probability of an observation is inversely proportional to its rank, has been observed in many domains. While there are models that explain Zipf's law in each of them, those explanations are typically domain specific. Recently, methods from statistical physics were used to show that a fairly broad class of models does provide a general explanation of Zipf's law. This explanation rests on the observation that real-world data are often generated from underlying causes, known as latent variables. Those latent variables mix together multiple models that do not obey Zipf's law, giving a model that does. Here we extend that work both theoretically and empirically. Theoretically, we provide a far simpler and more intuitive explanation of Zipf's law, which at the same time considerably extends the class of models to which this explanation can apply. Furthermore, we also give methods for verifying whether this explanation applies to a particular dataset. Empirically, these advances allowed us to extend this explanation to important classes of data, including word frequencies (the first domain in which Zipf's law was discovered), data with variable sequence length, and multi-neuron spiking activity.
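The basic empirical check for Zipf's law can be written in a few lines: sort the observation counts and ask whether log-frequency falls off linearly in log-rank with slope close to -1. In the sketch below the toy data are drawn from an ideal Zipf source purely to exercise the check; it is not the paper's verification method, and any real dataset (word counts, spike patterns) could be substituted.

```python
# Basic rank-frequency check for Zipf's law on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_types, n_tokens = 1000, 200_000

# Ideal Zipf source: P(type k) proportional to 1/k.
p = 1.0 / np.arange(1, n_types + 1)
p /= p.sum()
counts = rng.multinomial(n_tokens, p)

freq = np.sort(counts[counts > 0])[::-1]
rank = np.arange(1, freq.size + 1)

# Fit log f = a + b log r over the well-sampled part of the curve.
mask = freq >= 10
b, a = np.polyfit(np.log(rank[mask]), np.log(freq[mask]), 1)
print(f"fitted rank-frequency slope: {b:.2f} (Zipf's law predicts -1)")
```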


Subject(s)
Models, Theoretical , Action Potentials , Databases, Factual , Entropy , Language , Models, Neurological
17.
PLoS Comput Biol ; 11(10): e1004519, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26517475

ABSTRACT

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.


Subject(s)
Bayes Theorem , Choice Behavior , Decision Support Techniques , Heuristics , Models, Statistical , Visual Perception , Computer Simulation , Humans
18.
Nat Neurosci ; 17(10): 1410-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25195105

ABSTRACT

Computational strategies used by the brain strongly depend on the amount of information that can be stored in population activity, which in turn strongly depends on the pattern of noise correlations. In vivo, noise correlations tend to be positive and proportional to the similarity in tuning properties. Such correlations are thought to limit information, which has led to the suggestion that decorrelation increases information. In contrast, we found, analytically and numerically, that decorrelation does not imply an increase in information. Instead, the only information-limiting correlations are what we refer to as differential correlations: correlations proportional to the product of the derivatives of the tuning curves. Unfortunately, differential correlations are likely to be very small and buried under correlations that do not limit information, making them particularly difficult to detect. We found, however, that the effect of differential correlations on information can be detected with relatively simple decoders.
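The information-limiting effect of differential correlations is easy to reproduce numerically: adding a covariance term proportional to the outer product of the tuning-curve derivatives caps the linear Fisher information at 1/epsilon however large the population, whereas independent noise alone lets information grow with population size. The tuning curves and noise model below are arbitrary choices made for the demonstration, not those of the paper.

```python
# Numerical illustration of "differential correlations": a covariance component
# eps * f'(s) f'(s)^T caps the linear Fisher information at 1/eps regardless of
# how many neurons are recorded.
import numpy as np

def linear_fisher(fprime, cov):
    return fprime @ np.linalg.solve(cov, fprime)

eps = 0.01
s = 0.0
for n in (50, 200, 800):
    centers = np.linspace(-np.pi, np.pi, n, endpoint=False)
    f = np.exp(np.cos(s - centers))          # von Mises-like tuning curves
    fprime = -np.sin(s - centers) * f        # derivative w.r.t. the stimulus
    base_cov = np.diag(1.0 + f)              # independent Poisson-like noise
    diff_cov = base_cov + eps * np.outer(fprime, fprime)
    print(n,
          f"independent noise: {linear_fisher(fprime, base_cov):8.1f}",
          f"with diff. corr.: {linear_fisher(fprime, diff_cov):6.1f}",
          f"bound 1/eps = {1/eps:.0f}")
```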


Subject(s)
Models, Neurological , Neurons/physiology , Brain/cytology , Computer Simulation , Humans , Statistics as Topic
19.
Conscious Cogn ; 26: 13-23, 2014 May.
Article in English | MEDLINE | ID: mdl-24650632

ABSTRACT

In a range of contexts, individuals arrive at collective decisions by sharing confidence in their judgements. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the 'confidence heuristic'. We tested two ways of implementing the confidence heuristic in the context of a collective perceptual decision-making task: either directly, by opting for the judgement made with higher confidence, or indirectly, by opting for the faster judgement, exploiting an inverse correlation between confidence and reaction time. We found that the success of these heuristics depends on how similar individuals are in terms of the reliability of their judgements and, more importantly, that for dissimilar individuals such heuristics are dramatically inferior to interaction. Interaction allows individuals to alleviate, but not fully resolve, differences in the reliability of their judgements. We discuss the implications of these findings for models of confidence and collective decision-making.


Subject(s)
Decision Making/physiology , Interpersonal Relations , Judgment/physiology , Negotiating , Adult , Humans , Male , Negotiating/psychology , Young Adult
20.
Psychol Rev ; 121(1): 96-123, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24490790

ABSTRACT

We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
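For orientation, here is a generic Bayesian online change-point filter (a Beta-Bernoulli run-length posterior) applied to a stepwise nonstationary Bernoulli sequence. It captures the flavor of change-point-based tracking of a hidden probability, but it is not the paper's two-parameter model; the hazard rate and prior are assumptions made for the sketch.

```python
# Generic Bayesian online change-point filter for a stepwise nonstationary
# Bernoulli parameter (run-length posterior with Beta-Bernoulli observations).
import numpy as np

rng = np.random.default_rng(6)
p_true = np.r_[np.full(100, 0.2), np.full(100, 0.8)]   # one hidden step
obs = rng.random(200) < p_true

hazard = 1 / 50                    # prior probability of a change per trial
a0 = b0 = 1.0                      # Beta prior on the Bernoulli parameter
run_prob = np.array([1.0])         # posterior over run length (time since change)
a, b = np.array([a0]), np.array([b0])
estimates = []
for x in obs:
    pred = a / (a + b)                                   # predictive P(x=1) per run
    like = pred if x else 1.0 - pred
    growth = run_prob * like * (1 - hazard)              # no change at this step
    change = (run_prob * like * hazard).sum()            # change: run length resets
    run_prob = np.r_[change, growth]
    run_prob /= run_prob.sum()
    a = np.r_[a0, a + x]                                 # update Beta counts per run
    b = np.r_[b0, b + (1 - x)]
    estimates.append((run_prob * a / (a + b)).sum())     # posterior mean of p

print("estimate near trial 90 :", round(estimates[89], 2), "(true 0.2)")
print("estimate near trial 190:", round(estimates[189], 2), "(true 0.8)")
```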


Subject(s)
Decision Making/physiology , Models, Statistical , Neurobiology , Perception/physiology , Probability , Algorithms , Binomial Distribution , Computer Simulation , Data Interpretation, Statistical , Humans , Neuropsychological Tests/statistics & numerical data , Probability Learning , Stochastic Processes