Results 1 - 20 of 87
1.
Nat Commun ; 15(1): 5856, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997276

ABSTRACT

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) learning, whereby certain units signal reward prediction errors (RPE). The TD algorithm has traditionally been mapped onto the dopaminergic system, as firing properties of dopamine neurons can resemble RPEs. However, certain predictions of TD learning are inconsistent with experimental results, and previous implementations of the algorithm have made unscalable assumptions regarding stimulus-specific fixed temporal bases. We propose an alternate framework to describe dopamine signaling in the brain, FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, dopamine release is similar, but not identical, to RPE, leading to predictions that contrast with those of TD. While FLEX itself is a general theoretical framework, we describe a specific, biophysically plausible implementation, the results of which are consistent with a preponderance of both existing and reanalyzed experimental data.
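
For context, the RPE at the heart of the TD framework is simply the mismatch between the value predicted in the current state and the reward received plus the discounted value predicted next. The sketch below is a generic tabular TD(0) illustration of that idea, not the FLEX model or the paper's implementation; the states, learning rate, and discount factor are illustrative assumptions.

```python
# Minimal tabular TD(0) sketch: the reward prediction error (RPE) is the
# difference between received plus discounted predicted reward and the
# current prediction. States and parameters here are purely illustrative.

gamma = 0.9   # discount factor (assumed)
alpha = 0.1   # learning rate (assumed)

V = {"cue": 0.0, "delay": 0.0, "terminal": 0.0}

def td_update(V, s, s_next, r):
    """One TD(0) step: compute the RPE delta and nudge V[s] toward the target."""
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha * delta
    return delta

# A single trial: cue -> delay -> terminal, with reward delivered on the last transition.
for trial in range(200):
    td_update(V, "cue", "delay", 0.0)
    td_update(V, "delay", "terminal", 1.0)

print(V)  # value propagates backward from the rewarded transition to the cue
```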


Subject(s)
Algorithms , Dopamine , Dopaminergic Neurons , Reward , Dopaminergic Neurons/physiology , Dopaminergic Neurons/metabolism , Dopamine/metabolism , Animals , Learning/physiology , Models, Neurological , Humans , Reinforcement, Psychology , Brain/physiology , Brain/metabolism , Neuronal Plasticity/physiology , Time Factors
2.
Sci Adv ; 10(26): eadl0030, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38924398

ABSTRACT

How can short-lived molecules selectively maintain the potentiation of activated synapses to sustain long-term memory? Here, we find kidney and brain expressed adaptor protein (KIBRA), a postsynaptic scaffolding protein genetically linked to human memory performance, complexes with protein kinase Mzeta (PKMζ), anchoring the kinase's potentiating action to maintain late-phase long-term potentiation (late-LTP) at activated synapses. Two structurally distinct antagonists of KIBRA-PKMζ dimerization disrupt established late-LTP and long-term spatial memory, yet neither measurably affects basal synaptic transmission. Neither antagonist affects PKMζ-independent LTP or memory that are maintained by compensating PKCs in ζ-knockout mice; thus, both agents require PKMζ for their effect. KIBRA-PKMζ complexes maintain 1-month-old memory despite PKMζ turnover. Therefore, it is not PKMζ alone, nor KIBRA alone, but the continual interaction between the two that maintains late-LTP and long-term memory.


Subject(s)
Intracellular Signaling Peptides and Proteins , Long-Term Potentiation , Mice, Knockout , Protein Kinase C , Animals , Protein Kinase C/metabolism , Protein Kinase C/genetics , Mice , Humans , Intracellular Signaling Peptides and Proteins/metabolism , Intracellular Signaling Peptides and Proteins/genetics , Memory/physiology , Memory, Long-Term/physiology , Synapses/metabolism , Synapses/physiology , Protein Binding , Phosphoproteins
3.
Res Sq ; 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37790466

ABSTRACT

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning they signal the difference between expected future rewards and actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, termed FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal that helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
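
The "fixed temporal basis" criticized here is commonly implemented as a tapped delay line (the "complete serial compound"), in which every post-cue time step of a given stimulus has its own dedicated basis element and weight. The sketch below is a generic textbook-style illustration of that assumption and of how it makes the RPE migrate from reward time to cue time; it is not the FLEX model, and all parameter values are assumptions.

```python
import numpy as np

# Tapped-delay-line ("complete serial compound") TD model: each time step after
# the cue gets its own basis element and weight. This is the stimulus-specific
# fixed temporal basis discussed in the abstract; all numbers are illustrative.

T = 20                      # time steps per trial (assumed)
cue_time, reward_time = 2, 15
gamma, alpha = 0.98, 0.1

w = np.zeros(T)             # one weight per post-cue delay element

def run_trial(w):
    """One trial; returns the RPE at every time step and updates w in place."""
    x = np.zeros((T, T))
    for t in range(cue_time, T):
        x[t, t - cue_time] = 1.0        # element indexed by time since the cue
    r = np.zeros(T)
    r[reward_time] = 1.0
    delta = np.zeros(T)
    for t in range(T - 1):
        V_t, V_next = w @ x[t], w @ x[t + 1]
        delta[t] = r[t] + gamma * V_next - V_t
        w += alpha * delta[t] * x[t]    # update only the currently active element
    return delta

for _ in range(500):
    delta = run_trial(w)

# After training, the positive RPE sits at cue onset and is near zero at the
# reward time, precisely because each intervening delay has its own element.
print(np.round(delta, 2))
```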

5.
eNeuro ; 9(3)2022.
Article in English | MEDLINE | ID: mdl-35443991

ABSTRACT

Activity-dependent modifications of synaptic efficacies are a cellular substrate of learning and memory. Experimental evidence shows that these modifications are synapse specific and that the long-lasting effects are associated with the sustained increase in concentration of specific proteins such as PKMζ. However, such proteins are likely to diffuse away from their initial synaptic location and spread out to neighboring synapses, potentially compromising synapse specificity. In this article, we address the issue of synapse specificity during memory maintenance. Assuming that the long-term maintenance of synaptic plasticity is accomplished by a molecular switch, we carry out analytical calculations and perform simulations using the reaction-diffusion package in NEURON to determine the limits of synapse specificity during maintenance. Moreover, we explore the effects of the diffusion and degradation rates of proteins and of the geometrical characteristics of dendritic spines on synapse specificity. We conclude that the necessary conditions for synaptic specificity during maintenance require that molecular switches reside in dendritic spines. Even when the molecular switch resides in spines, the requirement for synaptic specificity still imposes strong limits on the diffusion and turnover rates of maintenance molecules, as well as on the morphological properties of dendritic spines. These constraints are quite general and apply to most existing models suggested for maintenance. The parameter values can be experimentally evaluated, and if they do not fit the appropriate predicted range, the validity of this class of maintenance models would be challenged.
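
To see why diffusion threatens synapse specificity, consider a maintenance protein produced at a point on a dendrite, diffusing with coefficient D and removed at rate k: at steady state its concentration falls off as exp(-x/λ) with length constant λ = sqrt(D/k). The back-of-the-envelope sketch below, with illustrative values of D and protein half-life that are not taken from the paper, shows that λ easily exceeds typical inter-synapse spacing, which is the intuition behind confining the switch to spines.

```python
import math

# Rough estimate of the spread of a dendritic "maintenance" protein along a
# 1-D cable: steady-state profile ~ exp(-x / lambda), lambda = sqrt(D / k).
# For synapse specificity, lambda should be smaller than the spacing between
# synapses. All numbers below are illustrative assumptions.

def length_constant(D_um2_per_s, turnover_rate_per_s):
    """Steady-state spatial length constant (micrometers)."""
    return math.sqrt(D_um2_per_s / turnover_rate_per_s)

D = 0.5  # um^2/s, assumed cytoplasmic diffusion coefficient
for half_life_h in (0.5, 2.0, 10.0):
    k = math.log(2) / (half_life_h * 3600.0)   # degradation rate in 1/s
    lam = length_constant(D, k)
    print(f"half-life {half_life_h:4.1f} h -> length constant ~ {lam:6.1f} um")

# Even with modest diffusion, slow turnover gives length constants of tens to
# hundreds of micrometers along the dendrite, far larger than inter-synapse
# spacing, hence the argument for switches confined within spines whose necks
# restrict exchange with the dendritic shaft.
```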


Subject(s)
Long-Term Potentiation , Neuronal Plasticity , Dendritic Spines/metabolism , Diffusion , Hippocampus , Long-Term Potentiation/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/metabolism
6.
J Comput Neurosci ; 50(1): 121-132, 2022 02.
Article in English | MEDLINE | ID: mdl-34601665

ABSTRACT

Recurrent neural networks of spiking neurons can exhibit long-lasting and even persistent activity. Such networks, however, are often not robust and exhibit spike and firing-rate statistics that are inconsistent with experimental observations. To overcome this problem, most previous models had to assume that recurrent connections are dominated by slower NMDA-type excitatory receptors. Usually, the single neurons within these networks are very simple leaky integrate-and-fire neurons or other low-dimensional model neurons. However, real neurons are much more complex and exhibit a plethora of active conductances that are recruited in both the sub- and suprathreshold regimes. Here we show that by including a small number of additional active conductances we can produce recurrent networks that are both more robust and exhibit firing-rate statistics that are more consistent with experimental results. We show that this holds both for bistable recurrent networks, which are thought to underlie working memory, and for slowly decaying networks, which might underlie the estimation of interval timing. We also show that by including these conductances, such networks can be trained with a simple learning rule to predict temporal intervals that are an order of magnitude longer than those that can be trained in networks of leaky integrate-and-fire neurons.
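
As a toy illustration of how an additional slow, regenerative conductance can hold activity after the driving input ends (this is emphatically not the paper's network model, and every parameter below is invented), the sketch simulates a single passive compartment plus one slowly activating inward conductance: a brief current pulse switches the cell into a self-sustained depolarized state.

```python
import numpy as np

# Toy single-compartment demonstration: a slow inward conductance with a
# sigmoidal activation curve keeps the membrane depolarized long after a
# transient input ends. All parameters are invented for illustration.

C, g_L, E_L = 200.0, 10.0, -70.0          # pF, nS, mV
g_slow, E_slow, tau_m = 6.0, 0.0, 300.0   # nS, mV, ms (slow inward conductance)

def m_inf(V):
    """Steady-state activation of the slow conductance (sigmoid around -55 mV)."""
    return 1.0 / (1.0 + np.exp(-(V + 55.0) / 2.0))

dt, T = 0.1, 2000.0                        # ms
t = np.arange(0.0, T, dt)
I_ext = np.where((t > 200.0) & (t < 400.0), 400.0, 0.0)   # 200 ms, 400 pA pulse

V, m = E_L, 0.0
trace = np.empty_like(t)
for i in range(len(t)):
    I_leak = g_L * (V - E_L)               # pA
    I_slow = g_slow * m * (V - E_slow)     # pA (negative = inward when V < E_slow)
    V += dt * (-(I_leak + I_slow) + I_ext[i]) / C
    m += dt * (m_inf(V) - m) / tau_m
    trace[i] = V

# Before the pulse the cell rests near -70 mV; long after the pulse it stays in
# a self-sustained depolarized state near -44 mV, a single-cell analogue of
# persistent activity that does not rely on network recurrence alone.
print(f"V(100 ms) = {trace[int(100 / dt)]:.1f} mV, V(1900 ms) = {trace[int(1900 / dt)]:.1f} mV")
```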


Subject(s)
Models, Neurological , Neurons , Action Potentials/physiology , Learning , Neural Networks, Computer , Neuronal Plasticity/physiology , Neurons/physiology
7.
Front Comput Neurosci ; 15: 640235, 2021.
Article in English | MEDLINE | ID: mdl-33732128

ABSTRACT

Traditional synaptic plasticity experiments and models depend on tight temporal correlations between pre- and postsynaptic activity. These tight temporal correlations, on the order of tens of milliseconds, are incompatible with significantly longer behavioral time scales, and as such might not be able to account for plasticity induced by behavior. Indeed, recent findings in hippocampus suggest that rapid, bidirectional synaptic plasticity which modifies place fields in CA1 operates at behavioral time scales. These experimental results suggest that presynaptic activity generates synaptic eligibility traces both for potentiation and depression, which last on the order of seconds. These traces can be converted to changes in synaptic efficacies by the activation of an instructive signal that depends on naturally occurring or experimentally induced plateau potentials. We have developed a simple mathematical model that is consistent with these observations. This model can be fully analyzed to find the fixed points of induced place fields and how these fixed points depend on system parameters such as the size and shape of presynaptic place fields, the animal's velocity during induction, and the parameters of the plasticity rule. We also make predictions about the convergence time to these fixed points, both for induced and pre-existing place fields.
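
A minimal sketch of the trace-plus-instructive-signal idea described above: presynaptic activity leaves separate potentiation and depression eligibility traces that decay over seconds, and a later plateau potential reads out whatever is left of each. The trace shapes, time constants, and amplitudes below are illustrative assumptions, not the fitted parameters of the paper's model.

```python
import numpy as np

# Plasticity driven by seconds-long eligibility traces converted into weight
# changes by an instructive signal (e.g., a plateau potential). Trace shapes,
# time constants, and gains are illustrative only.

dt = 0.01                      # s
t = np.arange(0.0, 10.0, dt)

tau_pot, tau_dep = 2.0, 1.0    # s: potentiation / depression trace time constants (assumed)
A_pot, A_dep = 1.0, 0.35       # relative trace amplitudes (assumed)

pre_spike_times = [3.0]        # presynaptic activity (e.g., a place cell firing at 3 s)
plateau_time = 5.0             # instructive plateau potential at 5 s

def trace(spike_times, tau, amp):
    """Eligibility trace: jumps at each presynaptic spike, decays exponentially."""
    x = np.zeros_like(t)
    for s in spike_times:
        x[t >= s] += amp * np.exp(-(t[t >= s] - s) / tau)
    return x

e_pot = trace(pre_spike_times, tau_pot, A_pot)
e_dep = trace(pre_spike_times, tau_dep, A_dep)

# The instructive signal reads out whatever trace remains at the plateau time;
# the net weight change is the difference of the two traces at that moment.
i_plateau = int(plateau_time / dt)
dw = e_pot[i_plateau] - e_dep[i_plateau]
print(f"pre-to-plateau interval = {plateau_time - pre_spike_times[0]:.1f} s, dw = {dw:+.3f}")
```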

8.
Elife ; 102021 03 18.
Article in English | MEDLINE | ID: mdl-33734085

ABSTRACT

Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.


Subject(s)
Action Potentials/physiology , Learning/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Biophysical Phenomena , Spatio-Temporal Analysis
9.
PLoS One ; 14(12): e0225756, 2019.
Article in English | MEDLINE | ID: mdl-31860640

ABSTRACT

Current models of word production in Broca's area (i.e., left ventrolateral prefrontal cortex, VLPFC) posit that sequential and staggered semantic, lexical, phonological, and articulatory processes precede articulation. Using millisecond-resolution intracranial recordings, we evaluated spatiotemporal dynamics and high-frequency functional interconnectivity between left VLPFC regions during single-word production. Through the systematic variation of retrieval, selection, and phonological loads, we identified specific activation profiles and functional coupling patterns between these regions that fit within current psycholinguistic theories of word production. However, network interactions underpinning these processes activate in parallel (not sequentially), while the processes themselves are indexed by specific changes in network state. We found evidence suggesting that pars orbitalis is coupled with pars triangularis during lexical retrieval, while lexical selection is terminated via coupled activity with M1 at articulation onset. Taken together, this work reveals that speech production relies on very specific inter-regional couplings in rapid sequence in the language-dominant hemisphere.


Subject(s)
Broca Area/physiology , Nerve Net/physiology , Vocabulary , Acoustic Stimulation , Adult , Female , Gamma Rhythm/physiology , Humans , Language , Male , Reaction Time , Speech/physiology
10.
J Vis ; 17(8): 6, 2017 07 01.
Article in English | MEDLINE | ID: mdl-28672372

ABSTRACT

Estimation of perceptual variables is imprecise and prone to errors. Although the properties of these perceptual errors are well characterized, the physiological basis for these errors is unknown. One previously proposed explanation for these errors is the trial-by-trial variability of the responses of sensory neurons that encode the percept. In order to test this hypothesis, we developed a mathematical formalism that allows us to find the statistical characteristics of the physiological system responsible for perceptual errors, as well as the time scale over which the visual information is integrated. Crucially, these characteristics can be estimated solely from a behavioral experiment performed here. We demonstrate that the physiological basis of perceptual error has a constant level of noise (i.e., independent of stimulus intensity and duration). By comparing these results to previous physiological measurements, we show that perceptual errors cannot be due to the variability during the encoding stage. We also find that the time window over which perceptual evidence is integrated lasts no more than ∼230 ms. Finally, we discuss sources of error that may be consistent with our behavioral measurements.


Subject(s)
Contrast Sensitivity/physiology , Perceptual Disorders/physiopathology , Sensory Receptor Cells/physiology , Visual Perception/physiology , Bayes Theorem , Humans , Models, Theoretical
11.
Neurobiol Learn Mem ; 138: 135-144, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27417578

ABSTRACT

PKMζ is an autonomously active PKC isoform that is thought to maintain both LTP and long-term memory. Whereas persistent increases in PKMζ protein sustain the kinase's action in LTP, the molecular mechanism for the persistent action of PKMζ during long-term memory has not been characterized. PKMζ inhibitors disrupt spatial memory when introduced into the dorsal hippocampus from 1 day to 1 month after training. Therefore, if the mechanisms of PKMζ's persistent action in LTP maintenance and long-term memory were similar, persistent increases in PKMζ would last for the duration of the memory, far longer than most other learning-induced gene products. Here we find that spatial conditioning by aversive active place avoidance or appetitive radial arm maze induces PKMζ increases in dorsal hippocampus that persist from 1 day to 1 month, coinciding with the strength and duration of memory retention. Suppressing the increase by intrahippocampal injections of PKMζ-antisense oligodeoxynucleotides prevents the formation of long-term memory. Thus, similar to LTP maintenance, the persistent increase in the amount of autonomously active PKMζ sustains the kinase's action during long-term and remote spatial memory maintenance.


Subject(s)
Hippocampus/metabolism , Long-Term Potentiation/physiology , Memory, Long-Term/physiology , Protein Kinase C/metabolism , Spatial Memory/physiology , Animals , Avoidance Learning/physiology , Conditioning, Operant/physiology , Excitatory Postsynaptic Potentials , Male , Rats , Rats, Long-Evans , Retention, Psychology/physiology
12.
Article in English | MEDLINE | ID: mdl-28018206

ABSTRACT

The ability to maximize reward and avoid punishment is essential for animal survival. Reinforcement learning (RL) refers to the algorithms used by biological or artificial systems to learn how to maximize reward or avoid negative outcomes based on past experiences. While RL is also important in machine learning, the types of mechanistic constraints encountered by biological machinery might differ from those encountered by artificial systems. Two major problems encountered by RL are how to relate a stimulus with a reinforcing signal that is delayed in time (temporal credit assignment), and how to stop learning once the target behaviors are attained (stopping rule). To address the first problem, synaptic eligibility traces were introduced, bridging the temporal gap between a stimulus and its reward. Although these were mere theoretical constructs, recent experiments have provided evidence of their existence. These experiments also reveal that the presence of specific neuromodulators converts the traces into changes in synaptic efficacy. A mechanistic implementation of the stopping rule usually assumes the inhibition of the reward nucleus; however, recent experimental results have shown that learning terminates at the appropriate network state even in setups where the reward nucleus cannot be inhibited. In an effort to describe a learning rule that solves the temporal credit assignment problem and implements a biologically plausible stopping rule, we proposed a model based on two separate synaptic eligibility traces, one for long-term potentiation (LTP) and one for long-term depression (LTD), each obeying different dynamics and having different effective magnitudes. The model has been shown to successfully generate stable learning in recurrent networks. Although the model assumes the presence of a single neuromodulator, evidence indicates that there are different neuromodulators for expressing the different traces. What could be the role of different neuromodulators in expressing the LTP and LTD traces? Here we expand on our previous model to include several neuromodulators, illustrate through various examples how they differentially contribute to learning reward timing within a wide set of training paradigms, and propose further roles that multiple neuromodulators can play in encoding additional information about the rewarding signal.
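
The stopping-rule idea can be caricatured in a few lines: if the LTP and LTD traces converted by the same reinforcement signal depend differently on the network's own state, learning halts once their contributions cancel, with no need to silence the reward signal itself. The dependencies chosen below (LTP trace set by the cue, LTD trace scaling with the weight-dependent postsynaptic response) are deliberately simplified assumptions, not the paper's biophysical model.

```python
# Abstract caricature of a two-trace learning rule with a built-in stopping rule:
# an LTP trace and an LTD trace with different effective magnitudes are both
# converted into weight changes by the same reinforcement signal, and learning
# stops when their contributions cancel. All values are illustrative.

eta = 0.05    # learning rate (assumed)
a_ltp = 1.0   # LTP trace amplitude at reward time, driven by the cue (assumed)
b_ltd = 0.4   # LTD trace amplitude per unit of postsynaptic response (assumed)

w = 0.1
for trial in range(500):
    post = w                          # postsynaptic response grows with the weight
    T_ltp, T_ltd = a_ltp, b_ltd * post
    dw = eta * (T_ltp - T_ltd)        # the reward converts both traces at once
    w += dw

# The weight converges to the point where the two traces cancel
# (here w* = a_ltp / b_ltd = 2.5), so learning terminates without any need
# to inhibit the reinforcement signal.
print(round(w, 3))
```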

13.
Elife ; 52016 05 17.
Article in English | MEDLINE | ID: mdl-27187150

ABSTRACT

PKMζ is a persistently active PKC isoform proposed to maintain late-LTP and long-term memory. But late-LTP and memory are maintained without PKMζ in PKMζ-null mice. Two hypotheses can account for these findings. First, PKMζ is unimportant for LTP or memory. Second, PKMζ is essential for late-LTP and long-term memory in wild-type mice, and PKMζ-null mice recruit compensatory mechanisms. We find that whereas PKMζ persistently increases in LTP maintenance in wild-type mice, PKCι/λ, a gene-product closely related to PKMζ, persistently increases in LTP maintenance in PKMζ-null mice. Using a pharmacogenetic approach, we find PKMζ-antisense in hippocampus blocks late-LTP and spatial long-term memory in wild-type mice, but not in PKMζ-null mice without the target mRNA. Conversely, a PKCι/λ-antagonist disrupts late-LTP and spatial memory in PKMζ-null mice but not in wild-type mice. Thus, whereas PKMζ is essential for wild-type LTP and long-term memory, persistent PKCι/λ activation compensates for PKMζ loss in PKMζ-null mice.


Subject(s)
Hippocampus/physiology , Long-Term Potentiation , Memory, Long-Term , Protein Kinase C/metabolism , Animals , Mice , Mice, Knockout , Pharmacogenetics , Spatial Memory
14.
J Comput Neurosci ; 39(3): 235-54, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26334992

ABSTRACT

Neuronal circuits can learn and replay firing patterns evoked by sequences of sensory stimuli. After training, a brief cue can trigger a spatiotemporal pattern of neural activity similar to that evoked by a learned stimulus sequence. Network models show that such sequence learning can occur through the shaping of feedforward excitatory connectivity via long term plasticity. Previous models describe how event order can be learned, but they typically do not explain how precise timing can be recalled. We propose a mechanism for learning both the order and precise timing of event sequences. In our recurrent network model, long term plasticity leads to the learning of the sequence, while short term facilitation enables temporally precise replay of events. Learned synaptic weights between populations determine the time necessary for one population to activate another. Long term plasticity adjusts these weights so that the trained event times are matched during playback. While we chose short term facilitation as a time-tracking process, we also demonstrate that other mechanisms, such as spike rate adaptation, can fulfill this role. We also analyze the impact of trial-to-trial variability, showing how observational errors as well as neuronal noise result in variability in learned event times. The dynamics of the playback process determines how stochasticity is inherited in learned sequence timings. Future experiments that characterize such variability can therefore shed light on the neural mechanisms of sequence learning.
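
The role of short-term facilitation as a time-tracking process can be sketched as follows: while an upstream population fires, its facilitation variable grows, so the drive it delivers downstream ramps up, and the downstream population activates when that drive crosses threshold. Because stronger learned weights reach threshold sooner, adjusting the weights adjusts the recalled interval. The rate-based facilitation dynamics and all numbers below are a generic Tsodyks-Markram-style caricature, not the paper's fitted model.

```python
# Short-term facilitation as a clock: during sustained upstream firing, the
# facilitation variable u grows, so the downstream drive (~ W * u * rate) ramps
# up; the downstream population "activates" when that drive crosses a threshold.
# Larger learned weights W cross sooner. All parameters are assumed.

dt = 1.0                 # ms
U, tau_f = 0.05, 1000.0  # baseline utilization and facilitation time constant (assumed)
rate = 0.05              # upstream firing rate in spikes/ms (50 Hz, assumed)
theta = 1.0              # downstream activation threshold (arbitrary units)

def time_to_activation(W, t_max=3000.0):
    """Time (ms) at which the facilitated drive W*u*rate first exceeds theta."""
    u = U
    for step in range(int(t_max / dt)):
        u += dt * (-(u - U) / tau_f + U * (1.0 - u) * rate)  # rate-based facilitation
        if W * u * rate >= theta:
            return step * dt
    return float("inf")

for W in (30.0, 60.0, 150.0):
    print(f"W = {W:5.0f}  ->  downstream activates after ~{time_to_activation(W):5.0f} ms")
```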


Subject(s)
Computer Simulation , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Time , Humans , Learning/physiology , Machine Learning , Neuronal Plasticity , Pyramidal Cells/physiology , Synapses/physiology , Time Perception/physiology
15.
J Neurosci ; 35(37): 12659-72, 2015 Sep 16.
Article in English | MEDLINE | ID: mdl-26377457

ABSTRACT

Many actions performed by animals and humans depend on an ability to learn, estimate, and produce temporal intervals of behavioral relevance. Exemplifying such learning of cued expectancies is the observation of reward-timing activity in the primary visual cortex (V1) of rodents, wherein neural responses to visual cues come to predict the time of future reward as behaviorally experienced in the past. These reward-timing responses exhibit significant heterogeneity in at least three qualitatively distinct classes: sustained increase or sustained decrease in firing rate until the time of expected reward, and a class of cells that reach a peak in firing at the expected delay. We elaborate upon our existing model by including inhibitory and excitatory units while imposing simple connectivity rules to demonstrate what role these inhibitory elements and the simple architectures play in sculpting the response dynamics of the network. We find that simply adding inhibition is not sufficient for obtaining the different distinct response classes, and that a broad distribution of inhibitory projections is necessary for obtaining peak-type responses. Furthermore, although changes in connection strength that modulate the effects of inhibition onto excitatory units have a strong impact on the firing rate profile of these peaked responses, the network exhibits robustness in its overall ability to predict the expected time of reward. Finally, we demonstrate how the magnitude of expected reward can be encoded at the expected delay in the network and how peaked responses express this reward expectancy. SIGNIFICANCE STATEMENT: Heterogeneity in single-neuron responses is a common feature of neuronal systems, although in theoretical approaches it is sometimes treated as a nuisance and seldom considered to convey a distinct aspect of the signal. In this study, we focus on the heterogeneous responses in the primary visual cortex of rodents trained with a predictable delayed reward time. We describe under what conditions this heterogeneity can arise by self-organization, and what information it can convey. This study, while focusing on a specific system, provides insight into how heterogeneity can arise in general while also shedding light on mechanisms of reinforcement learning using realistic biological assumptions.


Subject(s)
Computer Simulation , Learning/physiology , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Reinforcement, Psychology , Reward , Visual Cortex/physiology , Animals , Membrane Potentials , Models, Neurological , Neuronal Plasticity , Synaptic Transmission , Visual Cortex/ultrastructure
16.
Learn Mem ; 22(7): 344-53, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26077687

ABSTRACT

Memories that last a lifetime are thought to be stored, at least in part, as persistent enhancement of the strength of particular synapses. The synaptic mechanism of these persistent changes, late long-term potentiation (L-LTP), depends on the state and number of specific synaptic proteins. Synaptic proteins, however, have limited dwell times due to molecular turnover and diffusion, leading to a fundamental question: how can this transient molecular machinery store memories lasting a lifetime? Because the persistent changes in efficacy are synapse-specific, the underlying molecular mechanisms must to a degree reside locally in synapses. Extensive experimental evidence points to atypical protein kinase C (aPKC) isoforms as key components involved in memory maintenance. Furthermore, it is evident that establishing long-term memory requires new protein synthesis. However, a comprehensive model has not been developed describing how these components work to preserve synaptic efficacies over time. We propose a molecular model that can account for key empirical properties of L-LTP, including its protein synthesis dependence, dependence on aPKCs, and synapse-specificity. Simulations and empirical data suggest that either of the two aPKC subtypes in hippocampal neurons, PKMζ and PKCι/λ, can maintain L-LTP, making the system more robust. Given genetic compensation at the level of synthesis of these PKC subtypes as in knockout mice, this system is able to maintain L-LTP and memory when one of the pathways is eliminated.


Subject(s)
Hippocampus/physiology , Long-Term Potentiation/physiology , Memory/physiology , Models, Molecular , Models, Neurological , Protein Kinase C/metabolism , Animals , Computer Simulation , Feedback, Physiological/physiology , Isoenzymes , Kinetics , Neurons/physiology , Phosphorylation , Protein Biosynthesis , Protein Kinase C/antagonists & inhibitors
18.
Neuron ; 86(1): 319-30, 2015 Apr 08.
Article in English | MEDLINE | ID: mdl-25819611

ABSTRACT

Most behaviors are generated in three steps: sensing the external world, processing that information to instruct decision-making, and producing a motor action. Sensory areas, especially primary sensory cortices, have long been held to be involved only in the first step of this sequence. Here, we develop a visually cued interval timing task that requires rats to decide when to perform an action following a brief visual stimulus. Using single-unit recordings and optogenetics in this task, we show that activity generated by the primary visual cortex (V1) embodies the target interval and may instruct the decision to time the action on a trial-by-trial basis. A spiking neuronal model of local recurrent connections in V1 produces neural responses that predict and drive the timing of future actions, rationalizing our observations. Our data demonstrate that the primary visual cortex may contribute to the instruction of visually cued timed actions.


Subject(s)
Cues , Neurons/physiology , Time Perception/physiology , Visual Cortex/cytology , Visual Cortex/physiology , Action Potentials/physiology , Animals , Channelrhodopsins , Male , Models, Neurological , Optogenetics , Photic Stimulation , Rats , Rats, Long-Evans , Transduction, Genetic
19.
Front Hum Neurosci ; 8: 438, 2014.
Article in English | MEDLINE | ID: mdl-24994976

ABSTRACT

The "Scalar Timing Law," which is a temporal domain generalization of the well known Weber Law, states that the errors estimating temporal intervals scale linearly with the durations of the intervals. Linear scaling has been studied extensively in human and animal models and holds over several orders of magnitude, though to date there is no agreed upon explanation for its physiological basis. Starting from the assumption that behavioral variability stems from neural variability, this work shows how to derive firing rate functions that are consistent with scalar timing. We show that firing rate functions with a log-power form, and a set of parameters that depend on spike count statistics, can account for scalar timing. Our derivation depends on a linear approximation, but we use simulations to validate the theory and show that log-power firing rate functions result in scalar timing over a large range of times and parameters. Simulation results match the predictions of our model, though our initial formulation results in a slight bias toward overestimation that can be corrected using a simple iterative approach to learn a decision threshold.

20.
Phys Rev Lett ; 110(16): 168102, 2013 Apr 19.
Article in English | MEDLINE | ID: mdl-23679640

ABSTRACT

Weber's law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber's law remains unknown. This work presents a simple theory explaining the conditions under which Weber's law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber's law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber's law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber's law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber's law and may represent a general governing principle relating perception to neural activity.


Subject(s)
Models, Neurological , Sensory Receptor Cells/physiology , Action Potentials/physiology , Models, Statistical , Stochastic Processes