Results 1 - 20 of 21
1.
Adv Appl Math ; 154, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38250671

ABSTRACT

Combinatorial threshold-linear networks (CTLNs) are a special class of recurrent neural networks whose dynamics are tightly controlled by an underlying directed graph. Recurrent networks have long been used as models for associative memory and pattern completion, with stable fixed points playing the role of stored memory patterns in the network. In prior work, we showed that target-free cliques of the graph correspond to stable fixed points of the dynamics, and we conjectured that these are the only stable fixed points possible [1, 2]. In this paper, we prove that the conjecture holds in a variety of special cases, including for networks with very strong inhibition and graphs of size n≤4. We also provide further evidence for the conjecture by showing that sparse graphs and graphs that are nearly cliques can never support stable fixed points. Finally, we translate some results from extremal combinatorics to obtain an upper bound on the number of stable fixed points of CTLNs in cases where the conjecture holds.
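A minimal numerical sketch of the CTLN setup described in this abstract, using the standard parameter choices from the CTLN literature (eps = 0.25, delta = 0.5, theta = 1; these specific values are an assumption, not taken from this paper). A 3-clique is target-free, so the theory predicts a stable fixed point supported on all three neurons; simulating the dynamics confirms convergence to the uniform value theta / (1 + (k-1)(1-eps)) = 0.4.

```python
import numpy as np

def ctln_weights(edges, n, eps=0.25, delta=0.5):
    """CTLN weight matrix from a directed edge list on n nodes:
    W[i, j] = -1 + eps if j -> i is an edge, -1 - delta otherwise,
    and 0 on the diagonal."""
    W = np.full((n, n), -1.0 - delta)
    for (j, i) in edges:          # edge j -> i
        W[i, j] = -1.0 + eps
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, x0, theta=1.0, dt=0.1, steps=2000):
    """Forward-Euler integration of dx/dt = -x + [Wx + theta]_+."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

# All six directed edges among 3 nodes: a target-free 3-clique.
edges = [(i, j) for i in range(3) for j in range(3) if i != j]
W = ctln_weights(edges, 3)
x_final = simulate(W, np.array([0.1, 0.2, 0.3]))
# Expected stable fixed point: x_i = 1 / (1 + 2 * 0.75) = 0.4 for all i.
```

The simulation is a sketch of the general mechanism, not a reproduction of any computation in the paper.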

3.
PLoS One ; 17(3): e0264456, 2022.
Article in English | MEDLINE | ID: mdl-35245322

ABSTRACT

Combinatorial threshold-linear networks (CTLNs) are a special class of inhibition-dominated TLNs defined from directed graphs. Like more general TLNs, they display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. In prior work, we have developed a detailed mathematical theory relating stable and unstable fixed points of CTLNs to graph-theoretic properties of the underlying network. Here we find that a special type of fixed points, corresponding to core motifs, are predictive of both static and dynamic attractors. Moreover, the attractors can be found by choosing initial conditions that are small perturbations of these fixed points. This motivates us to hypothesize that dynamic attractors of a network correspond to unstable fixed points supported on core motifs. We tested this hypothesis on a large family of directed graphs of size n = 5, and found remarkable agreement. Furthermore, we discovered that core motifs with similar embeddings give rise to nearly identical attractors. This allowed us to classify attractors based on structurally-defined graph families. Our results suggest that graphical properties of the connectivity can be used to predict a network's complex repertoire of nonlinear dynamics.


Subject(s)
Nonlinear Dynamics
4.
SIAM J Appl Dyn Syst ; 21(2): 1597-1630, 2022.
Article in English | MEDLINE | ID: mdl-37485069

ABSTRACT

Sequences of neural activity arise in many brain areas, including cortex, hippocampus, and central pattern generator circuits that underlie rhythmic behaviors like locomotion. While network architectures supporting sequence generation vary considerably, a common feature is an abundance of inhibition. In this work, we focus on architectures that support sequential activity in recurrently connected networks with inhibition-dominated dynamics. Specifically, we study emergent sequences in a special family of threshold-linear networks, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. Such networks naturally give rise to an abundance of sequences whose dynamics are tightly connected to the underlying graph. We find that architectures based on generalizations of cycle graphs produce limit cycle attractors that can be activated to generate transient or persistent (repeating) sequences. Each architecture type gives rise to an infinite family of graphs that can be built from arbitrary component subgraphs. Moreover, we prove a number of graph rules for the corresponding CTLNs in each family. The graph rules allow us to strongly constrain, and in some cases fully determine, the fixed points of the network in terms of the fixed points of the component subnetworks. Finally, we also show how the structure of certain architectures gives insight into the sequential dynamics of the corresponding attractor.
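The directed 3-cycle is the simplest cycle-graph architecture of the kind discussed above: its CTLN produces a limit cycle in which the three neurons fire in sequence. The sketch below simulates it under the standard parameter choices (eps = 0.25, delta = 0.5, theta = 1, again an assumption about parameter values, not taken from this paper) and checks that the late trajectory keeps oscillating rather than settling to a fixed point.

```python
import numpy as np

# CTLN for the directed 3-cycle 0 -> 1 -> 2 -> 0.
eps, delta, theta, dt = 0.25, 0.5, 1.0, 0.01
W = np.full((3, 3), -1.0 - delta)
for j, i in [(0, 1), (1, 2), (2, 0)]:   # edge j -> i
    W[i, j] = -1.0 + eps
np.fill_diagonal(W, 0.0)

# Asymmetric initial condition, so the trajectory avoids the unstable
# symmetric fixed point and falls onto the sequential limit cycle.
x = np.array([0.2, 0.1, 0.0])
traj = []
for _ in range(60000):
    x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
    traj.append(x.copy())
traj = np.array(traj)
late = traj[-20000:]   # discard the transient; this window should oscillate
```

Plotting `late` would show the characteristic sequence neuron 0, 1, 2, 0, ... repeating.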

5.
J Pure Appl Algebra ; 223(9): 3919-3940, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31534273

ABSTRACT

A convex code is a binary code generated by the pattern of intersections of a collection of open convex sets in some Euclidean space. Convex codes are relevant to neuroscience as they arise from the activity of neurons that have convex receptive fields. In this paper, we use algebraic methods to determine if a code is convex. Specifically, we use the neural ideal of a code, which is a generalization of the Stanley-Reisner ideal. Using the neural ideal together with its standard generating set, the canonical form, we provide algebraic signatures of certain families of codes that are non-convex. We connect these signatures to the precise conditions on the arrangement of sets that prevent the codes from being convex. Finally, we also provide algebraic signatures for some families of codes that are convex, including the class of intersection-complete codes. These results allow us to detect convexity and non-convexity in a variety of situations, and point to some interesting open questions.
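One of the convex families named above, intersection-complete codes, has a purely combinatorial definition that is easy to check directly: the intersection of any two codewords must again be a codeword. A small sketch of that check (the example codes are illustrative, not drawn from the paper):

```python
def is_intersection_complete(code):
    """A code (a set of codewords, each a frozenset of active neurons) is
    intersection-complete if the intersection of any two codewords is
    again a codeword; such codes are convex."""
    return all((u & v) in code for u in code for v in code)

# Codewords from two overlapping intervals on a line (plus the empty word):
good = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}

# Same overlap pattern, but the pure-overlap word {2} was never observed:
bad = {frozenset(), frozenset({1, 2}), frozenset({2, 3})}
```

Here `good` passes the check, while `bad` fails because {1,2} ∩ {2,3} = {2} is not a codeword.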

6.
Curr Opin Neurobiol ; 58: 11-20, 2019 10.
Article in English | MEDLINE | ID: mdl-31319287

ABSTRACT

We review recent work relating network connectivity to the dynamics of neural activity. While concepts stemming from network science provide a valuable starting point, the interpretation of graph-theoretic structures and measures can be highly dependent on the dynamics associated to the network. Properties that are quite meaningful for linear dynamics, such as random walk and network flow models, may be of limited relevance in the neuroscience setting. Theoretical and computational neuroscience are playing a vital role in understanding the relationship between network connectivity and the nonlinear dynamics associated to neural networks.
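A concrete illustration of the review's warning: a graph-theoretic measure that is meaningful for linear random-walk dynamics, the stationary distribution of a walk (proportional to node degree), says nothing by itself about where a nonlinear rate dynamics on the same graph will settle. The sketch below only computes the linear-dynamics quantity, as a reminder of what such measures do and do not capture; the graph is an arbitrary illustrative example.

```python
import numpy as np

# Undirected path graph 0 - 1 - 2 - 3 as an adjacency matrix.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]            # row-stochastic random-walk transition matrix

# Stationary distribution of the walk: pi_i proportional to degree.
pi = deg / deg.sum()
# Check stationarity: pi P = pi  (a property of the *linear* dynamics only).
residual = float(np.abs(pi @ P - pi).max())
```

For a threshold-linear or other nonlinear dynamics on the same graph, `pi` has no comparable interpretation, which is the review's point.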


Subject(s)
Models, Neurological , Neurosciences , Nerve Net , Neural Networks, Computer , Nonlinear Dynamics
7.
Neural Comput ; 31(1): 94-155, 2019 01.
Article in English | MEDLINE | ID: mdl-30462583

ABSTRACT

Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. In addition, we study larger networks composed of smaller building block subnetworks and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of graphical calculus to infer features of the dynamics from a network's connectivity.
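The fixed-point conditions for a competitive TLN can be checked subset by subset: a support sigma yields a fixed point iff the restricted linear system has a strictly positive solution and every neuron outside sigma receives non-positive net input there. A brute-force sketch of that enumeration (exponential in n, so only for small networks; the graph-rule machinery in the paper is precisely what replaces this brute force):

```python
import numpy as np
from itertools import combinations

def fixed_point_supports(W, b, tol=1e-9):
    """Enumerate supports of fixed points of dx/dt = -x + [Wx + b]_+ :
    sigma qualifies iff x_sigma = (I - W_sigma)^{-1} b_sigma > 0 and
    the net input to every neuron outside sigma is <= 0 at that point."""
    n = len(b)
    supports = []
    for k in range(1, n + 1):
        for sigma in combinations(range(n), k):
            s = list(sigma)
            try:
                x_s = np.linalg.solve(np.eye(k) - W[np.ix_(s, s)], b[s])
            except np.linalg.LinAlgError:
                continue
            if not np.all(x_s > tol):
                continue
            x = np.zeros(n)
            x[s] = x_s
            inp = W @ x + b
            if all(inp[i] <= tol for i in range(n) if i not in sigma):
                supports.append(sigma)
    return supports

# CTLN weights for a 3-clique (eps = 0.25): the only fixed point should
# be the one supported on the full clique.
W = -0.75 * (np.ones((3, 3)) - np.eye(3))
b = np.ones(3)
sup = fixed_point_supports(W, b)
```

For this clique the enumeration returns the single support (0, 1, 2), matching the graph-rule prediction.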


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Humans , Nonlinear Dynamics
8.
Neural Comput ; 28(12): 2825-2852, 2016 12.
Article in English | MEDLINE | ID: mdl-27391688

ABSTRACT

Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons is the support of a stable fixed point, then no proper subset or superset of that set can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.
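Given the construction above, the predicted stable fixed-point supports of the network built from a graph G are exactly the maximal cliques of G. A brute-force maximal-clique enumeration for a small graph sketches the combinatorial side of that correspondence (the graph here is an arbitrary illustrative example):

```python
from itertools import combinations

def maximal_cliques(n, edges):
    """Brute-force maximal-clique enumeration for a small undirected graph
    on nodes 0..n-1; exponential in n, so illustration only."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def is_clique(c):
        return all(v in adj[u] for u, v in combinations(c, 2))

    cliques = [set(c) for k in range(1, n + 1)
               for c in combinations(range(n), k) if is_clique(c)]
    # Keep only cliques not strictly contained in a larger clique.
    return [c for c in cliques if not any(c < d for d in cliques)]

# Triangle 0-1-2 plus a pendant edge 2-3:
# maximal cliques {0, 1, 2} and {2, 3}.
mc = maximal_cliques(4, [(0, 1), (0, 2), (1, 2), (2, 3)])
```

Under the paper's construction, a network built from this graph would have exactly these two stable fixed-point supports.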


Subject(s)
Action Potentials , Models, Neurological , Neurons/physiology , Algorithms , Animals
9.
Synapse ; 70(1): 1-14, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26541100

ABSTRACT

Postsynaptic responses are a product of quantal amplitude (Q), size of the releasable vesicle pool (N), and release probability (P). Voltage-dependent changes in presynaptic Ca(2+) entry alter postsynaptic responses primarily by changing P but have also been shown to influence N. With simultaneous whole cell recordings from cone photoreceptors and horizontal cells in tiger salamander retinal slices, we measured N and P at cone ribbon synapses by using a train of depolarizing pulses to stimulate release and deplete the pool. We developed an analytical model that calculates the total pool size contributing to release under different stimulus conditions by taking into account the prior history of release and empirically determined properties of replenishment. The model provided a formula that calculates vesicle pool size from measurements of the initial postsynaptic response and limiting rate of release evoked by a train of pulses, the fraction of release sites available for replenishment, and the time constant for replenishment. Results of the model showed that weak and strong depolarizing stimuli evoked release with differing probabilities but the same size vesicle pool. Enhancing intraterminal Ca(2+) spread by lowering Ca(2+) buffering or applying BayK8644 did not increase PSCs evoked with strong test steps, showing there is a fixed upper limit to pool size. Together, these results suggest that light-evoked changes in cone membrane potential alter synaptic release solely by changing release probability.


Subject(s)
Membrane Potentials/physiology , Retina/physiology , Retinal Cone Photoreceptor Cells/physiology , Retinal Horizontal Cells/physiology , Synapses/physiology , Synaptic Vesicles/metabolism , 3-Pyridinecarboxylic acid, 1,4-dihydro-2,6-dimethyl-5-nitro-4-(2-(trifluoromethyl)phenyl)-, Methyl ester/pharmacology , Ambystoma , Animals , Calcium/metabolism , Calcium Channel Agonists/pharmacology , Calcium Channels/metabolism , Female , Kinetics , Male , Membrane Potentials/drug effects , Models, Neurological , Patch-Clamp Techniques , Probability , Retina/drug effects , Retinal Cone Photoreceptor Cells/drug effects , Retinal Horizontal Cells/drug effects , Synapses/drug effects , Synaptic Vesicles/drug effects , Tissue Culture Techniques
10.
Proc Natl Acad Sci U S A ; 112(44): 13455-60, 2015 Nov 03.
Article in English | MEDLINE | ID: mdl-26487684

ABSTRACT

Detecting meaningful structure in neural activity and connectivity data is challenging in the presence of hidden nonlinearities, where traditional eigenvalue-based methods may be misleading. We introduce a novel approach to matrix analysis, called clique topology, that extracts features of the data invariant under nonlinear monotone transformations. These features can be used to detect both random and geometric structure, and depend only on the relative ordering of matrix entries. We then analyzed the activity of pyramidal neurons in rat hippocampus, recorded while the animal was exploring a 2D environment, and confirmed that our method is able to detect geometric organization using only the intrinsic pattern of neural correlations. Remarkably, we found similar results during nonspatial behaviors such as wheel running and rapid eye movement (REM) sleep. This suggests that the geometric structure of correlations is shaped by the underlying hippocampal circuits and is not merely a consequence of position coding. We propose that clique topology is a powerful new tool for matrix analysis in biological settings, where the relationship of observed quantities to more meaningful variables is often nonlinear and unknown.
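The invariance claim above, that clique topology depends only on the relative ordering of matrix entries, is easy to verify in miniature: applying an entrywise strictly monotone transformation such as tanh leaves the edge ranking (the only input the order complex sees) unchanged. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
A = (A + A.T) / 2                      # symmetric "correlation-like" matrix

def edge_order(M):
    """Off-diagonal entries of a symmetric matrix ranked from largest to
    smallest: the only information clique topology uses."""
    iu = np.triu_indices_from(M, k=1)
    pairs = np.array(iu).T
    return [tuple(p) for p in pairs[np.argsort(-M[iu])]]

# tanh is strictly increasing, so the edge ranking is identical.
same = edge_order(A) == edge_order(np.tanh(A))
```

Eigenvalue-based methods fail this test: the spectrum of `np.tanh(A)` generally differs from that of `A`, which is the abstract's motivation for the rank-based approach.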


Subject(s)
Hippocampus/physiology , Motor Activity/physiology , Neurophysiology/methods , Pyramidal Cells/physiology , Sleep, REM/physiology , Algorithms , Animals , Computer Simulation , Hippocampus/cytology , Models, Neurological , Neural Pathways/physiology , Rats
11.
J Gen Physiol ; 144(5): 357-78, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25311636

ABSTRACT

At the first synapse in the vertebrate visual pathway, light-evoked changes in photoreceptor membrane potential alter the rate of glutamate release onto second-order retinal neurons. This process depends on the synaptic ribbon, a specialized structure found at various sensory synapses, to provide a supply of primed vesicles for release. Calcium (Ca(2+)) accelerates the replenishment of vesicles at cone ribbon synapses, but the mechanisms underlying this acceleration and its functional implications for vision are unknown. We studied vesicle replenishment using paired whole-cell recordings of cones and postsynaptic neurons in tiger salamander retinas and found that it involves two kinetic mechanisms, the faster of which was diminished by calmodulin (CaM) inhibitors. We developed an analytical model that can be applied to both conventional and ribbon synapses and showed that vesicle resupply is limited by a simple time constant, τ = 1/(Dρδs), where D is the vesicle diffusion coefficient, δ is the vesicle diameter, ρ is the vesicle density, and s is the probability of vesicle attachment. The combination of electrophysiological measurements, modeling, and total internal reflection fluorescence microscopy of single synaptic vesicles suggested that CaM speeds replenishment by enhancing vesicle attachment to the ribbon. Using electroretinogram and whole-cell recordings of light responses, we found that enhanced replenishment improves the ability of cone synapses to signal darkness after brief flashes of light and enhances the amplitude of responses to higher-frequency stimuli. By accelerating the resupply of vesicles to the ribbon, CaM extends the temporal range of synaptic transmission, allowing cones to transmit higher-frequency visual information to downstream neurons. Thus, the ability of the visual system to encode time-varying stimuli is shaped by the dynamics of vesicle replenishment at photoreceptor synaptic ribbons.
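An illustrative evaluation of the replenishment time constant τ = 1/(Dρδs) from the abstract. The parameter values below are hypothetical placeholders chosen only so the units compose correctly (D in μm²/s, δ in μm, ρ in vesicles/μm³, s dimensionless); they are not measurements from the paper.

```python
# tau = 1 / (D * rho * delta * s), all parameter values hypothetical.
D     = 0.1     # vesicle diffusion coefficient, um^2/s   (placeholder)
delta = 0.05    # vesicle diameter, um                    (placeholder)
rho   = 2000.0  # vesicle density, vesicles/um^3          (placeholder)
s     = 0.1     # probability of vesicle attachment       (placeholder)

tau = 1.0 / (D * rho * delta * s)   # seconds

# The paper's CaM mechanism acts through s: doubling the attachment
# probability halves the replenishment time constant.
tau_fast = 1.0 / (D * rho * delta * (2 * s))
```

With these placeholder values τ = 1 s, and τ_fast = 0.5 s, consistent with the abstract's conclusion that enhancing attachment speeds replenishment.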


Subject(s)
Calmodulin/metabolism , Exocytosis , Retinal Cone Photoreceptor Cells/metabolism , Synapses/metabolism , Synaptic Transmission , Ambystoma , Animals , Female , Male , Retinal Cone Photoreceptor Cells/physiology , Synapses/physiology , Synaptic Vesicles/metabolism , Urodela
12.
Neural Comput ; 25(11): 2858-903, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23895048

ABSTRACT

Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as "permitted sets" of the network. We introduce a simple encoding rule that selectively turns "on" synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states ("on" or "off"), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended "spurious" states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced--i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
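The encoding rule described above has a direct implementation: a synapse from j to i is switched "on" (taking its weight from the strength matrix S) exactly when i and j co-appear in at least one stored pattern, and is otherwise held at an "off" value. Representing "off" as 0 is an assumption of this sketch, as is the example S.

```python
import numpy as np

def encode(patterns, S, w_off=0.0):
    """Binary-but-heterogeneous encoding rule: W[i, j] = S[i, j] if
    neurons i and j co-appear in some pattern, else the 'off' value."""
    n = S.shape[0]
    W = np.full((n, n), w_off)
    for p in patterns:
        for i in p:
            for j in p:
                if i != j:
                    W[i, j] = S[i, j]
    return W

S = np.arange(16, dtype=float).reshape(4, 4)   # illustrative strength matrix
W = encode([{0, 1}, {1, 2}], S)                # store two binary patterns
```

Synapses within each stored pattern inherit their (generally asymmetric) weights from S; pairs that never co-appear, such as 0 and 2 here, stay off.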


Subject(s)
Brain/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Animals , Humans , Nerve Net/physiology
13.
Bull Math Biol ; 75(9): 1571-611, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23771614

ABSTRACT

Neurons in the brain represent external stimuli via neural codes. These codes often arise from stereotyped stimulus-response maps, associating to each neuron a convex receptive field. An important problem confronted by the brain is to infer properties of a represented stimulus space without knowledge of the receptive fields, using only the intrinsic structure of the neural code. How does the brain do this? To address this question, it is important to determine what stimulus space features can--in principle--be extracted from neural codes. This motivates us to define the neural ring and a related neural ideal, algebraic objects that encode the full combinatorial data of a neural code. Our main finding is that these objects can be expressed in a "canonical form" that directly translates to a minimal description of the receptive field structure intrinsic to the code. We also find connections to Stanley-Reisner rings, and use ideas similar to those in the theory of monomial ideals to obtain an algorithm for computing the primary decomposition of pseudo-monomial ideals. This allows us to algorithmically extract the canonical form associated to any neural code, providing the groundwork for inferring stimulus space features from neural activity alone.
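The neural ideal mentioned above has an explicit generating set before any canonical-form reduction: one pseudo-monomial ρ_v = ∏_{v_i=1} x_i · ∏_{v_j=0} (1 − x_j) for each word v not in the code. A sketch that lists these generators as human-readable strings (the string representation, not any simplification, is all this shows):

```python
from itertools import product

def neural_ideal_generators(code, n):
    """Generators rho_v of the neural ideal J_C: one pseudo-monomial
    per non-codeword v, returned as strings keyed by v."""
    gens = {}
    for v in product((0, 1), repeat=n):
        if v in code:
            continue
        factors = [f"x{i}" if vi else f"(1-x{i})" for i, vi in enumerate(v)]
        gens[v] = "*".join(factors)
    return gens

# The code {000, 100, 110} on 3 neurons has 8 - 3 = 5 non-codewords,
# hence 5 generators.
code = {(0, 0, 0), (1, 0, 0), (1, 1, 0)}
gens = neural_ideal_generators(code, 3)
```

Computing the canonical form then amounts to the primary-decomposition algorithm for pseudo-monomial ideals developed in the paper, which this sketch does not attempt.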


Subject(s)
Models, Neurological , Neurons/physiology , Algorithms , Animals , Brain/cytology , Brain/physiology , Computational Biology , Mathematical Concepts , Nerve Net/physiology
14.
Neural Comput ; 25(7): 1891-925, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23724797

ABSTRACT

Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure must not only support error correction but also reflect relationships between stimuli.
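The decoding notion underlying the error-correction comparison above is nearest-codeword (maximum-likelihood) decoding under bit flips. A sketch with a toy 1D receptive-field code, where codewords are runs of consecutive ones mimicking place fields tiling an interval (the toy code is illustrative, not the RF codes analyzed in the paper):

```python
import numpy as np

def nearest_codeword(codewords, y):
    """Maximum-likelihood decoding for the binary symmetric channel:
    return the stored codeword closest to y in Hamming distance
    (ties broken toward the first codeword)."""
    d = [int(np.sum(c != y)) for c in codewords]
    return codewords[int(np.argmin(d))]

# Toy 1D receptive-field code: each codeword is a run of consecutive ones.
codewords = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],
])
received = np.array([1, 1, 1, 0, 0, 0])   # codeword 0 with one bit flipped
decoded = nearest_codeword(codewords, received)
```

Note how the overlap structure that makes RF codes stimulus-reflecting also keeps codewords close in Hamming distance, which is exactly the tension the abstract describes.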


Subject(s)
Information Theory , Mathematics , Models, Neurological , Neurons/physiology , Animals , Humans , Nerve Net/physiology
15.
Bull Math Biol ; 74(3): 590-614, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21826564

ABSTRACT

Networks of neurons in some brain areas are flexible enough to encode new memories quickly. Using a standard firing rate model of recurrent networks, we develop a theory of flexible memory networks. Our main results characterize networks having the maximal number of flexible memory patterns, given a constraint graph on the network's connectivity matrix. Modulo a mild topological condition, we find a close connection between maximally flexible networks and rank-1 matrices. The topological condition is H_1(X; ℤ) = 0, where X is the clique complex associated to the network's constraint graph; this condition is generically satisfied for large random networks that are not overly sparse. In order to prove our main results, we develop some matrix-theoretic tools and present them in a self-contained section independent of the neuroscience context.


Subject(s)
Memory/physiology , Models, Neurological , Nerve Net/physiology , Humans , Learning/physiology , Neurons/physiology
16.
J Neurosci ; 31(8): 2828-34, 2011 Feb 23.
Article in English | MEDLINE | ID: mdl-21414904

ABSTRACT

Hippocampal neurons can display reliable and long-lasting sequences of transient firing patterns, even in the absence of changing external stimuli. We suggest that time-keeping is an important function of these sequences, and propose a network mechanism for their generation. We show that sequences of neuronal assemblies recorded from rat hippocampal CA1 pyramidal cells can reliably predict elapsed time (15-20 s) during wheel running with a precision of 0.5 s. In addition, we demonstrate the generation of multiple reliable, long-lasting sequences in a recurrent network model. These sequences are generated in the presence of noisy, unstructured inputs to the network, mimicking stationary sensory input. Identical initial conditions generate similar sequences, whereas different initial conditions give rise to distinct sequences. The key ingredients responsible for sequence generation in the model are threshold-adaptation and a Mexican-hat-like pattern of connectivity among pyramidal cells. This pattern may arise from recurrent systems such as the hippocampal CA3 region or the entorhinal cortex. We hypothesize that mechanisms that evolved for spatial navigation also support tracking of elapsed time in behaviorally relevant contexts.


Subject(s)
Action Potentials/physiology , CA1 Region, Hippocampal/physiology , Nerve Net/physiology , Neural Pathways/physiology , Neurons/physiology , Time Perception/physiology , Animals , CA1 Region, Hippocampal/cytology , Nerve Net/cytology , Neural Pathways/cytology , Neurons/cytology , Rats
17.
Hear Res ; 271(1-2): 37-53, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20603208

ABSTRACT

Recordings of single neurons have yielded great insights into the way acoustic stimuli are represented in auditory cortex. However, any one neuron functions as part of a population whose combined activity underlies cortical information processing. Here we review some results obtained by recording simultaneously from auditory cortical populations and individual morphologically identified neurons, in urethane-anesthetized and unanesthetized passively listening rats. Auditory cortical populations produced structured activity patterns both in response to acoustic stimuli, and spontaneously without sensory input. Population spike time patterns were broadly conserved across multiple sensory stimuli and spontaneous events, exhibiting a generally conserved sequential organization lasting approximately 100 ms. Both spontaneous and evoked events exhibited sparse, spatially localized activity in layer 2/3 pyramidal cells, and densely distributed activity in larger layer 5 pyramidal cells and putative interneurons. Laminar propagation differed, however, with spontaneous activity spreading upward from deep layers and slowly across columns, but sensory responses initiating in presumptive thalamorecipient layers and spreading rapidly across columns. In both unanesthetized and urethanized rats, global activity fluctuated between a "desynchronized" state characterized by low-amplitude, high-frequency local field potentials and a "synchronized" state of larger, lower-frequency waves. Computational studies suggested that responses could be predicted by a simple dynamical system model fitted to the spontaneous activity immediately preceding stimulus presentation. Fitting this model to the data yielded a nonlinear self-exciting system model in synchronized states and an approximately linear system in desynchronized states. We comment on the significance of these results for auditory cortical processing of acoustic and non-acoustic information.


Subject(s)
Auditory Cortex/cytology , Auditory Cortex/physiology , Models, Neurological , Acoustic Stimulation , Anesthesia , Animals , Behavior, Animal , Evoked Potentials, Auditory , Membrane Potentials , Neurons/physiology , Rats
18.
Eur J Neurosci ; 30(9): 1767-78, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19840110

ABSTRACT

Neural representations of even temporally unstructured stimuli can show complex temporal dynamics. In many systems, neuronal population codes show 'progressive differentiation', whereby population responses to different stimuli grow further apart during a stimulus presentation. Here we analysed the response of auditory cortical populations in rats to extended tones. At onset (up to 300 ms), tone responses involved strong excitation of a large number of neurons; during sustained responses (after 500 ms) overall firing rate decreased, but most cells still showed statistically significant rate modulation. Population vector trajectories evoked by different tone frequencies expanded rapidly along an initially similar trajectory in the first tens of milliseconds after tone onset, later diverging to smaller amplitude fixed points corresponding to sustained responses. The angular difference between onset and sustained responses to the same tone was greater than between different tones in the same stimulus epoch. No clear orthogonalization of responses was found with time, and predictability of the stimulus from population activity also decreased during this period compared with onset. The question of whether population activity grew more or less sparse with time depended on the precise mathematical sense given to this term. We conclude that auditory cortical population responses to tones differ from those reported in many other systems, with progressive differentiation not seen for sustained stimuli. Sustained acoustic stimuli are typically not behaviorally salient: we hypothesize that the dynamics we observe may instead allow an animal to maintain a representation of such sounds, at low energetic cost.


Subject(s)
Acoustic Stimulation , Auditory Cortex/physiology , Neurons/physiology , Pitch Perception/physiology , Action Potentials/physiology , Animals , Auditory Cortex/cytology , Auditory Pathways/physiology , Data Interpretation, Statistical , Electrophysiology , Rats , Rats, Sprague-Dawley , Time Factors
19.
J Neurosci ; 29(34): 10600-12, 2009 Aug 26.
Article in English | MEDLINE | ID: mdl-19710313

ABSTRACT

The responses of neocortical cells to sensory stimuli are variable and state dependent. It has been hypothesized that intrinsic cortical dynamics play an important role in trial-to-trial variability; the precise nature of this dependence, however, is poorly understood. We show here that in auditory cortex of urethane-anesthetized rats, population responses to click stimuli can be quantitatively predicted on a trial-by-trial basis by a simple dynamical system model estimated from spontaneous activity immediately preceding stimulus presentation. Changes in cortical state correspond consistently to changes in model dynamics, reflecting a nonlinear, self-exciting system in synchronized states and an approximately linear system in desynchronized states. We propose that the complex and state-dependent pattern of trial-to-trial variability can be explained by a simple principle: sensory responses are shaped by the same intrinsic dynamics that govern ongoing spontaneous activity.
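The desynchronized-state case above, an approximately linear dynamical system estimated from spontaneous activity, can be sketched with a least-squares fit of x_{t+1} = A x_t. The data here are synthetic, generated from a known A as a stand-in for recorded population activity; the fitting step is the part that mirrors the paper's approach.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.8]])      # stable "ground-truth" dynamics

# Synthetic 'spontaneous' activity: a noisy linear system.
x = np.zeros((200, 2))
x[0] = [1.0, 0.5]
for t in range(199):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.normal(size=2)

# Least-squares fit of x_{t+1} = A x_t from the spontaneous epoch.
X0, X1 = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T

# Use the fitted dynamics to predict the next state from a new
# 'stimulus-evoked' population state.
x_stim = np.array([0.5, -0.5])
pred = A_hat @ x_stim
```

In the paper, the analogous trial-by-trial prediction works because evoked responses are shaped by the same intrinsic dynamics as the preceding spontaneous activity.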


Subject(s)
Anesthetics, Intravenous/pharmacology , Auditory Cortex , Evoked Potentials, Auditory/physiology , Models, Neurological , Nonlinear Dynamics , Urethane/pharmacology , Acoustic Stimulation/methods , Animals , Auditory Cortex/cytology , Auditory Cortex/drug effects , Auditory Cortex/physiology , Electric Stimulation/methods , Neural Pathways/physiology , Neurons/drug effects , Neurons/physiology , Pedunculopontine Tegmental Nucleus/physiology , Rats , Rats, Sprague-Dawley
20.
PLoS Comput Biol ; 4(10): e1000205, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18974826

ABSTRACT

An important task of the brain is to represent the outside world. It is unclear how the brain may do this, however, as it can only rely on neural responses and has no independent access to external stimuli in order to "decode" what those responses mean. We investigate what can be learned about a space of stimuli using only the action potentials (spikes) of cells with stereotyped -- but unknown -- receptive fields. Using hippocampal place cells as a model system, we show that one can (1) extract global features of the environment and (2) construct an accurate representation of space, up to an overall scale factor, that can be used to track the animal's position. Unlike previous approaches to reconstructing position from place cell activity, this information is derived without knowing place fields or any other functions relating neural responses to position. We find that simply knowing which groups of cells fire together reveals a surprising amount of structure in the underlying stimulus space; this may enable the brain to construct its own internal representations.
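The simplest global feature extractable from "which groups of cells fire together" is connectivity of the cofiring graph: cell groups that never co-fire suggest disconnected regions of the underlying space. A sketch of that extraction, using only codewords and no receptive-field information (the example codewords are illustrative):

```python
from itertools import combinations

def cofiring_components(codewords, n):
    """Build the cofiring graph from observed codewords (sets of
    co-active cells) and count its connected components via DFS."""
    adj = {i: set() for i in range(n)}
    for w in codewords:
        for u, v in combinations(sorted(w), 2):
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), 0
    for i in range(n):
        if i in seen:
            continue
        comps += 1
        stack = [i]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u] - seen)
    return comps

# Cells {0,1,2} co-fire among themselves, {3,4} among themselves, and the
# two groups never co-fire: two components, hinting at two disconnected
# regions of the environment.
comps = cofiring_components([{0, 1}, {1, 2}, {3, 4}], 5)
```

The paper goes far beyond connectivity, recovering geometric structure up to scale, but this is the flavor of inference that cofiring patterns alone support.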


Subject(s)
Action Potentials/physiology , Models, Neurological , Nerve Net/physiology , Pattern Recognition, Physiological/physiology , Animals , Conditioning, Classical/physiology , Conditioning, Operant/physiology , Generalization, Stimulus/physiology , Hippocampus/cytology , Hippocampus/physiology , Neurons/physiology , Orientation/physiology , Set, Psychology , Space Perception/physiology