Results 1 - 20 of 36
1.
Cereb Cortex Commun ; 3(1): tgab052, 2022.
Article in English | MEDLINE | ID: mdl-35047822

ABSTRACT

Place and head-direction (HD) cells are fundamental to maintaining accurate representations of location and heading in the mammalian brain across sensory conditions, and are thought to underlie path integration: the ability to maintain an accurate representation of location and heading during motion in the dark. Substantial evidence suggests that both populations of spatial cells function as attractor networks, but their developmental mechanisms are poorly understood. We present simulations of a fully self-organizing attractor network model of this process using well-established neural mechanisms. We show that the differential development of the two cell types can be explained by their different idiothetic inputs, even given identical visual signals: HD cells develop when the population receives angular head velocity input, whereas place cells develop when the idiothetic input encodes planar velocity. Our model explains the functional importance of conjunctive "state-action" cells, implying that signal propagation delays and a competitive learning mechanism are crucial for successful development. Consequently, we explain how insufficiently rich environments result in pathology: place cell development requires proximal landmarks; conversely, HD cells require distal landmarks. Finally, our results suggest that both networks are instantiations of general mechanisms, and we describe their implications for the neurobiology of spatial processing.
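The competitive learning mechanism the abstract identifies as crucial can be illustrated with a generic winner-take-all sketch. The input clusters, initial weights, and learning rate below are illustrative stand-ins, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of unit-length input patterns; a hypothetical stand-in for
# the model's sensory inputs.
def sample(n):
    centers = np.array([[1.0, 0.0], [0.0, 1.0]])
    x = centers[rng.integers(0, 2, n)] + 0.05 * rng.normal(size=(n, 2))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Two competing output cells, weights hand-initialized near the diagonal
# so that each cell can win some inputs; kept at unit length throughout.
w = np.array([[0.8, 0.6], [0.6, 0.8]])

eta = 0.1
for x in sample(500):
    winner = np.argmax(w @ x)               # winner-take-all competition
    w[winner] += eta * (x - w[winner])      # Hebbian move toward the input
    w[winner] /= np.linalg.norm(w[winner])  # synaptic weight normalization

# Each cell's weight vector ends up aligned with one input cluster.
```

Competition plus normalization is what lets different output cells specialize on different input statistics, which is the role the abstract assigns to this mechanism during development.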

2.
Network ; 31(1-4): 37-141, 2020.
Article in English | MEDLINE | ID: mdl-32746663

ABSTRACT

Many researchers have tried to model how environmental knowledge is learned by the brain and used in the form of cognitive maps. However, previous work was limited in various important ways: there was little consensus on how these cognitive maps were formed and represented, the planning mechanism was inherently limited to performing relatively simple tasks, and there was little consideration of how these mechanisms would scale up. This paper makes several significant advances. Firstly, the planning mechanism used by the majority of previous work propagates a decaying signal through the network to create a gradient that points towards the goal. However, this decaying signal limits the scale and complexity of tasks that can be solved in this manner. Here we propose several ways in which a network can self-organize a novel planning mechanism that does not require decaying activity. We also extend this model with a hierarchical planning mechanism: a layer of cells that identify frequently-used sequences of actions and reuse them to significantly increase the efficiency of planning. We speculate that our results may explain the apparent ability of humans and animals to perform model-based planning on both small and large scales without a noticeable loss of efficiency.
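The decaying-signal scheme that this paper improves on can be sketched on a toy graph (hypothetical adjacency, not the paper's network): a signal spreads out from the goal, losing amplitude at each hop, and the agent simply ascends the resulting gradient.

```python
from collections import deque

# Spread a decaying activation outward from the goal (breadth-first),
# so every node's activity falls off with its distance from the goal.
def decay_gradient(adj, goal, decay=0.9):
    level = {goal: 1.0}
    frontier = deque([goal])
    while frontier:
        node = frontier.popleft()
        for nb in adj[node]:
            if nb not in level:
                level[nb] = level[node] * decay  # activity decays per hop
                frontier.append(nb)
    return level

def plan(adj, start, goal, decay=0.9):
    level = decay_gradient(adj, goal, decay)
    path = [start]
    while path[-1] != goal:
        # greedily step to the neighbour with the highest activation
        path.append(max(adj[path[-1]], key=level.get))
    return path

# Hypothetical toy graph standing in for a cognitive map.
adj = {
    'A': ['B'], 'B': ['A', 'C', 'D'],
    'C': ['B'], 'D': ['B', 'E'], 'E': ['D'],
}
print(plan(adj, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```

Note that after n hops the activity has fallen to decay**n, so on large maps the gradient vanishes into the noise floor; this is exactly the scaling limitation the abstract identifies in prior work.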


Subject(s)
Algorithms , Brain Mapping/methods , Brain/physiology , Cognition/physiology , Neural Networks, Computer , Animals , Humans
3.
Front Neural Circuits ; 14: 30, 2020.
Article in English | MEDLINE | ID: mdl-32528255

ABSTRACT

The responses of many cortical neurons to visual stimuli are modulated by the position of the eye. This form of gain modulation by eye position does not change the retinotopic selectivity of the responses, but only their amplitude. Particularly in the case of cortical responses, this form of eye position gain modulation has been observed to be multiplicative. Multiplicative gain modulated responses are crucial to encode information that is relevant to high-level visual functions, such as stable spatial awareness, eye movement planning, visual-motor behaviors, and coordinate transformation. Here we first present a hardwired model of different functional forms of gain modulation, including peaked and monotonic modulation by eye position. We use a biologically realistic Gaussian function to model the influence of the position of the eye on the internal activation of visual neurons. Next we show how different functional forms of gain modulation by eye position may develop in a self-organizing neural network model of visual neurons. A further contribution of our work is the investigation of the width of the eye position tuning curve: our simulations show how this width affects which forms of gain modulation develop.
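A minimal sketch of what multiplicative gain modulation means, using a peaked (Gaussian) eye-position gain field as described above. All parameter values are illustrative assumptions:

```python
import numpy as np

# Multiplicative gain modulation: the retinotopic tuning curve keeps its
# shape and preferred position; only its amplitude changes with eye position.
def response(retinal_pos, eye_pos, pref_retinal=0.0, pref_eye=10.0,
             sigma_r=5.0, sigma_e=20.0):
    retinal = np.exp(-(retinal_pos - pref_retinal) ** 2 / (2 * sigma_r ** 2))
    gain = np.exp(-(eye_pos - pref_eye) ** 2 / (2 * sigma_e ** 2))  # peaked gain field
    return gain * retinal

x = np.linspace(-20, 20, 201)
r1 = response(x, eye_pos=10.0)   # eye at the gain field's peak
r2 = response(x, eye_pos=-10.0)  # eye away from the peak

# r2 is an exact scaled copy of r1: same preferred retinal position,
# lower amplitude -- the signature of multiplicative modulation.
```

A monotonic gain field would replace the Gaussian `gain` term with, e.g., a sigmoid of eye position; the multiplicative structure is unchanged.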


Subject(s)
Eye Movements/physiology , Neural Networks, Computer , Neurons/physiology , Visual Cortex/cytology , Visual Cortex/physiology , Visual Fields/physiology , Humans , Normal Distribution , Photic Stimulation/methods , Visual Perception/physiology
4.
PLoS One ; 13(11): e0207961, 2018.
Article in English | MEDLINE | ID: mdl-30496225

ABSTRACT

We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule, but only under limited conditions. Specifically, it fails when incorporating visual input neurons with monotonic gain modulation by eye-position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye-position.
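The "standard trace learning rule" referred to here can be sketched in a generic Földiák-style form: the postsynaptic term is a running average of recent activity, so inputs that occur close together in time are bound onto the same output cell. The patterns and parameter values below are illustrative, not the paper's:

```python
import numpy as np

def trace_learn(w, x_seq, eta=0.05, nu=0.8):
    """Generic trace rule: the Hebbian update is gated by a temporal
    memory trace of recent postsynaptic activity."""
    trace = 0.0
    for x in x_seq:
        y = max(0.0, float(w @ x))         # rectified postsynaptic rate
        trace = (1 - nu) * y + nu * trace  # temporal memory trace
        w = w + eta * trace * x            # Hebbian update gated by trace
        w = w / np.linalg.norm(w)          # weight normalization
    return w

# A repeating temporal sequence of overlapping patterns (hypothetical
# stimuli standing in for successive retinal images as the eye moves).
seq = [np.array([1.0, 1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 1.0, 0.0]),
       np.array([0.0, 0.0, 1.0, 1.0])] * 50
w = trace_learn(np.full(4, 0.5), seq)
# The cell comes to respond to the whole sequence: all weights positive,
# with extra weight on inputs shared between successive patterns.
```

The modified rules the paper proposes change details of how the trace enters the update; this sketch only shows the baseline temporal-binding principle.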


Subject(s)
Visual Pathways/anatomy & histology , Visual Pathways/physiology , Visual Perception/physiology , Algorithms , Animals , Eye Movements/physiology , Learning , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons , Primates/physiology , Vision, Ocular/physiology
5.
Network ; 29(1-4): 37-69, 2018.
Article in English | MEDLINE | ID: mdl-30905280

ABSTRACT

The head direction (HD) system signals HD in an allocentric frame of reference. The system is able to update firing based on internally derived information about self-motion, a process known as path integration. Of particular interest is how path integration might maintain concordance between true HD and internally represented HD. Here we present a self-sustaining two-layer model, capable of self-organizing, which produces extremely accurate path integration. The implications of this work for future investigations of HD system path integration are discussed.
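What accurate path integration means operationally can be sketched with an idealized ring of HD cells: a bump of activity over preferred directions is shifted by an angular-velocity signal, and the bump's position is the internally represented HD. This is a mechanical illustration of the computation, not the paper's two-layer neural mechanism:

```python
import numpy as np

n = 100                                   # cells with preferred directions on a ring
angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
bump = np.exp(5.0 * np.cos(angles))       # activity bump centred on 0 rad

def integrate_step(bump, ang_vel, dt=0.01):
    # Idealized path integration: the angular-velocity signal shifts the
    # bump around the ring by ang_vel * dt (circular linear interpolation).
    shift = ang_vel * dt * n / (2 * np.pi)
    idx = np.arange(n)
    return np.interp((idx - shift) % n, idx, bump, period=n)

true_hd = 0.0
for _ in range(100):                      # turn at 1 rad/s for 1 s
    bump = integrate_step(bump, ang_vel=1.0)
    true_hd += 1.0 * 0.01

decoded_hd = angles[np.argmax(bump)]      # read HD off the bump position
# decoded_hd tracks true_hd (both close to 1 rad): accurate path integration
```

In an attractor model the shift is produced by asymmetric recurrent connectivity driven by the velocity signal rather than by explicit interpolation; any mismatch between the shift rate and true head velocity is exactly the path-integration error the paper is concerned with.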


Subject(s)
Models, Neurological , Motion Perception/physiology , Neural Pathways/physiology , Neurons/physiology , Space Perception/physiology , Action Potentials/physiology , Animals , Computer Simulation , Head , Head Movements/physiology , Humans , Nerve Net/physiology , Nonlinear Dynamics
6.
PLoS One ; 12(5): e0178304, 2017.
Article in English | MEDLINE | ID: mdl-28562618

ABSTRACT

A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
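The PCA connection can be made concrete with Oja's rule, the textbook normalized-Hebbian rule that converges to the first principal component of its input distribution. This is a standard result used here as an illustration, not the paper's specific analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D inputs whose major axis of variance lies along (1, 1)/sqrt(2).
data = rng.normal(size=(5000, 2)) * np.array([1.0, 0.2])
rot = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)
data = data @ rot.T

w = np.array([1.0, 0.0])
eta = 0.01
for x in data:
    y = w @ x
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus a decay that
                                # keeps the weight vector near unit length

# w aligns (up to sign) with the leading eigenvector of the input covariance.
pc1 = np.linalg.eigh(np.cov(data.T))[1][:, -1]
print(abs(w @ pc1))  # close to 1
```

The decay term makes the fixed point of plain Hebbian growth the dominant eigenvector of the covariance, which is why PCA is the natural language for analyzing what a Hebbian cell in VisNet learns.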


Subject(s)
Hand , Learning/physiology , Primates/physiology , Visual Pathways/physiology , Animals , Models, Neurological , Nerve Net
7.
Psychol Rev ; 124(2): 154-167, 2017 03.
Article in English | MEDLINE | ID: mdl-28068117

ABSTRACT

We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections were modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons had learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations were extended to training the network on rotating faces. It was found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system.
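The clustering effect of short-range excitation can be sketched with a toy 1-D self-organizing map, where a Gaussian neighbourhood function stands in for the lateral excitation and two input clusters stand in for the male/female classes. Everything here is an illustrative assumption, not the paper's architecture or face images:

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 10
centers = np.array([[1.0, 0.0], [0.0, 1.0]])  # two input classes
w = rng.uniform(size=(n_units, 2))

T = 3000
for t in range(T):
    x = centers[rng.integers(2)] + 0.05 * rng.normal(size=2)
    winner = np.argmin(np.linalg.norm(w - x, axis=1))
    # Gaussian neighbourhood around the winner plays the role of
    # short-range excitation; its width shrinks over training.
    sigma = 3.0 * (1 - t / T) + 0.5
    h = np.exp(-(np.arange(n_units) - winner) ** 2 / (2 * sigma ** 2))
    w += 0.1 * h[:, None] * (x - w)           # neighbours move together

# Units preferring the same class end up spatially adjacent on the map.
labels = np.argmin(np.linalg.norm(w[:, None] - centers[None], axis=2), axis=1)
```

Because neighbours are updated together, units that respond to the same class form one contiguous block on the map, which is the topographic clustering the abstract reports in the output layer.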


Subject(s)
Facial Recognition/physiology , Learning/physiology , Neural Networks, Computer , Neurons/physiology , Pattern Recognition, Visual/physiology , Primates , Sex , Visual Pathways/physiology , Animals , Brain/physiology , Computer Simulation , Female , Male , Visual Perception/physiology
8.
Psychol Rev ; 123(6): 696-739, 2016 11.
Article in English | MEDLINE | ID: mdl-27797539

ABSTRACT

Experimental studies have shown that neurons at an intermediate stage of the primate ventral visual pathway, occipital face area, encode individual facial parts such as eyes and nose while neurons in the later stages, middle face patches, are selective to the full face by encoding the spatial relations between facial features. We have performed a computer modeling study to investigate how these cell firing properties may develop through unsupervised visually guided learning. A hierarchical neural network model of the primate's ventral visual pathway is trained by presenting many randomly generated faces to the network while a local learning rule modifies the strengths of the synaptic connections between neurons in successive layers. After training, the model is found to have developed the experimentally observed cell firing properties. In particular, we have shown how the visual system forms separate representations of facial features such as the eyes, nose, and mouth as well as monotonically tuned representations of the spatial relationships between these facial features. We also demonstrated how the primate brain learns to represent facial expression independently of facial identity. Furthermore, based on the simulation results, we propose that neurons encoding different global attributes simply represent different spatial relationships between local features with monotonic tuning curves or particular combinations of these spatial relations.


Subject(s)
Facial Recognition/physiology , Neural Networks, Computer , Visual Pathways/physiology , Animals , Brain/physiology , Computer Simulation , Facial Expression , Humans , Neurons/physiology , Primates
9.
Neurobiol Learn Mem ; 136: 147-165, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27743879

ABSTRACT

As Rubin's famous vase demonstrates, our visual perception tends to assign luminance contrast borders to one or other of the adjacent image regions. Experimental evidence for the neuronal coding of such border-ownership in the primate visual system has been reported in neurophysiology. We have investigated exactly how such neural circuits may develop through visually-guided learning. More specifically, we have investigated through computer simulation how top-down connections may play a fundamental role in the development of border ownership representations in the early cortical visual layers V1/V2. Our model consists of a hierarchy of competitive neuronal layers, with both bottom-up and top-down synaptic connections between successive layers, and the synaptic connections are self-organised by a biologically plausible, temporal trace learning rule during training on differently shaped visual objects. The simulations reported in this paper have demonstrated that top-down connections may help to guide competitive learning in lower layers, thus driving the formation of lower level (border ownership) visual representations in V1/V2 that are modulated by higher level (object boundary element) representations in V4. Lastly we investigate the limitations of our model in the more general situation where multiple objects are presented to the network simultaneously.


Subject(s)
Computer Simulation , Learning/physiology , Neural Networks, Computer , Visual Cortex/physiology , Visual Perception/physiology , Animals , Humans
10.
J Physiol ; 594(22): 6527-6534, 2016 11 15.
Article in English | MEDLINE | ID: mdl-27479741

ABSTRACT

Maintaining a sense of direction requires combining information from static environmental landmarks with dynamic information about self-motion. This is accomplished by the head direction system, whose neurons - head direction cells - encode specific head directions. When the brain integrates information in sensory domains, this process is almost always 'optimal' - that is, inputs are weighted according to their reliability. Evidence suggests cue combination by head direction cells may also be optimal. The simplicity of the head direction signal, together with the detailed knowledge we have about the anatomy and physiology of the underlying circuit, therefore makes this system a tractable model with which to discover how optimal cue combination occurs at a neural level. In the head direction system, cue interactions are thought to occur on an attractor network of interacting head direction neurons, but attractor dynamics predict a winner-take-all decision between cues, rather than optimal combination. However, optimal cue combination in an attractor could be achieved via plasticity in the feedforward connections from external sensory cues (i.e. the landmarks) onto the ring attractor. Short-term plasticity would allow rapid re-weighting that adjusts the final state of the network in accordance with cue reliability (reflected in the connection strengths), while longer term plasticity would allow long-term learning about this reliability. Although these principles were derived to model the head direction system, they could potentially serve to explain optimal cue combination in other sensory systems more generally.
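"Optimal" cue combination in the sense used above is usually formalized as inverse-variance weighting: each cue is weighted by its reliability, and the combined estimate is more reliable than either cue alone. The numbers below are toy values, and circular statistics are ignored for clarity:

```python
import numpy as np

def combine(estimates, sigmas):
    """Inverse-variance (maximum-likelihood) combination of Gaussian cues."""
    prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2  # reliabilities
    weights = prec / prec.sum()
    est = float(np.dot(weights, estimates))
    sigma = float(1.0 / np.sqrt(prec.sum()))           # combined uncertainty
    return est, sigma

# Landmark says 90 deg (reliable, sigma 5); self-motion says 100 deg (sigma 10).
est, sigma = combine([90.0, 100.0], [5.0, 10.0])
print(est)    # 92.0: pulled toward the more reliable landmark cue
print(sigma)  # ~4.47: smaller than either cue's sigma
```

In the attractor account sketched in the abstract, the feedforward connection strengths from landmark cues would play the role of these weights, with plasticity adjusting them to track cue reliability.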


Subject(s)
Head/physiology , Learning/physiology , Sensation/physiology , Animals , Brain/physiology , Cues , Humans , Models, Neurological , Motion Perception/physiology , Neurons/physiology , Space Perception/physiology
11.
Network ; 27(1): 29-51, 2016.
Article in English | MEDLINE | ID: mdl-27253452

ABSTRACT

Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.


Subject(s)
Ocular Physiological Phenomena , Primates , Animals , Hand , Humans , Learning , Neural Networks, Computer , Neurons
12.
Front Comput Neurosci ; 10: 24, 2016.
Article in English | MEDLINE | ID: mdl-27047368

ABSTRACT

Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
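The missing-fundamental case mentioned above can be illustrated with a standard autocorrelation account of pitch (a textbook stand-in, not the paper's network): a sound containing only harmonics 2-4 of 200 Hz still repeats every 5 ms, so its pitch is heard at 200 Hz even though no energy is present there.

```python
import numpy as np

fs = 16000                       # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of signal
# Harmonics 2, 3, 4 of a 200 Hz fundamental; the fundamental is absent.
sound = sum(np.sin(2 * np.pi * 200 * h * t) for h in (2, 3, 4))

# Autocorrelation peaks at the common period of the harmonics.
ac = np.correlate(sound, sound, mode='full')[len(sound) - 1:]
lag = np.argmax(ac[40:400]) + 40  # search lags for pitches 40-400 Hz
print(fs / lag)  # 200.0: the pitch sits at the missing fundamental
```

Cochlear-inspired models replace this explicit autocorrelation with periodicity cues carried by spike timing, but the underlying invariant, a shared 5 ms periodicity across spectral profiles, is the same.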

13.
Vision Res ; 119: 16-28, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26774861

ABSTRACT

In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles.


Subject(s)
Form Perception/physiology , Learning/physiology , Models, Neurological , Visual Cortex/physiology , Visual Pathways/physiology , Animals , Computer Simulation , Humans , Photic Stimulation , Primates
14.
Front Comput Neurosci ; 9: 147, 2015.
Article in English | MEDLINE | ID: mdl-26696876

ABSTRACT

Neurons that respond to visual targets in a hand-centered frame of reference have been found within various areas of the primate brain. We investigate how hand-centered visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organization. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented in different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centered representations with a single localized receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localized receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centered receptive fields decreased their shape selectivity and started responding to a localized region of hand-centered space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localized, hand-centered receptive fields could emerge under more ecologically realistic visual training conditions.

15.
Front Comput Neurosci ; 9: 100, 2015.
Article in English | MEDLINE | ID: mdl-26300766

ABSTRACT

Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object.

16.
Article in English | MEDLINE | ID: mdl-25705190

ABSTRACT

Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally-derived information (path integration). In this simulation study we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time after stimulation it takes for a neuron to start firing, and the presence of symmetric non-offset within-layer recurrent collateral connectivity. On the basis of the latter, the important prediction is made that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect for a given rise time is found to be more severe in the context of short conduction delays. Further work is suggested on the basis of our findings, which represent a valuable contribution to our understanding of the head direction cell system.

17.
Article in English | MEDLINE | ID: mdl-25717301

ABSTRACT

We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.

18.
Biol Cybern ; 109(2): 215-39, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25488769

ABSTRACT

Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
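The spike-timing-dependent plasticity used to train the feed-forward connections can be sketched with the textbook pair-based kernel; the exact kernel and parameters of the paper are not given in the abstract, so the values below are illustrative:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre in milliseconds."""
    if dt_ms > 0:    # pre leads post: causal pairing, potentiate (LTP)
        return a_plus * np.exp(-dt_ms / tau)
    elif dt_ms < 0:  # post leads pre: anti-causal pairing, depress (LTD)
        return -a_minus * np.exp(dt_ms / tau)
    return 0.0

print(stdp_dw(10.0))   # positive: potentiation
print(stdp_dw(-10.0))  # negative: depression
```

With `a_minus` slightly larger than `a_plus`, uncorrelated pre/post firing depresses on average, so only inputs that reliably precede the output spike, here those within one perceptual cycle, keep their synapses strong.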


Subject(s)
Learning/physiology , Models, Neurological , Nerve Net/physiology , Neuronal Plasticity/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Action Potentials , Animals , Computer Simulation , Cues , Interneurons/physiology , Neural Networks, Computer , Neurons/physiology , Photic Stimulation , Primates
19.
Network ; 25(3): 116-36, 2014.
Article in English | MEDLINE | ID: mdl-24992518

ABSTRACT

We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model which was trained under ecological training conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of the self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time where visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes where multiple simultaneous targets were always visible. Model self-organization was successful, despite never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic. With suitable changes to the learning rule, it self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.


Subject(s)
Models, Neurological , Neural Networks, Computer , Visual Perception/physiology
20.
Article in English | MEDLINE | ID: mdl-24659956

ABSTRACT

Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we use computational modeling to address how the cortex processes color information, proposing a possible mechanism for the development of the patchy distribution of color selectivity. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red-green and blue-yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell).
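One common formulation of the opponent decomposition described above (an assumed textbook form, since the abstract does not give the exact transform) expresses an RGB input as a luminance channel plus red-green and blue-yellow opponent channels:

```python
import numpy as np

def to_opponent(rgb):
    """Decompose an RGB triple into luminance, red-green, and
    blue-yellow opponent channels (illustrative coefficients)."""
    r, g, b = rgb
    luminance = (r + g + b) / 3.0       # achromatic channel
    red_green = r - g                   # positive for reddish input
    blue_yellow = b - (r + g) / 2.0     # positive for bluish input
    return np.array([luminance, red_green, blue_yellow])

print(to_opponent(np.array([1.0, 0.0, 0.0])))
# pure red: positive red-green, negative blue-yellow
```

Gray inputs produce zero in both opponent channels, which is what lets the luminance and chromatic pathways carry largely independent signals into the simulated optic nerve.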


Subject(s)
Action Potentials/physiology , Color Perception/physiology , Neuronal Plasticity/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Animals , Computer Simulation , Geniculate Bodies/physiology , Models, Neurological , Neurons/physiology , Photic Stimulation