1.
Neural Netw ; 155: 258-286, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36081198

ABSTRACT

We approach the issue of robust machine vision by presenting a novel deep-learning architecture, inspired by work in theoretical neuroscience on how the primate brain performs visual feature binding. Feature binding describes how separately represented features are encoded in a relationally meaningful way, such as an edge composing part of the larger contour of an object. We propose that the absence of such representations from current models might partly explain their vulnerability to small, often humanly-imperceptible distortions known as adversarial examples. It has been proposed that adversarial examples are a result of 'off-manifold' perturbations of images. Our novel architecture is designed to approximate hierarchical feature binding, providing explicit representations in these otherwise vulnerable directions. Having introduced these representations into convolutional neural networks, we provide empirical evidence of enhanced robustness against a broad range of L0, L2 and L∞ attacks, particularly in the black-box setting. While we eventually report that the model remains vulnerable to a sufficiently powerful attacker (i.e. the defense can be broken), we demonstrate that our main results cannot be accounted for by trivial, false robustness (gradient masking). Analysis of the representational geometry of our architectures shows a positive relationship between hierarchical binding, expanded manifolds, and robustness. Through hyperparameter manipulation, we find evidence that robustness emerges through the preservation of general low-level information alongside more abstract features, rather than by capturing which specific low-level features drove the abstract representation. Finally, we propose how hierarchical binding relates to the observation that, under appropriate viewing conditions, humans show sensitivity to adversarial examples.
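The L∞ threat model referred to above can be illustrated with a one-step gradient-sign perturbation (FGSM-style). This is a generic sketch of an L∞-bounded attack, not the attack or the defence used in the paper; the toy linear "model" and all parameter values are illustrative assumptions.

```python
import numpy as np

def fgsm_linf(x, grad, eps):
    """One-step L-infinity-bounded perturbation: move each pixel by
    +/- eps in the direction that increases the loss, then clip to the
    valid image range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: for a linear "loss" w.x, the gradient w.r.t. x is just w.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)   # a tiny hypothetical "image"
w = rng.normal(size=16)              # stand-in for the loss gradient
eps = 0.05

x_adv = fgsm_linf(x, w, eps)
```

By construction no coordinate moves by more than eps, which is exactly the "small, often humanly-imperceptible" budget an L∞ attacker is granted.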


Subject(s)
Brain , Neural Networks, Computer , Humans
2.
Cereb Cortex Commun ; 3(1): tgab052, 2022.
Article in English | MEDLINE | ID: mdl-35047822

ABSTRACT

Place and head-direction (HD) cells are fundamental to maintaining accurate representations of location and heading in the mammalian brain across sensory conditions, and are thought to underlie path integration-the ability to maintain an accurate representation of location and heading during motion in the dark. Substantial evidence suggests that both populations of spatial cells function as attractor networks, but their developmental mechanisms are poorly understood. We present simulations of a fully self-organizing attractor network model of this process using well-established neural mechanisms. We show that the differential development of the two cell types can be explained by their different idiothetic inputs, even given identical visual signals: HD cells develop when the population receives angular head velocity input, whereas place cells develop when the idiothetic input encodes planar velocity. Our model explains the functional importance of conjunctive "state-action" cells, implying that signal propagation delays and a competitive learning mechanism are crucial for successful development. Consequently, we explain how insufficiently rich environments result in pathology: place cell development requires proximal landmarks; conversely, HD cells require distal landmarks. Finally, our results suggest that both networks are instantiations of general mechanisms, and we describe their implications for the neurobiology of spatial processing.

3.
Network ; 31(1-4): 37-141, 2020.
Article in English | MEDLINE | ID: mdl-32746663

ABSTRACT

Many researchers have tried to model how environmental knowledge is learned by the brain and used in the form of cognitive maps. However, previous work was limited in various important ways: there was little consensus on how these cognitive maps were formed and represented, the planning mechanism was inherently limited to performing relatively simple tasks, and there was little consideration of how these mechanisms would scale up. This paper makes several significant advances. Firstly, the planning mechanism used by the majority of previous work propagates a decaying signal through the network to create a gradient that points towards the goal. However, this decaying signal limits the scale and complexity of tasks that can be solved in this manner. Here we propose several ways in which a network can self-organize a novel planning mechanism that does not require decaying activity. We also extend this model with a hierarchical planning mechanism: a layer of cells that identify frequently-used sequences of actions and reuse them to significantly increase the efficiency of planning. We speculate that our results may explain the apparent ability of humans and animals to perform model-based planning on both small and large scales without a noticeable loss of efficiency.
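The conventional decaying-signal planner that this work improves upon can be sketched as follows: spread a decaying activation outward from the goal, then climb the resulting gradient. The graph, decay factor, and iteration count below are illustrative assumptions, not the paper's network.

```python
import numpy as np

def goal_gradient(adjacency, goal, decay=0.8, iters=50):
    """Propagate a decaying activation from the goal through the graph.
    Each node takes decay * (max neighbour activation); the goal is
    clamped to 1, so activation rises monotonically toward the goal."""
    n = len(adjacency)
    act = np.zeros(n)
    act[goal] = 1.0
    for _ in range(iters):
        spread = decay * (adjacency * act[None, :]).max(axis=1)
        act = np.maximum(spread, act)
        act[goal] = 1.0
    return act

def greedy_path(adjacency, act, start, goal, max_steps=100):
    """Follow the activation gradient uphill from start to goal."""
    path = [start]
    node = start
    for _ in range(max_steps):
        if node == goal:
            break
        neighbours = np.flatnonzero(adjacency[node])
        node = max(neighbours, key=lambda j: act[j])
        path.append(node)
    return path

# A 5-node chain 0 - 1 - 2 - 3 - 4, with the goal at node 4.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
act = goal_gradient(A, goal=4)
path = greedy_path(A, act, start=0, goal=4)
```

The weakness the abstract identifies is visible here: with decay 0.8, the signal at a node n steps from the goal is 0.8^n, which vanishes quickly on large graphs.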


Subject(s)
Algorithms , Brain Mapping/methods , Brain/physiology , Cognition/physiology , Neural Networks, Computer , Animals , Humans
4.
Front Neural Circuits ; 14: 30, 2020.
Article in English | MEDLINE | ID: mdl-32528255

ABSTRACT

The responses of many cortical neurons to visual stimuli are modulated by the position of the eye. This form of gain modulation by eye position does not change the retinotopic selectivity of the responses, but only changes the amplitude of the responses. Particularly in the case of cortical responses, this form of eye position gain modulation has been observed to be multiplicative. Multiplicative gain modulated responses are crucial to encode information that is relevant to high-level visual functions, such as stable spatial awareness, eye movement planning, visual-motor behaviors, and coordinate transformation. Here we first present a hardwired model of different functional forms of gain modulation, including peaked and monotonic modulation by eye position. We use a biologically realistic Gaussian function to model the influence of the position of the eye on the internal activation of visual neurons. Next we show how different functional forms of gain modulation by eye position may develop in a self-organizing neural network model of visual neurons. A further contribution of our work is the investigation of the influence of the width of the eye position tuning curve on the development of a variety of forms of eye position gain modulation. Our simulation results show how the width of the eye position tuning curve affects the development of different forms of gain modulation of visual responses by the position of the eye.
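A minimal sketch of the multiplicative gain modulation described above, in the spirit of the hardwired model: a Gaussian retinotopic tuning curve scaled by a Gaussian gain field over eye position. All tuning widths and preferred values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def visual_response(retinal_pos, eye_pos, pref_ret, pref_eye,
                    sigma_ret=1.0, sigma_eye=10.0):
    """Multiplicative eye-position gain modulation: the Gaussian gain
    field scales response amplitude without changing the retinotopic
    selectivity (the peak location on the retina stays put)."""
    retinal = np.exp(-(retinal_pos - pref_ret) ** 2 / (2 * sigma_ret ** 2))
    gain = np.exp(-(eye_pos - pref_eye) ** 2 / (2 * sigma_eye ** 2))
    return gain * retinal

# Same retinal stimulus at two eye positions: amplitude changes,
# but the preferred retinal location does not move.
stim = np.linspace(-5, 5, 101)
r1 = visual_response(stim, eye_pos=0.0, pref_ret=0.0, pref_eye=0.0)
r2 = visual_response(stim, eye_pos=10.0, pref_ret=0.0, pref_eye=0.0)
```

Making `sigma_eye` small gives peaked eye-position modulation; making it very large over the tested range approximates the monotonic form, which is the hyperparameter manipulation the abstract discusses.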


Subject(s)
Eye Movements/physiology , Neural Networks, Computer , Neurons/physiology , Visual Cortex/cytology , Visual Cortex/physiology , Visual Fields/physiology , Humans , Normal Distribution , Photic Stimulation/methods , Visual Perception/physiology
5.
Folia Primatol (Basel) ; 91(4): 417-432, 2020.
Article in English | MEDLINE | ID: mdl-32069456

ABSTRACT

Gut passage time of food has consequences for primate digestive strategies, which subsequently affect seed dispersal. Seed dispersal models are critical in understanding plant population and community dynamics through estimation of seed dispersal distances, combining movement data with gut passage times. Thus, developing methods to collect in situ data on gut passage time is of great importance. Here we present a first attempt to develop an in situ study of gut passage time in an arboreal forest guenon, the samango monkey (Cercopithecus albogularis schwarzi) in the Soutpansberg Mountains, South Africa. Cercopithecus spp. consume large proportions of fruit and are important seed dispersers. However, previous studies on gut passage times have been conducted only on captive Cercopithecus spp. subjects, where movement is restricted, and diets are generally dissimilar to those observed in the wild. Using artificial digestive markers, we targeted provisioning of a male and a female samango monkey 4 times over 3 and 4 days, respectively. We followed the focal subjects from dawn until dusk following each feeding event, collecting faecal samples and recording the date and time of deposition and the number of markers found in each faecal sample. We recovered 6.61 ± 4 and 13 ± 9% of markers from the male and the female, respectively, and were able to estimate a gut passage window of 16.63-25.12 h from 3 of the 8 trials. We discuss methodological issues to help future researchers to develop in situ studies on gut passage times.


Subject(s)
Cercopithecus/physiology , Digestion/physiology , Physiology/methods , Animals , Animals, Wild , Biomarkers , Feces/chemistry , Female , Male , South Africa
6.
PLoS One ; 13(11): e0207961, 2018.
Article in English | MEDLINE | ID: mdl-30496225

ABSTRACT

We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule, but only under limited conditions. Specifically it fails when incorporating visual input neurons with monotonic gain modulation by eye-position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye-position.
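The standard trace learning rule that the paper modifies can be sketched as a Hebbian update gated by a temporal trace of postsynaptic activity. The learning rate, trace decay, and weight normalization below are generic textbook choices, not the paper's specific variants.

```python
import numpy as np

def trace_learning(X, w, alpha=0.1, eta=0.8):
    """Standard trace rule: the postsynaptic trace y_bar is a running
    average of recent activity, so inputs that occur close together in
    time are bound onto the same weights.
        y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1)
        dw       = alpha * y_bar(t) * x(t)
    """
    y_bar = 0.0
    for x in X:
        y = float(np.dot(w, x))          # postsynaptic activation
        y_bar = (1 - eta) * y + eta * y_bar
        w = w + alpha * y_bar * x        # Hebbian update gated by the trace
        w = w / np.linalg.norm(w)        # keep the weight vector bounded
    return w

# Hypothetical demo: a short random sequence of non-negative input patterns.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 8))
w = trace_learning(X, np.ones(8) / np.sqrt(8))
```

The abstract's point is that this standard form fails under monotonic eye-position gain modulation; the modified rules it proposes alter how `y` and `y_bar` enter the update, which is exactly the part of this sketch one would swap out.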


Subject(s)
Visual Pathways/anatomy & histology , Visual Pathways/physiology , Visual Perception/physiology , Algorithms , Animals , Eye Movements/physiology , Learning , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons , Primates/physiology , Vision, Ocular/physiology
7.
Proc Natl Acad Sci U S A ; 115(35): 8811-8816, 2018 08 28.
Article in English | MEDLINE | ID: mdl-30104349

ABSTRACT

Despite growing awareness about its detrimental effects on tropical biodiversity, land conversion to oil palm continues to increase rapidly as a consequence of global demand, profitability, and the income opportunity it offers to producing countries. Although most industrial oil palm plantations are located in Southeast Asia, it is argued that much of their future expansion will occur in Africa. We assessed how this could affect the continent's primates by combining information on oil palm suitability and current land use with primate distribution, diversity, and vulnerability. We also quantified the potential impact of large-scale oil palm cultivation on primates in terms of range loss under different expansion scenarios taking into account future demand, oil palm suitability, human accessibility, carbon stock, and primate vulnerability. We found a high overlap between areas of high oil palm suitability and areas of high conservation priority for primates. Overall, we found only a few small areas where oil palm could be cultivated in Africa with a low impact on primates (3.3 Mha, including all areas suitable for oil palm). These results warn that, consistent with the dramatic effects of palm oil cultivation on biodiversity in Southeast Asia, reconciling a large-scale development of oil palm in Africa with primate conservation will be a great challenge.


Subject(s)
Arecaceae/growth & development , Biodiversity , Conservation of Natural Resources , Crops, Agricultural/growth & development , Primates/physiology , Africa , Animals
8.
Interface Focus ; 8(4): 20180021, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-29951198

ABSTRACT

We discuss a recently proposed approach to solve the classic feature-binding problem in primate vision that uses neural dynamics known to be present within the visual cortex. Broadly, the feature-binding problem in the visual context concerns not only how a hierarchy of features such as edges and objects within a scene are represented, but also the hierarchical relationships between these features at every spatial scale across the visual field. This is necessary for the visual brain to be able to make sense of its visuospatial world. Solving this problem is an important step towards the development of artificial general intelligence. In neural network simulation studies, it has been found that neurons encoding the binding relations between visual features, known as binding neurons, emerge during visual training when key properties of the visual cortex are incorporated into the models. These biological network properties include (i) bottom-up, lateral and top-down synaptic connections, (ii) spiking neuronal dynamics, (iii) spike timing-dependent plasticity, and (iv) a random distribution of axonal transmission delays (of the order of several milliseconds) in the propagation of spikes between neurons. After training the network on a set of visual stimuli, modelling studies have reported observing the gradual emergence of polychronization through successive layers of the network, in which subpopulations of neurons have learned to emit their spikes in regularly repeating spatio-temporal patterns in response to specific visual stimuli. Such a subpopulation of neurons is known as a polychronous neuronal group (PNG). Some neurons embedded within these PNGs receive convergent inputs from neurons representing lower- and higher-level visual features, and thus appear to encode the hierarchical binding relationship between features. 
Neural activity with this kind of spatio-temporal structure robustly emerges in the higher network layers even when neurons in the input layer represent visual stimuli with spike timings that are randomized according to a Poisson distribution. The resulting hierarchical representation of visual scenes in such models, including the representation of hierarchical binding relations between lower- and higher-level visual features, is consistent with the hierarchical phenomenology or subjective experience of primate vision and is distinct from approaches interested in segmenting a visual scene into a finite set of objects.
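The spike timing-dependent plasticity listed in property (iii) is commonly modelled with an exponential window over the pre/post spike-time difference. The amplitudes and time constant below are illustrative defaults, not the values used in these simulations.

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Exponential STDP window, dt = t_post - t_pre in milliseconds.
    Pre-before-post pairings (dt > 0) potentiate the synapse;
    post-before-pre pairings (dt < 0) depress it."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Causal pairings strengthen, anti-causal pairings weaken.
assert stdp(5.0) > 0 > stdp(-5.0)
```

With the randomly distributed axonal delays of property (iv), this rule effectively selects which delayed pathways carry reliably causal spikes, which is what lets polychronous groups stabilise.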

9.
Psychol Rev ; 125(4): 545-571, 2018 07.
Article in English | MEDLINE | ID: mdl-29863378

ABSTRACT

We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle."


Subject(s)
Models, Theoretical , Neural Networks, Computer , Neurons , Visual Perception , Animals
10.
Neural Comput ; 30(7): 1801-1829, 2018 07.
Article in English | MEDLINE | ID: mdl-29652586

ABSTRACT

It is well known that auditory nerve (AN) fibers overcome bandwidth limitations through the volley principle, a form of multiplexing. What is less well known is that the volley principle introduces a degree of unpredictability into AN neural firing patterns that may be affecting even simple stimulus categorization learning. We use a physiologically grounded, unsupervised spiking neural network model of the auditory brain with spike time dependent plasticity learning to demonstrate that plastic auditory cortex is unable to learn even simple auditory object categories when exposed to the raw AN firing input without subcortical preprocessing. We then demonstrate the importance of nonplastic subcortical preprocessing within the cochlear nucleus and the inferior colliculus for stabilizing and denoising AN responses. Such preprocessing enables the plastic auditory cortex to learn efficient robust representations of the auditory object categories. The biological realism of our model makes it suitable for generating neurophysiologically testable hypotheses.
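The volley principle mentioned above can be sketched as a population of phase-locked fibres that each fire on only a fraction of stimulus cycles, yet jointly mark (almost) every cycle. The fibre count and per-cycle firing probability are arbitrary illustrative choices, and this toy omits the unpredictability in spike timing that the abstract identifies as the problem for downstream learning.

```python
import numpy as np

def volley_coverage(n_cycles, n_fibers, fire_prob=0.4, seed=0):
    """Volley principle sketch: each auditory-nerve fibre phase-locks to
    the tone but refractoriness lets it fire on only a random subset of
    cycles. Return how many cycles are marked by at least one fibre."""
    rng = np.random.default_rng(seed)
    marked = set()
    for _ in range(n_fibers):
        fires = rng.random(n_cycles) < fire_prob   # cycles this fibre marks
        marked.update(np.flatnonzero(fires).tolist())
    return len(marked)

# 20 cycles of a tone, 20 fibres: no single fibre follows every cycle,
# but pooled across the population nearly every cycle is represented.
covered = volley_coverage(n_cycles=20, n_fibers=20)
```

The flip side, which motivates the paper's subcortical preprocessing stage, is that exactly *which* fibres mark each cycle varies from trial to trial.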


Subject(s)
Cochlear Nerve/physiology , Cochlear Nucleus/physiology , Inferior Colliculi/physiology , Learning/physiology , Models, Neurological , Pattern Recognition, Physiological/physiology , Action Potentials/physiology , Animals , Auditory Pathways/physiology , Computer Simulation , Haplorhini , Humans , Neural Networks, Computer , Neuronal Plasticity/physiology , Neurons/physiology , Rats , Synapses/physiology , Time Factors
11.
Network ; 29(1-4): 37-69, 2018.
Article in English | MEDLINE | ID: mdl-30905280

ABSTRACT

The head direction (HD) system signals HD in an allocentric frame of reference. The system is able to update firing based on internally derived information about self-motion, a process known as path integration. Of particular interest is how path integration might maintain concordance between true HD and internally represented HD. Here we present a self-sustaining two-layer model, capable of self-organizing, which produces extremely accurate path integration. The implications of this work for future investigations of HD system path integration are discussed.
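In the ideal case, the path integration described above reduces to integrating angular head velocity over time. The sketch below is a perfect mathematical integrator, not the self-organizing two-layer attractor model presented in the paper; it shows only the computation the attractor network must approximate.

```python
import numpy as np

def path_integrate(hd0, angular_velocity, dt=0.01):
    """Update an internal head-direction estimate by integrating angular
    head velocity (rad/s), wrapping the heading to [0, 2*pi)."""
    hd = hd0
    trajectory = [hd]
    for w in angular_velocity:
        hd = (hd + w * dt) % (2 * np.pi)
        trajectory.append(hd)
    return np.array(trajectory)

# Turn at a constant 90 deg/s (pi/2 rad/s) for one second, in 10 ms steps:
# the internal heading should advance by pi/2.
omega = np.full(100, np.pi / 2)
traj = path_integrate(0.0, omega)
```

Concordance between true HD and internally represented HD is exactly the question of how closely the attractor's drift-prone dynamics track this noise-free integral.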


Subject(s)
Models, Neurological , Motion Perception/physiology , Neural Pathways/physiology , Neurons/physiology , Space Perception/physiology , Action Potentials/physiology , Animals , Computer Simulation , Head , Head Movements/physiology , Humans , Nerve Net/physiology , Nonlinear Dynamics
12.
PLoS One ; 12(8): e0180174, 2017.
Article in English | MEDLINE | ID: mdl-28797034

ABSTRACT

The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.


Subject(s)
Auditory Cortex/physiology , Computer Simulation , Models, Neurological , Nerve Net/physiology , Vocabulary , Action Potentials , Cochlear Nucleus/physiology , Humans , Learning , Neuronal Plasticity , Synapses/physiology
13.
PLoS One ; 12(5): e0178304, 2017.
Article in English | MEDLINE | ID: mdl-28562618

ABSTRACT

A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.


Subject(s)
Hand , Learning/physiology , Primates/physiology , Visual Pathways/physiology , Animals , Models, Neurological , Nerve Net
14.
Psychol Rev ; 124(2): 154-167, 2017 03.
Article in English | MEDLINE | ID: mdl-28068117

ABSTRACT

We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system.


Subject(s)
Facial Recognition/physiology , Learning/physiology , Neural Networks, Computer , Neurons/physiology , Pattern Recognition, Visual/physiology , Primates , Sex , Visual Pathways/physiology , Animals , Brain/physiology , Computer Simulation , Female , Male , Visual Perception/physiology
15.
J Consult Clin Psychol ; 85(3): 200-217, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27991805

ABSTRACT

[Correction Notice: An Erratum for this article was reported in Vol 85(3) of Journal of Consulting and Clinical Psychology (see record 2017-07144-002). In the article, there was an error in the Discussion section's first paragraph for Implications and Future Work. The in-text reference citation for Penton-Voak et al. (2013) was incorrectly listed as "Blumenfeld, Preminger, Sagi, and Tsodyks (2006)". All versions of this article have been corrected.] Objective: Cognitive bias modification (CBM) eliminates cognitive biases toward negative information and is efficacious in reducing depression recurrence, but the mechanisms behind the bias elimination are not fully understood. The present study investigated, through computer simulation of neural network models, the neural dynamics underlying the use of CBM in eliminating the negative biases in the way that depressed patients evaluate facial expressions. METHOD: We investigated 2 new CBM methodologies using biologically plausible synaptic learning mechanisms-continuous transformation learning and trace learning-which guide learning by exploiting either the spatial or temporal continuity between visual stimuli presented during training. We first describe simulations with a simplified 1-layer neural network, and then we describe simulations in a biologically detailed multilayer neural network model of the ventral visual pathway. RESULTS: After training with either the continuous transformation learning rule or the trace learning rule, the 1-layer neural network eliminated biases in interpreting neutral stimuli as sad. The multilayer neural network trained with realistic face stimuli was also shown to be able to use continuous transformation learning or trace learning to reduce biases in the interpretation of neutral stimuli. CONCLUSIONS: The simulation results suggest 2 biologically plausible synaptic learning mechanisms, continuous transformation learning and trace learning, that may subserve CBM. 
The results are highly informative for the development of experimental protocols to produce optimal CBM training methodologies with human participants.


Subject(s)
Cognition/physiology , Computer Simulation , Depressive Disorder, Major/physiopathology , Facial Expression , Mental Processes/physiology , Nerve Net/physiology , Humans , Visual Perception/physiology
16.
Psychol Rev ; 123(6): 696-739, 2016 11.
Article in English | MEDLINE | ID: mdl-27797539

ABSTRACT

Experimental studies have shown that neurons at an intermediate stage of the primate ventral visual pathway, occipital face area, encode individual facial parts such as eyes and nose while neurons in the later stages, middle face patches, are selective to the full face by encoding the spatial relations between facial features. We have performed a computer modeling study to investigate how these cell firing properties may develop through unsupervised visually guided learning. A hierarchical neural network model of the primate's ventral visual pathway is trained by presenting many randomly generated faces to the network while a local learning rule modifies the strengths of the synaptic connections between neurons in successive layers. After training, the model is found to have developed the experimentally observed cell firing properties. In particular, we have shown how the visual system forms separate representations of facial features such as the eyes, nose, and mouth as well as monotonically tuned representations of the spatial relationships between these facial features. We also demonstrated how the primate brain learns to represent facial expression independently of facial identity. Furthermore, based on the simulation results, we propose that neurons encoding different global attributes simply represent different spatial relationships between local features with monotonic tuning curves or particular combinations of these spatial relations.


Subject(s)
Facial Recognition/physiology , Neural Networks, Computer , Visual Pathways/physiology , Animals , Brain/physiology , Computer Simulation , Facial Expression , Humans , Neurons/physiology , Primates
17.
Neurobiol Learn Mem ; 136: 147-165, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27743879

ABSTRACT

As Rubin's famous vase demonstrates, our visual perception tends to assign luminance contrast borders to one or other of the adjacent image regions. Experimental evidence for the neuronal coding of such border-ownership in the primate visual system has been reported in neurophysiology. We have investigated exactly how such neural circuits may develop through visually-guided learning. More specifically, we have investigated through computer simulation how top-down connections may play a fundamental role in the development of border ownership representations in the early cortical visual layers V1/V2. Our model consists of a hierarchy of competitive neuronal layers, with both bottom-up and top-down synaptic connections between successive layers, and the synaptic connections are self-organised by a biologically plausible, temporal trace learning rule during training on differently shaped visual objects. The simulations reported in this paper have demonstrated that top-down connections may help to guide competitive learning in lower layers, thus driving the formation of lower level (border ownership) visual representations in V1/V2 that are modulated by higher level (object boundary element) representations in V4. Lastly we investigate the limitations of our model in the more general situation where multiple objects are presented to the network simultaneously.


Subject(s)
Computer Simulation , Learning/physiology , Neural Networks, Computer , Visual Cortex/physiology , Visual Perception/physiology , Animals , Humans
18.
J Physiol ; 594(22): 6527-6534, 2016 11 15.
Article in English | MEDLINE | ID: mdl-27479741

ABSTRACT

Maintaining a sense of direction requires combining information from static environmental landmarks with dynamic information about self-motion. This is accomplished by the head direction system, whose neurons - head direction cells - encode specific head directions. When the brain integrates information in sensory domains, this process is almost always 'optimal' - that is, inputs are weighted according to their reliability. Evidence suggests cue combination by head direction cells may also be optimal. The simplicity of the head direction signal, together with the detailed knowledge we have about the anatomy and physiology of the underlying circuit, therefore makes this system a tractable model with which to discover how optimal cue combination occurs at a neural level. In the head direction system, cue interactions are thought to occur on an attractor network of interacting head direction neurons, but attractor dynamics predict a winner-take-all decision between cues, rather than optimal combination. However, optimal cue combination in an attractor could be achieved via plasticity in the feedforward connections from external sensory cues (i.e. the landmarks) onto the ring attractor. Short-term plasticity would allow rapid re-weighting that adjusts the final state of the network in accordance with cue reliability (reflected in the connection strengths), while longer term plasticity would allow long-term learning about this reliability. Although these principles were derived to model the head direction system, they could potentially serve to explain optimal cue combination in other sensory systems more generally.
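For two independent Gaussian cues, the "optimal" combination referred to above has a simple closed form: each estimate is weighted by its reliability (inverse variance). This sketch ignores the circularity of head direction, which is a reasonable approximation only for small cue conflicts; the numbers are hypothetical.

```python
import numpy as np

def combine_cues(mu1, var1, mu2, var2):
    """Maximum-likelihood combination of two independent Gaussian cues:
    reliability-weighted mean, with a combined variance that is always
    smaller than either cue's variance alone."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    w2 = 1 - w1
    mu = w1 * mu1 + w2 * mu2
    var = 1 / (1 / var1 + 1 / var2)
    return mu, var

# A reliable landmark cue (variance 1) pulls the estimate much harder
# than a noisy self-motion cue (variance 4).
mu, var = combine_cues(10.0, 1.0, 20.0, 4.0)
```

In the proposed mechanism, the feedforward connection strengths onto the ring attractor play the role of the weights `w1` and `w2`, with short-term plasticity re-weighting them as cue reliability changes.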


Subject(s)
Head/physiology , Learning/physiology , Sensation/physiology , Animals , Brain/physiology , Cues , Humans , Models, Neurological , Motion Perception/physiology , Neurons/physiology , Space Perception/physiology
19.
Network ; 27(1): 29-51, 2016.
Article in English | MEDLINE | ID: mdl-27253452

ABSTRACT

Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.


Subject(s)
Ocular Physiological Phenomena , Primates , Animals , Hand , Humans , Learning , Neural Networks, Computer , Neurons
20.
Front Comput Neurosci ; 10: 24, 2016.
Article in English | MEDLINE | ID: mdl-27047368

ABSTRACT

Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch-representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
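One classic (non-neural) way to see why a missing fundamental still has a well-defined pitch is autocorrelation: a harmonic complex repeats with period 1/f0 even when the f0 component itself is absent. This is an illustrative signal-processing sketch, not the unsupervised network in the paper; the sampling rate, duration, and harmonic content are arbitrary choices.

```python
import numpy as np

def autocorr_pitch(signal, fs):
    """Estimate pitch from the first major non-zero-lag peak of the
    autocorrelation: skip past the central peak (to the first negative
    dip), then take the largest remaining peak."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    trough = np.argmax(ac < 0)          # first lag where ac dips below zero
    lag = trough + np.argmax(ac[trough:])
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
# Harmonics 2, 3, and 4 of f0 = 200 Hz; the 200 Hz fundamental is absent.
tone = sum(np.sin(2 * np.pi * 200 * h * t) for h in (2, 3, 4))
pitch = autocorr_pitch(tone, fs)
```

The waveform contains no energy at 200 Hz, yet its autocorrelation peaks at a lag of 5 ms, so the recovered pitch is the missing fundamental, mirroring the perceptual result the network in the paper reproduces.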
