Results 1 - 20 of 26
1.
PLoS One ; 19(5): e0301513, 2024.
Article in English | MEDLINE | ID: mdl-38722934

ABSTRACT

The decision of when to add a new hidden unit or layer is a fundamental challenge for constructive algorithms, and it becomes even more complex in the context of multiple hidden layers. Growing both network width and depth offers a robust framework for capturing more information from the data and modeling more complex representations. With multiple hidden layers, should growth occur sequentially, with hidden units added to only one layer at a time, or in parallel, with hidden units added across multiple layers simultaneously? The effects of growing sequentially or in parallel are investigated using a population dynamics-inspired growing algorithm in a multilayer context. A modified version of the constructive growing algorithm capable of growing in parallel is presented. Sequential and parallel growth methodologies are compared in a three-hidden-layer multilayer perceptron on several benchmark classification tasks. Several variants of these approaches are developed for a more in-depth comparison based on the type of hidden layer initialization and the weight update method employed. Comparisons are then made to another sequential growing approach, Dynamic Node Creation. Growing hidden layers in parallel resulted in performance comparable to or higher than that of the sequential approaches, and promoted the growth of narrower, deeper architectures tailored to the task. Dynamic growth inspired by population dynamics offers the potential to grow the width and depth of deeper neural networks in either a sequential or parallel fashion.
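
To make the sequential/parallel distinction concrete, here is a minimal sketch of the two growth schemes on a three-hidden-layer perceptron. It is only an illustration under stated assumptions: the growth trigger, unit initialization and training loop are placeholders, and the paper's population dynamics-inspired criterion and weight update variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Weight matrices for a fully connected net with the given layer sizes."""
    return [rng.normal(0, 0.1, (n_in, n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def add_unit(weights, layer):
    """Add one hidden unit to hidden layer `layer` (0-indexed).

    The incoming weight matrix gains a column and the outgoing one a row,
    here initialised with small random values (one of several possible
    initialisations)."""
    w_in, w_out = weights[layer], weights[layer + 1]
    weights[layer] = np.hstack([w_in, rng.normal(0, 0.1, (w_in.shape[0], 1))])
    weights[layer + 1] = np.vstack([w_out, rng.normal(0, 0.1, (1, w_out.shape[1]))])

def grow_sequential(weights, step):
    """One growth event: add a unit to a single hidden layer, cycling through layers."""
    n_hidden = len(weights) - 1
    add_unit(weights, step % n_hidden)

def grow_parallel(weights):
    """One growth event: add a unit to every hidden layer simultaneously."""
    for layer in range(len(weights) - 1):
        add_unit(weights, layer)

# Example: a 3-hidden-layer MLP grown over five events under each scheme.
seq = make_mlp([8, 2, 2, 2, 3])
par = make_mlp([8, 2, 2, 2, 3])
for step in range(5):
    grow_sequential(seq, step)   # in practice, growth would be triggered by a
    grow_parallel(par)           # performance- or dynamics-based criterion
print([w.shape for w in seq])
print([w.shape for w in par])
```

Under the sequential scheme only one hidden layer widens per growth event, while the parallel scheme widens all of them at once.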


Subject(s)
Algorithms , Neural Networks, Computer , Humans
2.
Neural Netw ; 167: 244-265, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37660673

ABSTRACT

A multilayered bidirectional associative memory neural network is proposed to account for the learning of nonlinear types of association. The model (denoted MF-BAM) is composed of two modules: the Multi-Feature extracting bidirectional associative memory (MF), which contains several unsupervised network layers, and a modified Bidirectional Associative Memory (BAM), which consists of a single supervised network layer. The MF generates successive feature patterns from the original inputs. These patterns change the relationship between the inputs and targets in a way that the BAM can learn. The model was tested on different nonlinear tasks, such as the N-bit task, the Double Moon task and its variants, and the 3-class spiral task. Behavior was assessed through learning errors, decision regions, and recall performance. Results showed that all tasks could be learned consistently. By manipulating the number of units per layer and the number of unsupervised network layers in the MF, it was possible to change the level of nonlinearity observed in the decision boundaries. Furthermore, results indicated that different behaviors could be obtained from the same set of inputs by using the different generated patterns. These findings are significant as they show how a BAM-inspired model can solve nonlinear tasks in a more cognitively plausible fashion.
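
As a point of reference for the supervised module, the sketch below shows the textbook bipolar BAM with Hebbian (outer-product) learning and bidirectional recall. The multi-feature extracting layers and the paper's modified learning rule are not reproduced here; this is only the classical baseline the model builds on.

```python
import numpy as np

def bam_train(X, Y):
    """Hebbian (outer-product) weights for a classical bipolar BAM.

    X, Y hold associated bipolar (+1/-1) pattern pairs, one pair per row.
    This is the textbook rule, not the modified rule of the paper."""
    return X.T @ Y

def bam_recall(W, x, n_iter=10):
    """Bidirectional recall: iterate x -> y -> x until the pair stabilises."""
    sign = lambda a: np.where(a >= 0, 1, -1)
    for _ in range(n_iter):
        y = sign(x @ W)
        x = sign(y @ W.T)
    return x, y

# Two associated pairs; recall from a corrupted version of the first input.
X = np.array([[1, -1, 1, -1, 1, -1],
              [1, 1, 1, -1, -1, -1]])
Y = np.array([[1, -1, -1, 1],
              [-1, 1, -1, 1]])
W = bam_train(X, Y)
noisy = X[0].copy()
noisy[0] *= -1                      # flip one bit
print(bam_recall(W, noisy)[1])      # recovers Y[0]
```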


Subject(s)
Algorithms , Neural Networks, Computer , Learning , Mental Recall
3.
PLoS One ; 16(1): e0244822, 2021.
Article in English | MEDLINE | ID: mdl-33400724

ABSTRACT

Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. Reliance on an internal representation, rather than purely on external stimuli, is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of those inputs. Experimental and theoretical studies have invested considerable effort in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach under which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to resurface the long-standing implication of working memory in flexible cognition.
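
The "dynamic synapses subject to short-term plasticity" mentioned here are commonly modelled with the Tsodyks-Markram formulation. The sketch below is a minimal discrete-time version of that model with illustrative parameter values; it does not reproduce the paper's spiking network, its long-term plasticity rule, or the robotic embodiment.

```python
import numpy as np

def tsodyks_markram(spikes, dt=1e-3, U=0.2, tau_rec=0.5, tau_fac=0.05):
    """Discrete-time Tsodyks-Markram short-term plasticity.

    `spikes` is a 0/1 presynaptic spike train.  Returns the relative synaptic
    efficacy u*x at each spike time.  Parameter values are illustrative only,
    and u is taken to relax back to the baseline U between spikes (one common
    variant of the model)."""
    x, u = 1.0, U                      # available resources and utilisation
    efficacy = np.zeros(len(spikes))
    for t, s in enumerate(spikes):
        x += dt * (1.0 - x) / tau_rec  # resource recovery between spikes
        u += dt * (U - u) / tau_fac    # utilisation relaxes toward baseline
        if s:
            u += U * (1.0 - u)         # facilitation on a spike
            efficacy[t] = u * x        # transmitted fraction
            x -= u * x                 # depression: resources consumed
    return efficacy

# A regular 20 Hz train: efficacy changes over successive spikes.
train = np.zeros(1000)
train[::50] = 1
print(tsodyks_markram(train)[train == 1][:5].round(3))
```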


Subject(s)
Memory, Short-Term , Models, Neurological , Neuronal Plasticity , Neurons/physiology , Robotics
4.
Comput Intell Neurosci ; 2019: 6989128, 2019.
Article in English | MEDLINE | ID: mdl-31191633

ABSTRACT

Recognizing and tracking the direction of moving stimuli is crucial to the control of much animal behaviour. In this study, we examine whether a bio-inspired model of synaptic plasticity implemented in a robotic agent may allow the discrimination of motion direction of real-world stimuli. Starting with a well-established model of short-term synaptic plasticity (STP), we develop a microcircuit motif of spiking neurons capable of exhibiting preferential and nonpreferential responses to changes in the direction of an orientation stimulus in motion. While the robotic agent processes sensory inputs, the STP mechanism introduces direction-dependent changes in the synaptic connections of the microcircuit, resulting in a population of units that exhibit a typical cortical response property observed in primary visual cortex (V1), namely, direction selectivity. Visually evoked responses from the model are then compared to those observed in multielectrode recordings from V1 in anesthetized macaque monkeys while sinusoidal gratings are displayed on a screen. Overall, the model highlights the role of STP as a complementary mechanism in explaining direction selectivity, and applies these insights in a physical robot as a method for validating this response characteristic observed in experimental data from V1.


Subject(s)
Motion Perception/physiology , Motion , Neuronal Plasticity/physiology , Robotics , Animals , Evoked Potentials, Visual/physiology , Neurons/physiology , Orientation/physiology , Visual Cortex/physiology , Visual Perception/physiology
5.
Front Neurorobot ; 12: 75, 2018.
Article in English | MEDLINE | ID: mdl-30524261

ABSTRACT

Visual motion detection is essential for the survival of many species. The phenomenon involves several spatial properties that are not yet fully understood at the level of a neural circuit. This paper proposes a computational model of a visual motion detector that integrates direction and orientation selectivity features. A recent experiment in the Drosophila model highlights that stimulus orientation influences the neural response of direction cells. However, the nature of this interaction and its significance at the behavioral level are currently unknown. As such, another objective of this article is to study the effect of merging these two visual processes when contextualized in a neuro-robotic model and an operant conditioning procedure. In this work, the learning task was solved using an artificial spiking neural network acting as the brain controller for virtual and physical robots, showing a modulation of behavior arising from the integration of both visual processes.

6.
PLoS One ; 10(7): e0132218, 2015.
Article in English | MEDLINE | ID: mdl-26200767

ABSTRACT

Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and the Feature-Extracting Bidirectional Associative Memory, use images of test patterns identical to those used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image from the extracted components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matched behavioural results substantially better than the associative memory model across several visual properties. These results are interpreted as supporting the hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
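
The decompose-then-reconstruct pipeline described here can be sketched with scikit-learn's FastICA on synthetic stand-in patterns; the actual flower stimuli, the FEBAM variant, and the mapping from reconstruction quality to behavioural preference are not reproduced, so treat this only as an outline of the comparison.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic stand-ins for flower-like patterns: mixtures of a few latent factors.
n_patterns, n_pixels, n_factors = 200, 64, 4
factors = rng.normal(size=(n_factors, n_pixels))
mixing = rng.normal(size=(n_patterns, n_factors))
images = mixing @ factors + 0.05 * rng.normal(size=(n_patterns, n_pixels))

# Decompose into independent components, then reconstruct from them.
ica = FastICA(n_components=n_factors, random_state=0)
sources = ica.fit_transform(images)          # per-pattern component activations
reconstructed = ica.inverse_transform(sources)

# Reconstruction error serves as the proxy for processing cost in the comparison.
mse = np.mean((images - reconstructed) ** 2)
print(f"mean squared reconstruction error: {mse:.4f}")
```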


Subject(s)
Bees/physiology , Visual Perception/physiology , Animals , Neural Networks, Computer
7.
J Comp Psychol ; 129(3): 229-36, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25984936

ABSTRACT

The behavioral experiment herein tests the computational load hypothesis generated by an unsupervised neural network by examining bumblebee (Bombus impatiens) behavior with respect to two visual properties: spatial frequency and symmetry. Untrained, "flower-naïve" bumblebees were hypothesized to prefer symmetry only when the spatial frequency of artificial flowers is high and therefore places great information-processing demands on the bumblebees' visual system. Bumblebee choice behavior was recorded using high-definition motion-sensitive camcorders. The results support the computational model's prediction: 1-axis symmetry influenced bumblebees' preference behavior at both low and high spatial frequency patterns. Additionally, increasing the level of symmetry from 1 axis to 4 axes amplified the preference toward symmetric patterns at both low and high spatial frequencies. The results are discussed in the context of the artificial neural network model and other hypotheses generated from the behavioral literature.


Subject(s)
Bees/physiology , Behavior, Animal/physiology , Choice Behavior/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Animals
11.
Neural Netw ; 32: 46-56, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22406172

ABSTRACT

The Fusiform Face Area (FFA) is the brain region considered to be responsible for face recognition. Prosopagnosia is a brain disorder causing an inability to recognise faces and is said to mainly affect the FFA. We put forward a model that simulates the capacity to retrieve the labels associated with faces and objects depending on the depth of processing of the information. Akin to prosopagnosia, various localised "lesions" were inserted into the network in order to evaluate the degradation of performance. The network is first composed of a Feature-Extracting Bidirectional Associative Memory (FEBAM-SOM) to represent the topological maps allowing the categorisation of all faces. The second component of the network is a Bidirectional Heteroassociative Memory (BHM) that links those representations to their semantic labels. For the latter, specific semantic labels were used as well as more general ones. The inputs were images representing faces and various objects. Just as in the visual perceptual system, the images were pre-processed using a low-pass filter. Results showed that the network is able to associate the extracted map with the correct label information. The network is able to generalise and is robust to noise. Moreover, results showed that the recall performance for names associated with faces decreases with the size of the lesion without affecting performance on the objects. Finally, results obtained with the network are also consistent with human data in that higher-level, more general labels are more robust to lesions than low-level, specific labels.
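
The lesion manipulation itself (removing a growing fraction of connections and measuring how recall degrades) can be sketched generically. The snippet below uses a simple least-squares heteroassociative readout as a stand-in; it is not the FEBAM-SOM/BHM architecture, only an illustration of the lesioning protocol on an arbitrary weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def lesion(W, fraction):
    """Zero out a random `fraction` of the connections in weight matrix W."""
    mask = rng.random(W.shape) >= fraction
    return W * mask

def recall_accuracy(W, X, labels):
    """Fraction of patterns whose strongest output unit is the correct label."""
    return np.mean(np.argmax(X @ W, axis=1) == labels)

# Toy heteroassociative mapping from 50-d patterns to 5 label units.
X = rng.normal(size=(100, 50))
labels = rng.integers(0, 5, size=100)
targets = np.eye(5)[labels]
W = np.linalg.lstsq(X, targets, rcond=None)[0]   # stand-in for learned weights

for frac in (0.0, 0.2, 0.4, 0.6, 0.8):
    acc = recall_accuracy(lesion(W, frac), X, labels)
    print(f"lesion {frac:.0%}: recall accuracy {acc:.2f}")
```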


Subject(s)
Mental Recall/physiology , Neural Networks, Computer , Prosopagnosia/psychology , Algorithms , Brain Mapping , Computer Simulation , Face , Generalization, Psychological , Humans , Learning/physiology , Models, Neurological , Neuronal Plasticity , Normal Distribution , Pattern Recognition, Automated , Photic Stimulation , Recognition, Psychology/physiology , Semantics , Visual Perception/physiology
12.
Nonlinear Dynamics Psychol Life Sci ; 14(4): 463-89, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20887690

ABSTRACT

Sexual arousal and gaze behavior dynamics are used to characterize deviant sexual interests in male subjects. Pedophile patients and non-deviant subjects are immersed with virtual characters depicting relevant sexual features. Gaze behavior dynamics, as indexed by correlation dimensions (D2), appear to be fractal in nature and significantly different from colored noise (surrogate data tests and recurrence plot analyses were performed). This perceptual-motor fractal dynamics parallels sexual arousal and differs between pedophile patients and non-deviant subjects when critical sexual information is processed. Results are interpreted in terms of sexual affordance, perceptual invariance extraction and intentional nonlinear dynamics.
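
The correlation dimension D2 used to characterise the gaze time series is typically estimated with the Grassberger-Procaccia correlation sum. The sketch below computes it for a synthetic signal; the embedding parameters, radius range and the study's surrogate-data testing are simplified assumptions, not the authors' exact procedure.

```python
import numpy as np

def correlation_dimension(series, dim=3, lag=1, radii=None):
    """Grassberger-Procaccia estimate of D2 from a scalar time series.

    The series is delay-embedded, the correlation sum C(r) is computed over a
    range of radii, and D2 is taken as the slope of log C(r) versus log r."""
    n = len(series) - (dim - 1) * lag
    embedded = np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])
    dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(dists, 5)),
                            np.log10(np.percentile(dists, 50)), 10)
    corr_sum = np.array([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope

# Synthetic example: a slightly noisy sine traces a curve, so D2 is close to 1.
t = np.linspace(0, 20 * np.pi, 1000)
signal = np.sin(t) + 0.01 * np.random.default_rng(2).normal(size=t.size)
print(round(correlation_dimension(signal, lag=25), 2))
```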


Subject(s)
Arousal , Intention , Nonlinear Dynamics , Pedophilia/psychology , Psychomotor Performance , Adult , Arousal/physiology , Computer Simulation , Erotica , Eye Movements/physiology , Humans , Male , Mathematical Computing , Middle Aged , Pedophilia/physiopathology , Penis/blood supply , Plethysmography , Psychomotor Performance/physiology , Reference Values , Sexual Behavior/physiology , Signal Processing, Computer-Assisted , User-Computer Interface
13.
Neural Netw ; 23(7): 892-904, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20627454

ABSTRACT

The focus of this paper is to propose a hybrid neural network model for the associative recall of analog and digital patterns. The hybrid model consists of self-feedback neural network structures (SFNN) in parallel with generalized regression neural networks (GRNN). Using a new one-shot learning algorithm developed in the paper, pattern representations are first stored as asymptotically stable fixed points of the SFNN. In the retrieval process, each pattern is then applied to the GRNN to generate the corresponding initial condition and initialize the dynamical equations of the SFNN, which in turn outputs the corresponding representation. In this way, the stored patterns are retrieved even under severe noise degradation. Moreover, contrary to many associative memories, the proposed hybrid model has no spurious attractors and can store both binary and real-valued patterns without any preprocessing. Several simulations confirm the theoretical analyses of the model. Results indicate that the performance of the hybrid model is better than that of recurrent associative memories and competitive with other classes of networks.
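
The GRNN component is, in essence, Nadaraya-Watson kernel regression over the stored patterns. The sketch below shows that core computation for a single query; the self-feedback dynamics of the SFNN and the paper's one-shot learning algorithm are not reproduced, and the bandwidth value is illustrative.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """Generalized regression neural network output for a query pattern x.

    Each stored pattern contributes its target weighted by a Gaussian kernel
    of its distance to the query (a Nadaraya-Watson estimate)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ Y_train) / np.sum(w)

# Stored pattern pairs; a degraded query is mapped toward the matching target.
X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
Y = np.array([[1.0, -1.0], [-1.0, 1.0]])
query = np.array([0.9, 0.1, 0.8])            # noisy version of X[0]
print(grnn_predict(X, Y, query).round(2))    # close to Y[0]
```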


Subject(s)
Association Learning , Computer Simulation , Feedback, Physiological , Models, Neurological , Nerve Net , Algorithms , Artificial Intelligence , Neural Networks, Computer , Pattern Recognition, Physiological , Regression Analysis
14.
Neural Netw ; 22(5-6): 568-78, 2009.
Article in English | MEDLINE | ID: mdl-19604669

ABSTRACT

In this paper, we present a new recurrent bidirectional model that encompasses correlational, competitive and topological model properties. The simultaneous use of several classes of network behavior allows for the unsupervised learning/categorization of perceptual patterns (through input compression) and the concurrent encoding of proximities in a multidimensional space. All of these operations are achieved within a common learning operation and with a single set of defining properties. It is shown that the model can learn categories by developing prototype representations strictly from exposure to specific exemplars. Moreover, because the model is recurrent, it can reconstruct perfect outputs from incomplete and noisy patterns. Empirical exploration of the model's properties and performance shows that its ability to cluster adequately stems from: (1) properly distributing connection weights, and (2) producing a weight space with a low dispersion level (or higher density). In addition, since the model uses a sparse representation (k-winners), the size of the topological neighborhood can be fixed and no longer needs to decrease over time, as was the case with classic self-organizing feature maps. Since the model's learning and transmission parameters are independent of learning trials, the model can develop stable fixed points in a constrained topological architecture, while remaining flexible enough to learn novel patterns.
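
The sparse "k-winners" representation mentioned here amounts to a k-winners-take-all activation. The minimal sketch below shows only that operation; the recurrent, competitive and topological learning machinery of the model is not reproduced.

```python
import numpy as np

def k_winners(activations, k):
    """Keep the k most active units and set the rest to zero (k-winners-take-all)."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]
    out[winners] = activations[winners]
    return out

a = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8])
print(k_winners(a, 2))   # only the two strongest units stay active
```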


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Algorithms , Classification , Cluster Analysis , Computer Simulation , Humans , Learning , Memory , Mental Recall
15.
IEEE Trans Neural Netw ; 20(8): 1281-92, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19596635

ABSTRACT

Most bidirectional associative memory (BAM) networks use a symmetrical output function to obtain dual fixed-point behavior. In this paper, we show that by introducing an asymmetry parameter into a recently introduced chaotic BAM output function, prior knowledge can be used to momentarily disable desired attractors from memory, hence biasing the search space to improve recall performance. This property allows control of chaotic wandering, favoring given subspaces over others. In addition, reinforcement learning can then enable a dual BAM architecture to store and recall nonlinearly separable patterns. Our results allow the same BAM framework to model three different types of learning: supervised, reinforcement and unsupervised. This ability is very promising from the cognitive modeling viewpoint. The new BAM model is also useful from an engineering perspective; our simulation results reveal a notable overall increase in BAM learning and recall performance when using a hybrid model with the general regression neural network (GRNN).
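
This line of BAM models typically uses a cubic, saturating output function of the general form f(a) = (δ+1)a − δa³ clipped to [−1, 1]. The abstract does not specify how the asymmetry parameter enters, so the extra quadratic term in the sketch below is purely an assumed illustration of how an asymmetry can bias one attractor relative to the other; it is not the authors' parameterization.

```python
import numpy as np

def cubic_output(a, delta=0.4, alpha=0.0):
    """Saturating cubic output of the general form used in this BAM family.

    `delta` controls the nonlinearity; `alpha` is a hypothetical asymmetry
    term added here only for illustration (not the paper's formulation)."""
    y = (delta + 1.0) * a - delta * a ** 3 + alpha * a ** 2
    return np.clip(y, -1.0, 1.0)

# Iterate the output map from the same negative start with and without asymmetry:
# the symmetric map settles at -1, while the asymmetry weakens that attractor.
a_sym, a_asym = -0.3, -0.3
for _ in range(20):
    a_sym = cubic_output(a_sym, delta=0.4, alpha=0.0)
    a_asym = cubic_output(a_asym, delta=0.4, alpha=0.3)
print(round(a_sym, 3), round(a_asym, 3))
```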


Subject(s)
Algorithms , Artificial Intelligence , Memory , Neural Networks, Computer , Nonlinear Dynamics , Reinforcement, Psychology , Cognition , Computer Simulation , Humans , Learning , Mental Recall , Regression Analysis