Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38833393

ABSTRACT

Sensory information recognition is primarily processed through the ventral and dorsal visual pathways of the primate visual system, which exhibit layered feature representations bearing a strong resemblance to convolutional neural networks (CNNs) and together support reconstruction and classification. However, existing studies often treat these pathways as distinct entities, focusing individually on pattern reconstruction or classification tasks and overlooking a key feature of biological neurons: the spike, the fundamental unit of neural computation for visual sensory information. Addressing these limitations, we introduce a unified framework for sensory information recognition with augmented spikes. By integrating pattern reconstruction and classification within a single framework, our approach not only accurately reconstructs multimodal sensory information but also provides precise classification through definitive labeling. Experimental evaluations conducted on various datasets, including video scenes, static images, dynamic auditory scenes, and functional magnetic resonance imaging (fMRI) brain activity, demonstrate that our framework delivers state-of-the-art pattern reconstruction quality and classification accuracy. The proposed framework enhances the biological realism of multimodal pattern recognition models, offering insights into how the primate visual system accomplishes reconstruction and classification through the integration of the ventral and dorsal pathways.
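
As a rough sketch of the unified idea, one network trained jointly for reconstruction and classification from a spike-coded input, consider the following PyTorch toy. All layer sizes, the Bernoulli rate-coding surrogate, and the loss weight alpha are illustrative assumptions, not the paper's design:

import torch
import torch.nn as nn

class UnifiedRecogNet(nn.Module):
    """Toy joint reconstruction + classification model (hypothetical sizes)."""
    def __init__(self, n_in=784, n_hidden=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)           # reconstruction head
        self.classifier = nn.Linear(n_hidden, n_classes)   # classification head

    def forward(self, x):
        # Crude rate-coding surrogate for "augmented spikes": Bernoulli spikes
        # averaged over a few time steps (a stand-in, not the paper's encoder).
        spikes = torch.bernoulli(x.clamp(0, 1).unsqueeze(0).expand(8, *x.shape)).mean(0)
        h = self.encoder(spikes)
        return self.decoder(h), self.classifier(h)

def joint_loss(x, y, recon, logits, alpha=0.5):
    # alpha balances the two objectives; the value is an assumption.
    return alpha * nn.functional.mse_loss(recon, x) + \
           (1 - alpha) * nn.functional.cross_entropy(logits, y)

# Usage sketch on random data
model = UnifiedRecogNet()
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
recon, logits = model(x)
joint_loss(x, y, recon, logits).backward()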

2.
Comput Biol Med ; 163: 107114, 2023 09.
Article in English | MEDLINE | ID: mdl-37329620

ABSTRACT

To navigate in space, it is important to predict headings in real time from the brain's neural responses to vestibular and visual signals, and the ventral intraparietal area (VIP) is one of the critical brain areas involved. However, how heading perception is represented in VIP at the population level remains unexplored, and there are no commonly used methods suitable for decoding headings from population responses in VIP, given the large spatiotemporal dynamics and heterogeneity of the neural responses. Here, responses were recorded from 210 VIP neurons in three rhesus monkeys while they performed a heading perception task. By separately and explicitly modelling the temporal and spatial dynamics with sparse representations, we built a sequential sparse autoencoder (SSAE) to decode the population responses in the recorded dataset and sought to maximize decoding performance. The SSAE relies on a three-layer sparse autoencoder to extract temporal and spatial heading features from the dataset via unsupervised learning, and on a softmax classifier to decode the headings. Compared with other population decoding methods, the SSAE achieves a leading accuracy of 96.8% ± 2.1% and shows the advantages of robustness and low storage and computing burden for real-time prediction. Therefore, our SSAE model performs well in learning neurobiologically plausible features comprising dynamic navigational information.
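
A minimal sketch of the decoding pipeline described above, an unsupervised sparse autoencoder stage followed by a softmax readout, might look as follows. The layer sizes, sparsity weight, and the synthetic data standing in for binned VIP population responses are all assumptions:

import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One sparse autoencoder stage; stack stages to mimic a multi-layer SSAE."""
    def __init__(self, n_in, n_code):
        super().__init__()
        self.enc = nn.Linear(n_in, n_code)
        self.dec = nn.Linear(n_code, n_in)

    def forward(self, x):
        code = torch.sigmoid(self.enc(x))
        return code, self.dec(code)

def train_stage(ae, data, epochs=50, l1=1e-3, lr=1e-3):
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        code, recon = ae(data)
        # Reconstruction loss plus an L1 sparsity penalty on the code.
        loss = nn.functional.mse_loss(recon, data) + l1 * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

# Hypothetical population responses: 500 trials x (210 neurons * 20 time bins)
X = torch.rand(500, 210 * 20)
y = torch.randint(0, 8, (500,))           # e.g. 8 heading directions (assumed)

ae = train_stage(SparseAE(210 * 20, 128), X)
features, _ = ae(X)

clf = nn.Linear(128, 8)                    # softmax readout via cross-entropy
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(100):
    loss = nn.functional.cross_entropy(clf(features.detach()), y)
    opt.zero_grad(); loss.backward(); opt.step()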


Subject(s)
Eye Movements , Motion Perception , Animals , Parietal Lobe/physiology , Motion Perception/physiology , Photic Stimulation/methods , Brain , Macaca mulatta
3.
Cereb Cortex ; 33(11): 6772-6784, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36734278

ABSTRACT

Gaze changes can misalign the spatial reference frames that encode visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, heading discrimination was tested with visual, vestibular, and combined stimuli in a reaction-time task in which the reaction time is under the subjects' control. We found that gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the direction opposite to the gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction as the gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, the integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and emphasize that the transformation of spatial reference frames may underlie these effects.
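
For readers unfamiliar with the reference model, a toy drift-diffusion simulation shows how such a model accumulates evidence to a bound and how optimal cue combination would simply sum the per-modality drift rates. The drift, bound, and noise values here are illustrative; the paper's extended model is more elaborate, and the reported data deviated from its predictions:

import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift, bound=1.0, dt=1e-3, noise=1.0, max_t=3.0):
    """Simulate one diffusion-to-bound trial; returns (choice, reaction time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x > 0), t

# Hypothetical drifts for a small rightward heading under visual and
# vestibular cues; optimal integration sums the per-modality drifts.
d_vis, d_vest = 0.8, 0.5
trials = [ddm_trial(d_vis + d_vest) for _ in range(500)]
choices, rts = zip(*trials)
print(f"P(right) = {np.mean(choices):.2f}, mean RT = {np.mean(rts)*1000:.0f} ms")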


Subject(s)
Motion Perception , Vestibule, Labyrinth , Humans , Reaction Time , Cerebral Cortex , Bias , Visual Perception , Photic Stimulation
4.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5841-5855, 2023 09.
Article in English | MEDLINE | ID: mdl-34890341

ABSTRACT

Spiking neural networks (SNNs), inspired by the neuronal networks in the brain, provide biologically plausible, low-power models for information processing. Existing studies either mimic the learning mechanisms of brain neural networks as closely as possible, for example the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with a fixed structure. However, the learning rule used in the former is local, and how the real brain might perform global-scale credit assignment is still unclear, which means that such shallow SNNs are robust but deep SNNs are difficult to train globally and do not work as well. For the latter, the nondifferentiability of discrete spike trains leads to inaccuracy in gradient computation and difficulties in building effective deep SNNs. Hence, a hybrid solution is appealing: combine shallow SNNs with an appropriate machine learning (ML) technique that does not require gradient computation, which can provide both energy-saving and high-performance advantages. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers while avoiding the derivative problem of gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back into the pool for subsequent weak SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus more closely resembling the neural networks of the brain. HybridSNN has the benefits of both the low power consumption of its weak units and overall data-driven optimizing strength. The network structure of HybridSNN is learned from training samples, which is more flexible and effective than existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system of the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive with state-of-the-art SNNs.
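
The shape of this gradient-free, greedy training loop can be sketched with an AdaBoost-style toy in which decision stumps stand in for the weak SNN units and the selected unit's outputs are appended to the candidate pool. This illustrates the scheme, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(0)

def stump_predictions(X):
    """Candidate weak learners: threshold each feature at its median."""
    return [np.where(X[:, j] > np.median(X[:, j]), 1, -1) for j in range(X.shape[1])]

# Toy binary problem (labels in {-1, +1}); stands in for spike-train features.
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1, -1)

w = np.ones(len(y)) / len(y)      # sample weights
ensemble = []                      # (alpha, predictions) pairs
for _ in range(10):
    pool = stump_predictions(X)
    # Greedy step: pick the weak unit with the lowest weighted error.
    errs = [np.sum(w[p != y]) for p in pool]
    best = int(np.argmin(errs))
    err = max(errs[best], 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = pool[best]
    w = w * np.exp(-alpha * y * pred); w /= w.sum()
    ensemble.append((alpha, pred))
    # Feed the selected unit's output back as a new candidate feature,
    # loosely mirroring HybridSNN's feedback of spikes into the pool.
    X = np.column_stack([X, pred])

final = np.sign(sum(a * p for a, p in ensemble))
print("training accuracy:", np.mean(final == y))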


Subject(s)
Algorithms , Neural Networks, Computer , Machine Learning , Neurons/physiology , Brain/physiology
5.
Article in English | MEDLINE | ID: mdl-37015639

ABSTRACT

Thanks to their event-driven nature, spiking neural networks (SNNs) are expected to be highly computation-efficient models. Spiking neurons encode useful temporal information and possess strong anti-noise properties. However, the high-quality encoding of spatio-temporal complexity and the training optimization of SNNs remain limited at present. To address this problem, this article proposes a novel hierarchical event-driven visual system to explore how information is transmitted and represented in the retina using biologically plausible mechanisms. This cognitive model is an augmented spiking-based framework that combines the feature learning capacity of convolutional neural networks (CNNs) with the cognition capability of SNNs. Furthermore, the visual system is modeled in a biologically realistic way with unsupervised learning rules and advanced spike firing rate encoding methods. We train and test it on several image datasets (Modified National Institute of Standards and Technology (MNIST), Canadian Institute for Advanced Research (CIFAR)-10, and their noisy versions) to show that our model can process more essential information than existing cognitive models. This article also proposes a novel quantization approach to make the proposed spiking-based model more efficient for neuromorphic hardware implementation. The results show that this joint CNN-SNN model can achieve high recognition accuracy and better generalization ability.
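
Two of the ingredients named here, spike firing rate encoding and quantization for neuromorphic hardware, can be sketched as follows. The Poisson-style encoder, the 4-bit uniform quantizer, and all parameter values are generic illustrations rather than the paper's methods:

import numpy as np

rng = np.random.default_rng(0)

def rate_encode(values, t_steps=100, max_rate=0.5):
    """Poisson-style rate coding: value in [0, 1] -> spike train of length t_steps."""
    p = np.clip(values, 0, 1) * max_rate
    return rng.random((t_steps,) + values.shape) < p   # boolean spike tensor

def quantize(w, bits=4):
    """Uniform symmetric quantization of weights to 2**bits levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

features = rng.random((8, 8))        # stand-in for CNN feature map activations
spikes = rate_encode(features)
weights = rng.normal(size=(64, 10))
print("mean firing rate:", spikes.mean())
print("quantization error:", np.abs(weights - quantize(weights)).mean())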

6.
IEEE Trans Neural Netw Learn Syst ; 33(5): 1935-1946, 2022 05.
Article in English | MEDLINE | ID: mdl-34665741

ABSTRACT

Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behaviors to neural systems. However, most existing studies deal only with the continuous signals of neural systems, lacking a unique feature of biological neurons, termed the spike, which is the fundamental information unit of neural computation as well as a building block for brain-machine interfaces. Addressing these limitations, we propose a transcoding framework to encode multimodal sensory information into neural spikes and then reconstruct the stimuli from the spikes. Sensory information can be compressed to about 10% of its original size in terms of neural spikes, yet 100% of the information can be re-extracted by reconstruction. Our framework can not only feasibly and accurately reconstruct dynamic visual and auditory scenes but also rebuild stimulus patterns from functional magnetic resonance imaging (fMRI) brain activity. More importantly, it has superb noise immunity against various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential usage for efficient neuromorphic computing in noisy environments.
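
The transcoding idea, compressing a continuous stimulus into sparse spikes and reconstructing it by integration, can be illustrated with a simple delta-modulation encoder. The threshold and the resulting spike sparsity are illustrative; the paper's encoder is more sophisticated:

import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit +1/-1 spikes whenever the signal drifts a threshold away from
    the running reconstruction (a toy stand-in for the paper's encoder)."""
    spikes = np.zeros_like(signal, dtype=int)
    level = signal[0]
    for i, s in enumerate(signal):
        while s - level > threshold:
            spikes[i] += 1; level += threshold
        while level - s > threshold:
            spikes[i] -= 1; level -= threshold
    return spikes

def delta_decode(spikes, start, threshold=0.1):
    """Rebuild the stimulus by integrating the spike train."""
    return start + threshold * np.cumsum(spikes)

t = np.linspace(0, 4 * np.pi, 1000)
x = np.sin(t)
spikes = delta_encode(x)
x_hat = delta_decode(spikes, x[0])
print("fraction of time bins with spikes:", np.mean(spikes != 0))
print("reconstruction RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))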


Subject(s)
Brain-Computer Interfaces , Neural Networks, Computer , Action Potentials/physiology , Brain/physiology , Models, Neurological , Neurons/physiology
7.
Neural Comput ; 33(11): 2971-2995, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34474470

ABSTRACT

Our real-time actions in everyday life reflect a range of spatiotemporal dynamic brain activity patterns, the consequence of neuronal computation with spikes in the brain. Most existing models with spiking neurons aim at solving static pattern recognition tasks such as image classification. Compared with static features, spatiotemporal patterns are more complex because of their dynamics in both the space and time domains. Spatiotemporal pattern recognition based on learning algorithms with spiking neurons therefore remains challenging. We propose an end-to-end recurrent spiking neural network model trained with an algorithm based on spike latency and temporal-difference backpropagation. Our model is a cascaded network with three layers of spiking neurons, in which the input and output layers are the encoder and decoder, respectively. In the hidden layer, recurrently connected neurons with transmission delays carry out high-dimensional computation to incorporate the spatiotemporal dynamics of the inputs. Test results on datasets of retinal neuron spiking activity show that the proposed framework recognizes dynamic spatiotemporal patterns much better than spike counts do. Moreover, on the 3D trajectories of a human action dataset, the proposed framework achieves an average test accuracy of 83.6%. Rapid recognition is achieved through the learning methodology based on spike latency and a decoding process that uses the first spike of the output neurons. Taken together, these results highlight a new model for extracting information from the activity patterns of neural computation in the brain and provide a novel approach for spike-based neuromorphic computing.
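
A minimal illustration of the two coding ideas used here, spike-latency encoding (stronger inputs fire earlier) and decoding by the first output spike to cross threshold, under assumed parameters; the recurrent hidden layer and the temporal-difference backpropagation rule are omitted:

import numpy as np

rng = np.random.default_rng(0)

def latency_encode(x, t_max=100):
    """Stronger inputs spike earlier: intensity in (0, 1] -> spike time."""
    return np.ceil(t_max * (1.0 - np.clip(x, 1e-6, 1.0))).astype(int)

def first_spike_decode(in_times, w, threshold=1.0, t_max=100):
    """Integrate incoming spikes; the class whose output neuron crosses
    threshold first wins (simplified non-leaky integration)."""
    v = np.zeros(w.shape[1])
    for t in range(t_max + 1):
        v += w[in_times == t].sum(axis=0)   # add weights of inputs spiking at t
        fired = np.where(v >= threshold)[0]
        if fired.size:
            return fired[v[fired].argmax()], t
    return int(v.argmax()), t_max           # fallback: nothing crossed threshold

x = rng.random(20)                  # toy input pattern
w = rng.random((20, 3)) * 0.2       # random weights, 3 output classes (assumed)
label, t_decision = first_spike_decode(latency_encode(x), w)
print(f"decoded class {label} at time step {t_decision}")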


Subject(s)
Models, Neurological , Neural Networks, Computer , Action Potentials , Algorithms , Humans , Neurons
8.
Neural Netw ; 121: 512-519, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31733521

ABSTRACT

Neurons in the brain use an event signal, termed a spike, to encode temporal information for neural computation. Spiking neural networks (SNNs) exploit this property to serve as biologically relevant models. However, the effective encoding of sensory information and its integration with the downstream neurons of SNNs are limited by current shallow structures and learning algorithms. To tackle this limitation, this paper proposes a novel hybrid framework, named deep CovDenseSNN, that combines the feature learning ability of continuous-valued convolutional neural networks (CNNs) with SNNs, such that the SNN can make use of the feature extraction ability of CNNs during the encoding stage but still process features with the unsupervised learning rule of spiking neurons. We evaluate it on MNIST and its variations to show that our model can extract and transmit more important information than existing models, especially regarding anti-noise ability in noisy environments. The proposed architecture provides an efficient way to perform feature representation and recognition in a consistent temporal learning framework, is easily adapted to neuromorphic hardware implementations, and brings more biological realism into modern image classification models, with the hope that the proposed framework can inform us how sensory information is transmitted and represented in the brain.
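
The unsupervised stage can be illustrated with a generic pair-based STDP rule. The time constant, learning rates, and weight bounds below are textbook-style assumptions, not the paper's values:

import numpy as np

def stdp_update(pre_times, post_times, w, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when pre precedes post, depress otherwise."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)    # causal pair -> LTP
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)    # anti-causal pair -> LTD
    return float(np.clip(w + dw, w_min, w_max))

# A presynaptic neuron that mostly fires just before the postsynaptic
# neuron has its weight strengthened.
pre = [10.0, 30.0, 52.0]
post = [12.0, 33.0, 50.0]
print("updated weight:", stdp_update(pre, post, w=0.5))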


Subject(s)
Models, Neurological , Neural Networks, Computer , Neurons/physiology , Visual Perception , Brain/physiology , Humans , Signal-To-Noise Ratio