1.
PLoS Comput Biol ; 19(8): e1011343, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37540638

ABSTRACT

Long-range horizontal connections (LRCs) are conspicuous anatomical structures in the primary visual cortex (V1) of mammals, yet their detailed functions in visual processing are not fully understood. Here, we show that LRCs are key components for organizing a "small-world network" optimized for the size of the visual cortex, enabling cost-efficient integration of visual information. Using computational simulations of a biologically inspired model neural network, we found that sparse LRCs added on top of dense local connections compose a small-world network and significantly enhance image classification performance. We confirmed that network performance was strongly correlated with the small-world coefficient of the model network under various conditions. Our theoretical model demonstrates that the amount of LRCs needed to build a small-world network depends on the size of the cortex and that LRCs are beneficial only when the size of the network exceeds a certain threshold. Our simulations of cortices of various sizes validate this prediction and explain the species-specific presence of LRCs observed in animal data. Our results provide insight into a biological strategy by which the brain balances functional performance against resource cost.


Subjects
Neural Networks, Computer; Primary Visual Cortex; Animals; Computer Simulation; Visual Perception; Brain; Mammals
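
A minimal sketch of the core quantity discussed in this record, the small-world coefficient of a network with dense local connections plus sparse long-range shortcuts. This is illustrative code, not the authors' model: the network size, degree, and rewiring probability are arbitrary choices, and the random-graph baselines in the normalization are approximated analytically rather than sampled.

```python
import math
import networkx as nx


def small_world_coefficient(G):
    """sigma = (C / C_rand) / (L / L_rand), with the random-graph baselines
    approximated analytically (C_rand ~ k/n, L_rand ~ ln n / ln k)."""
    n = G.number_of_nodes()
    k = 2 * G.number_of_edges() / n                 # mean degree
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    return (C / (k / n)) / (L / (math.log(n) / math.log(k)))


# Dense local connections only: a ring lattice in which every unit contacts
# its six nearest neighbours (high clustering, long paths).
local_only = nx.connected_watts_strogatz_graph(n=400, k=6, p=0.0, seed=0)

# The same lattice plus sparse LRCs, modelled here by rewiring a small
# fraction of edges to random, typically distant, targets.
with_lrcs = nx.connected_watts_strogatz_graph(n=400, k=6, p=0.05, seed=0)

print("local connections only :", round(small_world_coefficient(local_only), 2))
print("local + sparse LRCs    :", round(small_world_coefficient(with_lrcs), 2))
```

With these illustrative settings, the lattice with sparse shortcuts should yield a markedly higher small-world coefficient than the purely local lattice, mirroring the trend described in the abstract.
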
2.
Front Comput Neurosci ; 16: 1030707, 2022.
Article in English | MEDLINE | ID: mdl-36405785

ABSTRACT

The ability to perceive visual objects under various types of transformation, such as rotation, translation, and scaling, is crucial for consistent object recognition. In machine learning, invariant object detection is often achieved by augmenting the training set with a massive number of transformed images, but the mechanism of invariant object detection in biological brains, namely how invariance arises initially and whether it requires visual experience, remains elusive. Here, using a model neural network of the hierarchical visual pathway of the brain, we show that invariance of object detection can emerge spontaneously in the complete absence of learning. First, we found that units selective to a particular object class arise in randomly initialized networks even before visual training. Intriguingly, these units show robust tuning to images of each object class under a wide range of image transformations, such as viewpoint rotation. We confirmed that this "innate" invariance of object selectivity enables untrained networks to perform an object-detection task robustly, even with images that have been substantially transformed. Our computational model predicts that invariant object tuning originates from combinations of non-invariant units via random feedforward projections, and we confirmed that the predicted profile of feedforward projections is observed in untrained networks. Our results suggest that invariance of object detection is an innate characteristic that can emerge spontaneously in random feedforward networks.
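
The following is a rough, hypothetical illustration of the probing logic described in this record: a small randomly initialized convolutional hierarchy (a stand-in for the paper's model of the visual pathway) is shown two synthetic stimulus classes, a "T" shape and a disc, and the unit most selective for the "T" is then probed with rotated versions of both stimuli. The architecture, stimuli, and rotation set are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)


def make_t_shape(size=64, t=6):
    """Binary image of a 'T', used here as the preferred object class."""
    img = torch.zeros(size, size)
    img[8:8 + t, 8:size - 8] = 1.0                                  # top bar
    img[8:size - 8, size // 2 - t // 2:size // 2 + t // 2] = 1.0    # stem
    return img


def make_disc(size=64, radius=18):
    """Binary image of a disc, used as the non-preferred class."""
    y, x = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    c = size // 2
    return ((x - c) ** 2 + (y - c) ** 2 < radius ** 2).float()


# Untrained feedforward hierarchy: random weights, never fitted to any data.
net = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)


def responses(img):
    with torch.no_grad():
        return net(img[None, None])[0]      # activations of the 32 output units


t_shape, disc = make_t_shape(), make_disc()
unit = int((responses(t_shape) - responses(disc)).argmax())   # most T-selective unit

# Probe the same unit with rotated versions of both stimuli.
for k in range(4):
    r_t = responses(torch.rot90(t_shape, k))[unit].item()
    r_d = responses(torch.rot90(disc, k))[unit].item()
    print(f"rotation {90 * k:3d} deg: T = {r_t:.3f}, disc = {r_d:.3f}")
```
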

3.
Nat Commun ; 12(1): 7328, 2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34916514

ABSTRACT

Face-selective neurons are observed in the primate visual pathway and are considered the basis of face detection in the brain. However, it has been debated whether this neuronal selectivity can arise innately or whether it requires training through visual experience. Here, using a hierarchical deep neural network model of the ventral visual stream, we suggest a mechanism by which face selectivity arises in the complete absence of training. We found that units selective to faces emerge robustly in randomly initialized networks and that these units reproduce many characteristics observed in monkeys. This innate selectivity also enables the untrained network to perform face-detection tasks. Intriguingly, we observed that units selective to various non-face objects can also arise innately in untrained networks. Our results imply that the random feedforward connections in early, untrained deep neural networks may be sufficient for initializing primitive visual selectivity.


Subjects
Facial Recognition; Haplorhini/physiology; Animals; Brain/physiology; Neural Networks, Computer; Visual Pathways
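
As a hedged sketch of how such face selectivity might be quantified, the snippet below instantiates a torchvision AlexNet with random (untrained) weights and computes a per-unit selectivity index (d') between responses to a "face" set and a "non-face" set. Random tensors stand in for the stimulus sets; the network choice, the d' formula as the selectivity measure, and the threshold are assumptions for illustration, not the paper's exact protocol.

```python
import torch
import torchvision.models as models

torch.manual_seed(0)

# Untrained network: weights=None keeps the random initialization.
net = models.alexnet(weights=None).features.eval()

# Placeholder stimulus sets; a real analysis would use face and non-face images.
faces = torch.rand(32, 3, 224, 224)
objects = torch.rand(32, 3, 224, 224)

with torch.no_grad():
    r_face = net(faces).flatten(1)       # (n_images, n_units)
    r_obj = net(objects).flatten(1)

# Per-unit face-selectivity index:
# d' = (mu_face - mu_obj) / sqrt((var_face + var_obj) / 2)
mu_f, mu_o = r_face.mean(0), r_obj.mean(0)
var_f, var_o = r_face.var(0), r_obj.var(0)
dprime = (mu_f - mu_o) / torch.sqrt((var_f + var_o) / 2 + 1e-8)

threshold = 0.5                          # illustrative selectivity criterion
print(f"face-selective units: {int((dprime > threshold).sum())} / {dprime.numel()}")
```
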
4.
Sci Adv ; 7(1), 2021 Jan.
Article in English | MEDLINE | ID: mdl-33523851

ABSTRACT

Number sense, the ability to estimate numerosity, is observed in naïve animals, but how this cognitive function emerges in the brain remains unclear. Here, using an artificial deep neural network that models the ventral visual stream of the brain, we show that number-selective neurons can arise spontaneously, even in the complete absence of learning. We also show that the responses of these neurons can induce an abstract number sense, the ability to discriminate numerosity independently of low-level visual cues. We found that number tuning in a randomly initialized network originates from a combination of monotonically increasing and decreasing neuronal activities, which emerge spontaneously from the statistical properties of bottom-up projections. We confirmed that the responses of these number-selective neurons show the single- and multi-neuron characteristics observed in the brain and enable the network to perform number comparison tasks. These findings provide insight into the origin of innate cognitive functions.
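
The proposed mechanism, tuned responses arising from random mixtures of monotonically increasing and decreasing activities, can be illustrated with a toy NumPy sketch. Everything here (the sigmoidal monotonic units, their number, the weight statistics, and the tuning criterion) is an assumption chosen for illustration rather than the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
numerosities = np.arange(1, 31)
log_n = np.log(numerosities)

# A bank of monotonic units: sigmoidal functions of (log) numerosity, each
# either increasing or decreasing, with randomly placed thresholds.
n_mono, n_units = 64, 1000
thetas = rng.uniform(log_n.min(), log_n.max(), n_mono)
signs = rng.choice([-1.0, 1.0], n_mono)                   # decreasing / increasing
mono = 1.0 / (1.0 + np.exp(-4.0 * signs * (log_n[:, None] - thetas)))

# Downstream units: random feedforward weights onto the monotonic bank + ReLU.
W = rng.normal(0.0, 1.0, (n_mono, n_units)) / np.sqrt(n_mono)
resp = np.maximum(0.0, mono @ W)                          # (30 numerosities, n_units)

# Count units whose response peaks strictly inside the tested range.
preferred = numerosities[resp.argmax(axis=0)]
tuned = (resp.max(axis=0) > 0) & (preferred > 1) & (preferred < 30)
print(f"number-tuned units: {tuned.sum()} / {n_units}")
print("example preferred numerosities:", np.sort(preferred[tuned])[:10])
```
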

5.
Neural Netw ; 134: 76-85, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33291018

ABSTRACT

The brain successfully performs visual object recognition with a limited number of hierarchical networks that are much shallower than the artificial deep neural networks (DNNs) performing similar tasks. Here, we show that long-range horizontal connections (LRCs), often observed in the visual cortex of mammalian species, enable such cost-efficient visual object recognition in shallow neural networks. Using simulations of a model hierarchical network with convergent feedforward connections and LRCs, we found that adding LRCs to the shallow feedforward network significantly enhances its image classification performance, to a degree comparable to that of much deeper networks. We found that a combination of sparse LRCs and dense local connections dramatically increases performance per wiring cost. Using network pruning with gradient-based optimization, we also confirmed that LRCs can emerge spontaneously when the total connection length is minimized while performance is maintained. Ablating the emerged LRCs led to a significant reduction in classification performance, implying that these LRCs are crucial for image classification. Taken together, our findings suggest a brain-inspired strategy for constructing a cost-efficient network architecture that implements parsimonious object recognition under physical constraints such as shallow hierarchical depth.


Subjects
Neural Networks, Computer; Pattern Recognition, Automated/methods; Visual Cortex/physiology; Animals; Brain/physiology; Visual Pathways/physiology; Visual Perception/physiology
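
A minimal sketch of the pruning idea described in this record: all-to-all connections between two one-dimensional sheets of units are trained on a toy task while a length-weighted L1 penalty charges each connection in proportion to the distance it spans, so that dense local connections plus a sparse set of long-range connections survive. The task, sheet size, penalty weight, and pruning threshold are all illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 64                                            # units per 1-D "cortical sheet"
pos = torch.arange(n, dtype=torch.float32)
dist = (pos[:, None] - pos[None, :]).abs()        # (n, n) wiring lengths

layer = nn.Linear(n, n, bias=False)               # all-to-all candidate connections
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)

# Toy task mixing a local operation (smoothing) with a long-range one (a
# half-sheet shift), so both local connections and a few LRCs are useful.
x = torch.randn(1024, n)
target = 0.7 * (0.5 * x + 0.25 * x.roll(1, 1) + 0.25 * x.roll(-1, 1)) \
         + 0.3 * x.roll(n // 2, 1)

lam = 1e-3                                        # wiring-cost weight (illustrative)
for step in range(500):
    task_loss = ((layer(x) - target) ** 2).mean(0).sum()  # mean over batch, sum over units
    wiring_cost = (layer.weight.abs() * dist).sum()       # length-weighted L1 penalty
    loss = task_loss + lam * wiring_cost
    opt.zero_grad()
    loss.backward()
    opt.step()

w = layer.weight.detach().abs()
kept = w > 0.05                                   # surviving connections after pruning
spans = dist[kept]
print(f"kept connections: {int(kept.sum())} / {n * n}")
print(f"short-range (span <= 2): {int((spans <= 2).sum())}, "
      f"long-range (span >= 16): {int((spans >= 16).sum())}")
```
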