Results 1 - 4 of 4
1.
Sci Rep; 12(1): 20931, 2022 Dec 3.
Article in English | MEDLINE | ID: mdl-36463378

ABSTRACT

Symmetry is omnipresent in nature and perceived by the visual system of many species, as it facilitates detecting ecologically important classes of objects in our environment. Yet the neural underpinnings of symmetry perception remain elusive, as they require abstraction of long-range spatial dependencies between image regions and are acquired with limited experience. In this paper, we evaluate Deep Neural Network (DNN) architectures on the task of learning symmetry perception from examples. We demonstrate that feed-forward DNNs that excel at modelling human performance on object recognition tasks are unable to acquire a general notion of symmetry. This is the case even when the feed-forward DNNs are architected to capture long-range spatial dependencies, such as through 'dilated' convolutions and the 'transformer' design. By contrast, we find that recurrent architectures are capable of learning a general notion of symmetry by breaking down its long-range spatial dependencies into a progression of local operations. These results suggest that recurrent connections likely play an important role in symmetry perception in artificial systems, and possibly in biological ones too.
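A minimal sketch of the recurrent decomposition described in this abstract, assuming PyTorch; the class name, channel count, and number of iterations are illustrative choices, not the authors' architecture. The same small convolution is reused at every step, so information about distant image regions is integrated a few pixels per iteration rather than in a single long-range pass.

    import torch
    import torch.nn as nn

    class RecurrentSymmetryNet(nn.Module):          # hypothetical name
        def __init__(self, channels=16, steps=20):
            super().__init__()
            self.steps = steps                      # recurrent iterations
            self.embed = nn.Conv2d(1, channels, 3, padding=1)
            # One shared local convolution applied repeatedly: long-range
            # symmetry dependencies emerge from iterated local operations.
            self.recur = nn.Conv2d(channels, channels, 3, padding=1)
            self.readout = nn.Linear(channels, 1)   # symmetric vs. not (logit)

        def forward(self, x):
            h = torch.relu(self.embed(x))
            for _ in range(self.steps):
                h = torch.relu(self.recur(h))       # weights shared across steps
            return self.readout(h.mean(dim=(2, 3)))

    model = RecurrentSymmetryNet()
    logits = model(torch.rand(8, 1, 32, 32))        # batch of grayscale images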


Subjects
Concept Formation; Learning; Humans; Chlorhexidine; Electric Power Supplies; Visual Perception
2.
Neural Netw; 155: 119-143, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36054984

ABSTRACT

The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability to recognize objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in this case, even when large amounts of training examples are available. Neurons that are invariant to orientations and illuminations have been proposed as a neural mechanism that could facilitate OoD generalization, but it is unclear how to encourage the emergence of such invariant neurons. In this paper, we investigate three approaches that lead to the emergence of invariant neurons and substantially improve DNNs' recognition of objects in OoD orientations and illuminations. Namely, these approaches are (i) training much longer after the in-distribution (InD) validation accuracy has converged, i.e., late-stopping; (ii) tuning the momentum parameter of the batch normalization layers; and (iii) enforcing invariance of the neural activity in an intermediate layer to orientation and illumination conditions. Each of these approaches substantially improves the DNN's OoD accuracy (by more than 20% in some cases). We report results on four datasets: two are modified from the MNIST and iLab datasets, and the other two are novel (one of 3D rendered cars and another of objects taken from various controlled orientations and illumination conditions). These datasets allow us to study the effects of different amounts of bias and are challenging, as DNNs perform poorly in OoD conditions. Finally, we demonstrate that even though the three approaches focus on different aspects of DNNs, they all tend to lead to the same underlying neural mechanism enabling the OoD accuracy gains: individual neurons in the intermediate layers become invariant to OoD orientations and illuminations. We anticipate this study will serve as a basis for further improving deep neural networks' OoD generalization, which is much needed for safe and fair AI applications.
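Two of the three approaches above can be stated compactly in code. The sketch below assumes a generic PyTorch classifier; the momentum value, penalty weight, and feature shapes are assumptions for illustration, not the paper's settings.

    import torch
    import torch.nn as nn

    # (ii) Tuning batch-normalization momentum: set at construction time.
    bn = nn.BatchNorm2d(64, momentum=0.01)          # 0.01 is an illustrative value

    # (iii) Enforcing invariance of an intermediate layer: penalize the distance
    # between activations of the same objects under two different
    # orientation/illumination conditions.
    def invariance_loss(feats_a, feats_b):
        return ((feats_a - feats_b) ** 2).mean()

    feats_a = torch.randn(8, 128)                   # condition A activations
    feats_b = feats_a + 0.1 * torch.randn(8, 128)   # same objects, condition B
    penalty = invariance_loss(feats_a, feats_b)
    # Combined objective (lam is an assumed hyperparameter):
    # loss = cross_entropy(logits, labels) + lam * penalty

Approach (i), late-stopping, is purely a training-schedule choice: keep optimizing well past the point where InD validation accuracy plateaus.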


Subjects
Lighting; Pattern Recognition, Visual; Humans; Pattern Recognition, Visual/physiology; Photic Stimulation; Neurons/physiology; Neural Networks, Computer
3.
Front Comput Neurosci; 16: 760085, 2022.
Article in English | MEDLINE | ID: mdl-35173595

ABSTRACT

Biological learning systems are outstanding in their ability to learn from limited training data compared to the most successful learning machines, i.e., Deep Neural Networks (DNNs). Which key aspects underlie this data-efficiency gap is an unresolved question at the core of biological and artificial intelligence. We hypothesize that one important aspect is that biological systems rely on mechanisms such as foveation to reduce input dimensions that are unnecessary for the task at hand, e.g., the background in object recognition, while state-of-the-art DNNs do not. Datasets used to train DNNs often contain such unnecessary input dimensions, and these lead to more trainable parameters. Yet it is not clear whether this affects the DNNs' data efficiency: DNNs are robust to increasing the number of parameters in the hidden layers, but it is uncertain whether this holds true for the input layer. In this paper, we investigate the impact of unnecessary input dimensions on DNNs' data efficiency, namely, the number of examples needed to achieve a given generalization performance. Our results show that unnecessary input dimensions that are task-unrelated substantially degrade data efficiency. This highlights the need for mechanisms that remove task-unrelated dimensions, such as foveation for image classification, in order to enable data-efficiency gains.
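As an illustration of the hypothesis, the sketch below reduces foveation to its simplest form: a central crop that discards peripheral background pixels before they ever reach the input layer. PyTorch is assumed, the function name and sizes are hypothetical, and the object is assumed to be centered.

    import torch

    def central_foveation(images, fovea=16):
        # images: (N, C, H, W). Keep only a central fovea-sized window,
        # removing task-unrelated background dimensions from the input.
        _, _, h, w = images.shape
        top, left = (h - fovea) // 2, (w - fovea) // 2
        return images[:, :, top:top + fovea, left:left + fovea]

    x = torch.rand(8, 1, 64, 64)        # object centered, background around it
    print(central_foveation(x).shape)   # torch.Size([8, 1, 16, 16])

Shrinking the input from 64x64 to 16x16 gives a fully connected first layer 16x fewer trainable parameters, the kind of reduction the abstract argues matters for data efficiency.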

4.
Neural Comput; 33(9): 2511-2549, 2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34412113

ABSTRACT

The insideness problem is an aspect of image segmentation that consists of determining which pixels are inside and which are outside a region. Deep neural networks (DNNs) excel in segmentation benchmarks, but it is unclear whether they can solve the insideness problem, as it requires evaluating long-range spatial dependencies. In this letter, we analyze the insideness problem in isolation, without texture or semantic cues, so that other aspects of segmentation do not interfere with the analysis. We demonstrate that DNNs for segmentation with few units have sufficient complexity to solve the insideness problem for any curve. Yet such DNNs have severe difficulty learning general solutions. Only recurrent networks trained on small images learn solutions that generalize well to almost any curve. Recurrent networks can decompose the evaluation of long-range dependencies into a sequence of local operations, and learning with small images alleviates the common difficulties of training recurrent networks with a large number of unrolling steps.
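The recurrent decomposition this abstract describes can be illustrated with a flood-fill written as iterated local operations: "outside" labels seeded at the image border propagate one neighborhood per step and are blocked by the curve. This is a sketch of the principle in PyTorch, not the paper's network, and it assumes a closed curve that does not touch the border.

    import torch
    import torch.nn.functional as F

    def outside_mask(curve, steps=64):
        # curve: (H, W) binary float tensor, 1.0 on the curve, 0.0 elsewhere.
        h, w = curve.shape
        outside = torch.zeros(1, 1, h, w)
        outside[..., 0, :] = outside[..., -1, :] = 1.0   # seed border rows
        outside[..., :, 0] = outside[..., :, -1] = 1.0   # seed border columns
        kernel = torch.ones(1, 1, 3, 3)                  # 8-neighbour growth
        blocked = curve.view(1, 1, h, w)
        for _ in range(steps):                           # one local step at a time
            grown = (F.conv2d(outside, kernel, padding=1) > 0).float()
            outside = grown * (1.0 - blocked)            # the curve stops the flood
        return outside[0, 0]                             # 1 = outside, 0 = inside/curve

    c = torch.zeros(16, 16)
    c[4, 4:12] = c[11, 4:12] = 1.0                       # a small closed rectangle
    c[4:12, 4] = c[4:12, 11] = 1.0
    print(outside_mask(c, steps=32)[8, 8].item())        # 0.0 -> pixel is inside

The number of steps needed grows with image size, which is exactly the unrolling-depth difficulty that training on small images alleviates.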


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer