Results 1 - 2 of 2
1.
Neural Netw; 155: 119-143, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36054984

ABSTRACT

The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in this case, even when large numbers of training examples are available. Neurons that are invariant to orientations and illuminations have been proposed as a neural mechanism that could facilitate OoD generalization, but it is unclear how to encourage the emergence of such invariant neurons. In this paper, we investigate three different approaches that lead to the emergence of invariant neurons and substantially improve DNNs in recognizing objects in OoD orientations and illuminations. Namely, these approaches are (i) training much longer after convergence of the in-distribution (InD) validation accuracy, i.e., late-stopping, (ii) tuning the momentum parameter of the batch normalization layers, and (iii) enforcing invariance of the neural activity in an intermediate layer to orientation and illumination conditions. Each of these approaches substantially improves the DNN's OoD accuracy (by more than 20% in some cases). We report results on four datasets: two are modified from the MNIST and iLab datasets, and the other two are novel (one of 3D rendered cars and another of objects captured under various controlled orientations and illumination conditions). These datasets allow us to study the effects of different amounts of bias and are challenging because DNNs perform poorly on them in OoD conditions. Finally, we demonstrate that even though the three approaches focus on different aspects of DNNs, they all tend to lead to the same underlying neural mechanism to enable OoD accuracy gains: individual neurons in the intermediate layers become invariant to OoD orientations and illuminations. We anticipate this study to be a basis for further improvement of deep neural networks' OoD generalization performance, which is in high demand for safe and fair AI applications.
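The second and third approaches are straightforward to picture in code. The minimal PyTorch sketch below is not the authors' implementation: it shows a toy network whose batch-normalization momentum is set explicitly (approach ii) and a loss that adds a penalty pulling together intermediate-layer activations of the same objects seen under two orientation/illumination conditions (approach iii). The network architecture, the momentum value of 0.01, and the penalty weight `lam` are illustrative assumptions, not values from the paper.

```python
# Hedged sketch, not the paper's code: approach (ii) via an explicit
# BatchNorm momentum, and approach (iii) via an activation-invariance penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1),
            nn.BatchNorm2d(32, momentum=0.01),  # assumed momentum value
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1),
            nn.BatchNorm2d(64, momentum=0.01),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)      # intermediate-layer activations
        return self.classifier(h), h

def invariance_loss(model, x_a, x_b, labels, lam=1.0):
    """Cross-entropy on two views of the same objects (x_a, x_b) plus a
    penalty that pulls their intermediate activations together; lam is an
    assumed weighting, not a value reported in the paper."""
    logits_a, h_a = model(x_a)
    logits_b, h_b = model(x_b)
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    inv = F.mse_loss(h_a, h_b)               # invariance penalty
    return ce + lam * inv
```

In this sketch, `x_a` and `x_b` would be images of the same objects under different orientations or illuminations; the penalty encourages individual units in `h` to respond similarly across those conditions, which is the mechanism the abstract identifies.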


Subjects
Lighting; Pattern Recognition, Visual; Humans; Pattern Recognition, Visual/physiology; Photic Stimulation; Neurons/physiology; Neural Networks, Computer
2.
Prog Biophys Mol Biol; 96(1-3): 60-89, 2008.
Article in English | MEDLINE | ID: mdl-17888502

ABSTRACT

Recent advances in biotechnology and the availability of ever more powerful computers have led to the formulation of increasingly complex models at all levels of biology. One of the main aims of systems biology is to couple these together to produce integrated models across multiple spatial scales and physical processes. In this review, we formulate a definition of multi-scale in terms of levels of biological organisation and describe the types of model that are found at each level. Key issues that arise in trying to formulate and solve multi-scale and multi-physics models are considered, and examples of how these issues have been addressed are given for two of the more mature fields in computational biology: the molecular dynamics of ion channels and cardiac modelling. As even more complex models are developed over the coming years, it will be necessary to develop new methods to model them (in particular for coupling across the interface between stochastic and deterministic processes), and new techniques will be required to compute their solutions efficiently on massively parallel computers. We outline how we envisage these developments occurring.
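The stochastic/deterministic interface the review highlights can be made concrete with a toy example. The sketch below is not drawn from the review: it couples a single ion channel, whose opening and closing are simulated as random events, to a deterministic membrane-voltage equation integrated with forward Euler, so the random channel state feeds back into the continuous dynamics. All parameter values, rates, and the single-channel setup are illustrative assumptions.

```python
# Hedged sketch of a hybrid stochastic/deterministic model (toy example).
import random

C = 1.0                        # membrane capacitance (arbitrary units)
g_leak = 0.1                   # leak conductance
g_chan = 1.0                   # single-channel conductance when open
E_leak, E_chan = -70.0, 0.0    # reversal potentials (mV)
k_open, k_close = 0.5, 1.0     # per-ms transition rates (assumed values)

def simulate(t_end=50.0, dt=0.01):
    V, is_open = -70.0, False
    trace, t = [], 0.0
    while t < t_end:
        # Stochastic part: the channel flips state with probability rate*dt.
        rate = k_close if is_open else k_open
        if random.random() < rate * dt:
            is_open = not is_open
        # Deterministic part: forward-Euler step of
        # C dV/dt = -g_leak*(V - E_leak) - g_chan*open*(V - E_chan).
        I = -g_leak * (V - E_leak) - (g_chan if is_open else 0.0) * (V - E_chan)
        V += dt * I / C
        trace.append((t, V, is_open))
        t += dt
    return trace

if __name__ == "__main__":
    print(simulate()[-1])      # final (time, voltage, channel state)
```

A realistic multi-scale model would replace the single two-state channel with populations of Markov channel models and the scalar voltage with spatially resolved tissue equations, but the coupling pattern, discrete random events updating the inputs of a continuous solver at each step, is the interface the review discusses.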


Subjects
Biology; Computational Biology; Models, Biological; Physiology; Animals; Humans