Results 1 - 3 of 3
1.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2350-2364, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34596562

ABSTRACT

In this article, we argue that the unsatisfactory out-of-distribution (OOD) detection performance of neural networks is mainly due to the SoftMax loss anisotropy and its propensity to produce low-entropy probability distributions, in disagreement with the principle of maximum entropy. On the one hand, current OOD detection approaches usually do not directly fix the SoftMax loss drawbacks, but rather build techniques to circumvent them. Unfortunately, those methods usually produce undesired side effects (e.g., classification accuracy drop, additional hyperparameters, slower inferences, and the need to collect extra data). On the other hand, we propose replacing the SoftMax loss with a novel loss function that does not suffer from the mentioned weaknesses. The proposed IsoMax loss is isotropic (exclusively distance-based) and provides high-entropy posterior probability distributions. Replacing the SoftMax loss with the IsoMax loss requires no model or training changes. Additionally, models trained with the IsoMax loss produce inferences as fast and energy-efficient as those trained using the SoftMax loss. Moreover, no classification accuracy drop is observed. The proposed method does not rely on outlier/background data, hyperparameter tuning, temperature calibration, feature extraction, metric learning, adversarial training, ensemble procedures, or generative models. Our experiments showed that the IsoMax loss works as a seamless SoftMax loss drop-in replacement that significantly improves neural networks' OOD detection performance. Hence, it may be used as a baseline OOD detection approach to be combined with current or future OOD detection techniques to achieve even better results.
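The isotropic, exclusively distance-based logits the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prototype values, the entropic scale of 10.0, and the minimum-distance OOD score are assumptions for the example.

```python
import math

def isomax_logits(feature, prototypes, entropic_scale=10.0):
    """Distance-based (isotropic) logits: the logit for class k is the
    negative Euclidean distance from the feature to prototype k, scaled
    by an entropic scale (the value 10.0 here is an assumption)."""
    logits = []
    for proto in prototypes:
        dist = math.sqrt(sum((f - p) ** 2 for f, p in zip(feature, proto)))
        logits.append(-entropic_scale * dist)
    return logits

def softmax(logits):
    """Standard softmax with max-subtraction for numerical stability."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def min_distance_ood_score(feature, prototypes):
    """A simple OOD score under this setup: the distance to the nearest
    class prototype (larger means more likely out-of-distribution)."""
    return min(math.sqrt(sum((f - p) ** 2 for f, p in zip(feature, proto)))
               for proto in prototypes)
```

Because the logits depend only on distances to class prototypes, a feature far from every prototype yields both a large OOD score and a higher-entropy posterior than a linear (affine) SoftMax layer would typically produce.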


Subjects
Neural Networks, Computer; Upper Extremity; Entropy
2.
Neural Netw ; 32: 285-91, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22560678

ABSTRACT

In this work we present an evolutionary morphological approach to the software development cost estimation (SDCE) problem. The proposed approach consists of a hybrid artificial neuron based on the framework of mathematical morphology (MM), with algebraic foundations in complete lattice theory (CLT), referred to as the dilation-erosion perceptron (DEP). We also present an evolutionary learning process, called DEP(MGA), which uses a modified genetic algorithm (MGA) to design the DEP model; this avoids a drawback of the classical DEP learning process, namely the gradient estimation of morphological operators, which are not differentiable in the usual way. Furthermore, an experimental analysis is conducted with the proposed model using five complex SDCE problems and three well-known performance metrics, demonstrating the good performance of the DEP model on SDCE problems.
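The dilation-erosion perceptron is commonly formulated as a convex combination of a morphological dilation and a morphological erosion over the lattice (R, max, +). A minimal sketch of that standard formulation follows; the mixing coefficient `lam` and the regression output are illustrative, and the paper's exact estimator may differ in detail.

```python
def dilation(x, w):
    """Morphological dilation on the complete lattice (R, max, +):
    the maximum of the coordinate-wise sums x_i + w_i."""
    return max(xi + wi for xi, wi in zip(x, w))

def erosion(x, m):
    """Morphological erosion, the lattice dual of dilation:
    the minimum of the coordinate-wise sums x_i + m_i."""
    return min(xi + mi for xi, mi in zip(x, m))

def dep_output(x, w, m, lam):
    """Dilation-erosion perceptron: a convex combination of the two
    operators, with mixing coefficient lam in [0, 1]."""
    return lam * dilation(x, w) + (1.0 - lam) * erosion(x, m)
```

Because `max` and `min` are not differentiable everywhere, gradient-based training of `w`, `m`, and `lam` is problematic, which is the motivation the abstract gives for designing the model with a modified genetic algorithm instead.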


Subjects
Costs and Cost Analysis/methods; Software/economics; Algorithms; Artificial Intelligence; Computer Simulation; Databases, Factual; Models, Genetic; Models, Statistical; Neural Networks, Computer; Reproducibility of Results
3.
Int J Neural Syst ; 16(4): 271-81, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16972315

ABSTRACT

The dynamic decay adjustment (DDA) algorithm is a fast constructive algorithm for training RBF neural networks (RBFNs) and probabilistic neural networks (PNNs). The algorithm has two parameters, namely theta(+) and theta(-). The papers that introduced DDA argued that these parameters do not heavily influence classification performance and therefore recommended always using their default values. In contrast, this paper shows that, for a considerable number of datasets, smaller values of the parameter theta(-) yield a strong improvement in generalization performance. The experiments described here were carried out on twenty benchmark classification datasets from the Proben1 and UCI repositories. The results show that for eleven of the datasets, the parameter theta(-) strongly influenced classification performance. The influence of theta(-) was also noticeable, although much smaller, on six of the remaining datasets. This paper also compares the performance of RBF-DDA with theta(-) selection against both AdaBoost and Support Vector Machines (SVMs).
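One DDA training step for a Gaussian-kernel RBF network can be sketched as below, to show where theta(+) and theta(-) enter. This is a simplified illustration, assuming the commonly cited defaults theta(+)=0.4 and theta(-)=0.2 and a Gaussian prototype activation; the full algorithm also handles multiple epochs, weight resets, and other details omitted here.

```python
import math

def gaussian(x, center, sigma):
    """Gaussian RBF activation exp(-d^2 / sigma^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-d2 / (sigma ** 2))

def dda_train_step(x, label, prototypes, theta_plus=0.4, theta_minus=0.2):
    """One DDA step (sketch). Commit: if a same-class prototype already
    activates on x with at least theta_plus, raise its weight. Cover:
    otherwise introduce a new prototype centered on x. Shrink: reduce
    the width of any other-class prototype so that its activation on x
    is at most theta_minus."""
    covered = False
    for proto in prototypes:
        if proto["label"] == label and \
                gaussian(x, proto["center"], proto["sigma"]) >= theta_plus:
            proto["weight"] += 1.0
            covered = True
            break
    if not covered:
        prototypes.append({"center": list(x), "sigma": 1.0,
                           "label": label, "weight": 1.0})
    for proto in prototypes:
        if proto["label"] != label:
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(x, proto["center"])))
            if d > 0.0 and gaussian(x, proto["center"], proto["sigma"]) >= theta_minus:
                # width chosen so that exp(-d^2 / sigma^2) == theta_minus
                proto["sigma"] = d / math.sqrt(-math.log(theta_minus))
    return prototypes
```

The sketch makes the paper's point concrete: theta(-) directly controls how far each prototype's Gaussian is allowed to reach into regions occupied by other classes, so smaller values of theta(-) force tighter, less overlapping prototypes.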


Subjects
Algorithms; Artificial Intelligence; Neural Networks, Computer; Models, Statistical; Nonlinear Dynamics; Time Factors