Int J Neural Syst; 7(3): 263-71, 1996 Jul.
Article in English | MEDLINE | ID: mdl-8891842

ABSTRACT

"Error-Confidence" measures the probability that the proportion of errors made by a classifier will be within epsilon of EB, the optimal (Bayes) error. Probably Almost Bayes (PAB) theory attempts to quantify how this confidence increases with the number of training samples. We investigate the relationship empirically by comparing average error versus number of training patterns (m) for linear and neural network classifiers. On Gaussian problems, the resulting EC curves demonstrate that the PAB bounds are extremely conservative. Asymptotic statistics predicts a linear relationship between the logarithms of the average error and the number of training patterns. For low Bayes error rates we found excellent agreement between the prediction and the linear discriminant performance. At higher Bayes error rates we still found a linear relationship, but with a shallower slope than the predicted-1. When the underlying true model is a three-layer network, the EC curves show a greater dependence on classifier capacity, and the linear predictions no longer seem to hold.


Subjects
Neural Networks, Computer; Probability; Bayes Theorem; Computer Simulation; Confidence Intervals; Normal Distribution; Reproducibility of Results