1.
IEEE Trans Neural Netw ; 10(5): 1075-89, 1999.
Article in English | MEDLINE | ID: mdl-18252610

ABSTRACT

It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs some provision for complexity control. Existing implementations of complexity control include penalization (or regularization), weight decay (in neural networks), and various greedy procedures (also known as constructive, growing, or pruning methods). There are numerous proposals for determining optimal model complexity (i.e., model selection) based on various (asymptotic) analytic estimates of the prediction risk and on resampling approaches. Nonasymptotic bounds on the prediction risk based on Vapnik-Chervonenkis (VC) theory have been proposed by Vapnik. This paper describes the application of VC bounds to regression problems with the usual squared loss. An empirical study is performed for settings where the VC bounds can be rigorously applied, i.e., linear models and penalized linear models, where the VC dimension can be accurately estimated and the empirical risk can be reliably minimized. Empirical comparisons between model selection using VC bounds and classical methods are performed for various noise levels, sample sizes, target functions, and types of approximating functions. Our results demonstrate the advantages of VC-based complexity control with finite samples.
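As a rough illustration of this style of complexity control (not the paper's exact procedure), the following Python sketch applies a Vapnik-style multiplicative penalization of the empirical risk to choose the degree of a polynomial model. The specific form of the penalty, the use of degree + 1 as the VC dimension of a linear estimator, and the synthetic sinusoidal target are all assumptions made for the example:

import numpy as np

def vc_penalty(n, h):
    # Vapnik-style penalization factor for regression with squared loss
    # (assumed form); n = sample size, h = estimated VC dimension.
    # Returns a multiplicative correction of the empirical risk, or
    # infinity when the bound degenerates.
    p = h / n
    inner = p - p * np.log(p) + np.log(n) / (2 * n)
    denom = 1.0 - np.sqrt(inner)
    return np.inf if denom <= 0 else 1.0 / denom

def select_complexity(x, y, max_degree=10):
    # Pick the polynomial degree minimizing the penalized empirical risk.
    # For a linear estimator the VC dimension is taken to equal the number
    # of free parameters (degree + 1).
    n = len(x)
    best_deg, best_risk = None, np.inf
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, d)           # least-squares fit
        resid = y - np.polyval(coeffs, x)
        emp_risk = np.mean(resid ** 2)         # empirical (training) risk
        est_risk = emp_risk * vc_penalty(n, d + 1)
        if est_risk < best_risk:
            best_deg, best_risk = d, est_risk
    return best_deg, best_risk

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, 50)  # noisy illustrative target
print(select_complexity(x, y))

Because the penalty grows as the ratio h/n approaches one, overly flexible models are rejected even when their training error is small, which is the intuition behind VC-based model selection with finite samples.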

2.
IEEE Trans Neural Netw ; 7(4): 969-84, 1996.
Article in English | MEDLINE | ID: mdl-18263491

ABSTRACT

The problem of estimating an unknown function from a finite number of noisy data points is of fundamental importance for many applications. This problem has been studied in statistics, applied mathematics, engineering, artificial intelligence, and, more recently, in the fields of artificial neural networks, fuzzy systems, and genetic optimization. In spite of many papers describing individual methods, very little is known about the comparative predictive (generalization) performance of the various methods. We discuss subjective and objective factors contributing to the difficult problem of meaningful comparisons. We also describe a pragmatic framework for comparisons between various methods, and present a detailed comparison study comprising several thousand individual experiments. Our approach to comparisons is biased toward general (nonexpert) users. Our study uses six representative methods described using a common taxonomy. Comparisons performed on artificial data sets provide some insight into the applicability of the various methods. No single method proved to be the best, since a method's performance depends significantly on the type of the target function and on the properties of the training data.
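Purely as a hypothetical illustration of such a comparison framework (the paper's six methods and taxonomy are not reproduced here), the Python sketch below, assuming scikit-learn and an arbitrary sinusoidal target function, estimates the prediction risk of a few regression methods on artificial data across noise levels, using a large noise-free test sample as a proxy for the true generalization error:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

def target(x):
    return np.sin(2 * np.pi * x)               # illustrative target function

def run_trial(method, n_train=100, noise=0.2, n_test=2000, seed=0):
    rng = np.random.default_rng(seed)
    x_tr = rng.uniform(0, 1, (n_train, 1))
    y_tr = target(x_tr.ravel()) + rng.normal(0, noise, n_train)
    x_te = rng.uniform(0, 1, (n_test, 1))       # large noise-free test set
    method.fit(x_tr, y_tr)                      # approximates prediction risk
    return np.mean((method.predict(x_te) - target(x_te.ravel())) ** 2)

methods = {
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "tree": DecisionTreeRegressor(max_depth=4),
    "poly ridge": make_pipeline(PolynomialFeatures(6), Ridge(alpha=1e-3)),
}
for noise in (0.1, 0.5):                        # vary an experimental factor
    for name, m in methods.items():
        risks = [run_trial(m, noise=noise, seed=s) for s in range(10)]
        print(f"noise={noise} {name}: {np.mean(risks):.4f}")

Averaging over repeated trials with different random seeds mirrors the study's use of many individual experiments per experimental setting, and varying the noise level shows how the ranking of methods can change with the properties of the training data.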

3.
Neural Comput ; 7(6): 1165-77, 1995 Nov.
Article in English | MEDLINE | ID: mdl-7584895

ABSTRACT

Kohonen's self-organizing map (SOM), when described in a batch processing mode, can be interpreted as a statistical kernel smoothing problem. The batch SOM algorithm consists of two steps. First, the training data are partitioned according to the Voronoi regions of the map unit locations. Second, the units are updated by taking weighted centroids of the data falling into the Voronoi regions, with the weighting function given by the neighborhood. Then, the neighborhood width is decreased and steps 1 and 2 are repeated. The second step can be interpreted as a statistical kernel smoothing problem, where the neighborhood function corresponds to the kernel and the neighborhood width corresponds to the kernel span. To determine the new unit locations, kernel smoothing is applied to the centroids of the Voronoi regions in the topological space. This interpretation leads to some new insights concerning the role of the neighborhood and dimensionality reduction. It also strengthens the algorithm's connection with the Principal Curve algorithm. A generalized self-organizing algorithm is proposed, in which the kernel smoothing step is replaced with an arbitrary nonparametric regression method.
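A minimal one-dimensional Python sketch of the two-step batch SOM described above, assuming a Gaussian neighborhood kernel and a geometric width-decay schedule (both illustrative choices, not prescribed by the paper):

import numpy as np

def batch_som(data, n_units=10, n_epochs=20, width0=3.0):
    # Minimal 1-D batch SOM following the two-step description:
    # (1) partition the data by the Voronoi regions of the units,
    # (2) update each unit as a neighborhood-weighted centroid,
    # then shrink the neighborhood width and repeat.
    rng = np.random.default_rng(0)
    units = rng.choice(data, n_units)           # unit weight vectors
    grid = np.arange(n_units)                   # unit locations in map space
    for epoch in range(n_epochs):
        width = width0 * (0.1 / width0) ** (epoch / (n_epochs - 1))
        # Step 1: assign each sample to its nearest unit (Voronoi partition)
        winners = np.argmin(np.abs(data[:, None] - units[None, :]), axis=1)
        # Step 2: kernel-smoothed centroid update; the Gaussian neighborhood
        # acts as the kernel and `width` as the kernel span
        K = np.exp(-0.5 * ((grid[:, None] - grid[winners][None, :]) / width) ** 2)
        units = (K @ data) / K.sum(axis=1)
    return units

data = np.sort(np.random.default_rng(1).normal(0, 1, 500))
print(batch_som(data))

Note that the kernel weights are computed between unit locations in the topological (map) space, not in the data space; this is exactly the kernel smoothing interpretation of step 2, and replacing that smoothing step with another nonparametric regression method yields the generalized algorithm the paper proposes.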


Subjects
Algorithms; Neural Networks, Computer; Mathematics; Models, Statistical; Models, Theoretical; Regression Analysis; Statistics, Nonparametric