Results 1 - 3 of 3
1.
IEEE Trans Neural Netw; 6(1): 214-9, 1995.
Article in English | MEDLINE | ID: mdl-18263300

ABSTRACT

This paper addresses the application of locally optimum (LO) signal detection techniques to environments in which the noise density is not known a priori. For small signal levels, the LO detection rule is shown to involve a nonlinearity which depends on the noise density. The estimation of the noise density is a major part of the computational burden of LO detection rules. In this paper, adaptive estimation of the noise density is implemented using a radial basis function neural network. Unlike existing algorithms, the present technique places few assumptions on the properties of the noise, and performs well under a wide variety of circumstances. Experimental results are shown which illustrate the system performance as a variety of noise densities are encountered.
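For orientation, a minimal sketch of the small-signal LO detector this abstract describes. The abstract does not give the network architecture or training rule, so a fixed Gaussian-kernel (Parzen-style) estimate of the noise density stands in for the adaptive RBF network, and all function names are illustrative rather than the paper's.

# Sketch only: a kernel density estimate replaces the paper's adaptive RBF network.
import numpy as np

def kernel_density(noise_samples, width=0.5):
    """Return f_hat(t), a Gaussian-kernel estimate of the noise density."""
    x = np.asarray(noise_samples, dtype=float)
    def f_hat(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        z = (t[:, None] - x[None, :]) / width
        return np.exp(-0.5 * z**2).mean(axis=1) / (width * np.sqrt(2 * np.pi))
    return f_hat

def lo_nonlinearity(f_hat, t, eps=1e-3):
    """Standard small-signal LO nonlinearity g(t) = -f'(t)/f(t),
    with the derivative taken by central differences on the estimate."""
    fp = (f_hat(t + eps) - f_hat(t - eps)) / (2.0 * eps)
    return -fp / np.maximum(f_hat(t), 1e-12)

def lo_test_statistic(x, s, f_hat):
    """LO detector for a known weak signal s in additive noise: T = sum_i s_i * g(x_i)."""
    return float(np.sum(np.asarray(s) * lo_nonlinearity(f_hat, np.asarray(x))))

In use, the density estimate would be refit (or updated) as new noise-only samples arrive, which is the adaptive element the paper attributes to the RBF network.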

2.
IEEE Trans Pattern Anal Mach Intell; 9(1): 103-12, 1987 Jan.
Article in English | MEDLINE | ID: mdl-21869380

ABSTRACT

The bias of the finite-sample nearest neighbor (NN) error from its asymptotic value is examined. Expressions are obtained which relate the bias of the NN and 2-NN errors to sample size, dimensionality, metric, and distributions. These expressions isolate the effect of sample size from that of the distributions, giving an explicit relation showing how the bias changes as the sample size is increased. Experimental results are given which suggest that the expressions accurately predict the bias. It is shown that when the dimensionality of the data is high, it may not be possible to estimate the asymptotic error simply by increasing the sample size. A new procedure is suggested to alleviate this problem. This procedure involves measuring the mean NN errors at several sample sizes and using our derived relationship between the bias and the sample size to extrapolate an estimate of the asymptotic NN error. The results are extended to the multiclass problem. The choice of an optimal metric to minimize the bias is also discussed.
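A rough illustration of the extrapolation procedure this abstract outlines, under stated assumptions: scikit-learn's 1-NN classifier supplies the error measurements, and the power-law bias model eps_N ≈ eps_inf + a * N**(-2/d) is a placeholder for the paper's derived bias-versus-sample-size relationship, which also involves the metric and the distributions.

# Sketch only: measure mean NN error at several design-set sizes, then extrapolate.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mean_nn_error(X, y, n_design, n_trials=20, rng=None):
    """Average hold-out NN error over random design/test splits of the pooled data."""
    rng = np.random.default_rng(rng)
    errs = []
    for _ in range(n_trials):
        idx = rng.permutation(len(y))
        design, test = idx[:n_design], idx[n_design:]
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[design], y[design])
        errs.append(np.mean(clf.predict(X[test]) != y[test]))
    return float(np.mean(errs))

def extrapolate_asymptotic_error(X, y, sizes, d):
    """Fit eps_N = eps_inf + a * N**(-2/d) by least squares; the intercept eps_inf
    is the extrapolated estimate of the asymptotic NN error."""
    eps = np.array([mean_nn_error(X, y, n) for n in sizes])
    A = np.column_stack([np.ones(len(sizes)), np.asarray(sizes, float) ** (-2.0 / d)])
    coef, *_ = np.linalg.lstsq(A, eps, rcond=None)
    return float(coef[0])

The point of the procedure is that the intercept of the fitted curve, not the error at the largest available N, serves as the asymptotic-error estimate when dimensionality makes direct convergence impractically slow.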

3.
IEEE Trans Pattern Anal Mach Intell; 9(5): 634-43, 1987 May.
Article in English | MEDLINE | ID: mdl-21869422

ABSTRACT

The use of k nearest neighbor (k-NN) and Parzen density estimates to obtain estimates of the Bayes error is investigated under limited design set conditions. By drawing analogies between the k-NN and Parzen procedures, new procedures are suggested, and experimental results are given which indicate that these procedures yield a significant improvement over the conventional k-NN and Parzen procedures. We show that, by varying the decision threshold, many of the biases associated with the k-NN or Parzen density estimates may be compensated, and successful error estimation may be performed in spite of these biases. Experimental results are given which demonstrate the effect of kernel size and shape (Parzen), the size of k (k-NN), and the number of samples in the design set.
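As a sketch of the threshold-varying idea in this abstract: a Gaussian Parzen density estimate is built for each class, the likelihood-ratio rule is thresholded at t, and the empirical test error is reported as a function of t. The kernel choice, the threshold grid, and the equal-prior averaging of the two error rates are assumptions of this illustration, not details from the paper.

# Sketch only: Parzen-based error estimation swept over a decision threshold.
import numpy as np

def parzen(train, width):
    """Gaussian Parzen density estimate built on one class's design samples (shape (n, d))."""
    X = np.asarray(train, dtype=float)
    d = X.shape[1]
    norm = (width * np.sqrt(2 * np.pi)) ** d
    def p_hat(x):
        z = (np.asarray(x, dtype=float)[None, :] - X) / width
        return np.exp(-0.5 * (z**2).sum(axis=1)).mean() / norm
    return p_hat

def error_vs_threshold(design0, design1, test0, test1, width, thresholds):
    """Empirical error of the rule 'decide class 1 when p1_hat(x) > t * p0_hat(x)',
    averaged over both classes (equal priors assumed), for each threshold t."""
    p0, p1 = parzen(design0, width), parzen(design1, width)
    errs = []
    for t in thresholds:
        e0 = np.mean([p1(x) > t * p0(x) for x in test0])    # class-0 samples misassigned
        e1 = np.mean([p1(x) <= t * p0(x) for x in test1])   # class-1 samples misassigned
        errs.append(0.5 * (e0 + e1))
    return np.array(errs)

Sweeping t (and, analogously, kernel size for Parzen or k for k-NN) is what lets the biases of the density estimates be traded off against each other, which is the compensation effect the abstract reports.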
