Results 1 - 3 of 3
1.
J Acoust Soc Am ; 136(3): 1160, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190391

ABSTRACT

A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformer, and the resulting algorithms are applied to speech enhancement. The algorithms are derived from the Lagrange multiplier method and conjugate gradient techniques, and their implementations avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of both algorithms. Specifically, the MICCG algorithm is based on a block adaptation approach and generates a finite sequence of estimates that converges to the MVDR solution. For limited data records, its estimates are better than those of conventional estimators and equivalent to those of the auxiliary-vector algorithms. The SICCG algorithm is based on a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates converge asymptotically to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector-sensor array is presented. The performance of the MICCG and SICCG algorithms is compared with state-of-the-art approaches.
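The core idea of avoiding autocorrelation matrix inversion can be illustrated with a generic conjugate gradient solve of the MVDR normal equations. The sketch below is an assumption-laden simplification, not the paper's exact MICCG/SICCG recursions: it solves Rx = a iteratively for a Hermitian positive-definite autocorrelation matrix R and steering vector a, then normalizes to satisfy the distortionless constraint w^H a = 1. The function name `mvdr_cg` and the iteration cap are hypothetical.

```python
import numpy as np

def mvdr_cg(R, a, n_iter=100, tol=1e-12):
    """Sketch: MVDR weights via conjugate gradient, with no matrix inversion.

    Iteratively solves R x = a (R assumed Hermitian positive definite),
    then normalizes so that w^H a = 1 (the distortionless constraint).
    Generic CG, not the paper's MICCG/SICCG recursions.
    """
    x = np.zeros_like(a)
    r = a - R @ x                 # residual
    p = r.copy()                  # search direction
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs_old / np.vdot(p, Rp).real
        x = x + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x / np.vdot(a, x)      # w = R^{-1}a / (a^H R^{-1} a)
```

In exact arithmetic, CG reaches the solution of an M x M Hermitian system in at most M iterations, which is why the abstract can speak of a finite sequence of estimates converging to the MVDR solution.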

2.
IEEE Trans Neural Netw Learn Syst ; 23(12): 1974-86, 2012 Dec.
Article in English | MEDLINE | ID: mdl-24808151

ABSTRACT

In this paper, a new robust single-hidden-layer feedforward network (SLFN)-based pattern classifier is developed. It is shown that the frequency spectra of the desired feature vectors can be specified via the discrete Fourier transform (DFT). The input weights of the SLFN are then optimized with regularization theory so that the error between the frequency components of the desired feature vectors and those of the feature vectors extracted from the hidden-layer outputs is minimized. For linearly separable input patterns, the hidden layer of the SLFN removes the effects of disturbances from the noisy input data and provides linearly separable feature vectors for accurate classification. For nonlinearly separable input patterns, the hidden layer instead assigns the DFTs of all feature vectors to desired positions in the frequency domain so that the separability of the nonlinearly separable patterns is maximized. In addition, the output weights of the SLFN are optimally designed so that the empirical and structural risks are both balanced and minimized in a noisy environment. Two simulation examples demonstrate the performance and effectiveness of the proposed classification scheme.
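The output-weight design that balances empirical risk (fit error) against structural risk (weight norm) can be sketched as a regularized least-squares solve on the hidden-layer outputs. This is a minimal illustration under assumptions: random sigmoid input weights stand in for the paper's DFT-based input-weight optimization, which is not reproduced here, and the names `slfn_output_weights` and `lam` are hypothetical.

```python
import numpy as np

def slfn_output_weights(H, T, lam=1e-2):
    """Sketch: regularized output-weight design for an SLFN.

    Minimizes ||H W - T||^2 + lam * ||W||^2 over W, trading empirical
    risk (fit error) against structural risk (weight norm).
    H: (samples x hidden) hidden-layer output matrix; T: target matrix.
    """
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

def hidden_features(X, Win, b):
    """Sigmoid hidden layer; random Win/b stand in for the paper's
    DFT-optimized input weights."""
    return 1.0 / (1.0 + np.exp(-(X @ Win + b)))
```

With a small `lam`, the solution approaches the minimum-norm least-squares fit; a larger `lam` trades fit error for a smaller weight norm, which is the empirical/structural risk balance the abstract refers to. A nonlinearly separable problem such as XOR becomes linearly separable in the hidden-feature space.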

3.
IEEE Trans Neural Netw ; 17(6): 1580-91, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17131670

ABSTRACT

A new adaptive backpropagation (BP) algorithm for neural networks, based on Lyapunov stability theory, is developed in this paper. A candidate Lyapunov function V(k) of the tracking error between the output of the neural network and the desired reference signal is chosen first, and the weights of the network are then updated, from the output layer to the input layer, such that ΔV(k) = V(k) - V(k - 1) < 0. By Lyapunov stability theory, the output tracking error then converges asymptotically to zero. Unlike gradient-based BP training algorithms, the new Lyapunov adaptive BP algorithm does not search for the global minimum along the cost-function surface in weight space; instead, it constructs an energy surface with a single global minimum through adaptive adjustment of the weights as time goes to infinity. Even when the input of the neural network is subject to bounded disturbances, their effects can be eliminated and asymptotic error convergence obtained. The new Lyapunov adaptive BP algorithm is then applied to the design of an adaptive filter in a simulation example, demonstrating fast error convergence and strong robustness to large bounded input disturbances.
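The Lyapunov-guided update can be illustrated in the simplest setting the abstract mentions, an adaptive filter. The sketch below is a linear, single-layer illustration of the idea, not the paper's multilayer BP recursion: taking V(k) = e(k)^2 as the candidate Lyapunov function, the normalized correction drives the a-posteriori error toward zero at each sample, so V decreases sample by sample. The name `lyapunov_filter` and the guard `eps` are hypothetical.

```python
import numpy as np

def lyapunov_filter(X, d, eps=1e-8):
    """Sketch: Lyapunov-style adaptive linear filter.

    Candidate Lyapunov function V(k) = e(k)^2 with e(k) = d(k) - w^T x(k).
    The normalized weight correction makes the a-posteriori error
    (nearly) zero at each step, hence V(k) - V(k-1) <= 0.
    X: (samples x taps) input rows; d: desired reference signal.
    """
    w = np.zeros(X.shape[1])
    errors = []
    for x, dk in zip(X, d):
        e = dk - w @ x                  # a-priori tracking error
        w = w + x * e / (x @ x + eps)   # correction zeroing the a-posteriori error
        errors.append(e)
    return w, np.array(errors)
```

Note the contrast with gradient descent: the step size here is not a tuned learning rate but is chosen analytically so that the error energy cannot increase, which is what yields the convergence guarantee in the abstract.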


Subjects
Algorithms; Information Storage and Retrieval/methods; Information Theory; Neural Networks (Computer); Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Systems Theory