Results 1 - 6 of 6
1.
IEEE Trans Neural Netw ; 12(6): 1411-32, 2001.
Article in English | MEDLINE | ID: mdl-18249970

ABSTRACT

Practical algorithms are presented for adaptive state filtering in nonlinear dynamic systems when the state equations are unknown. The state equations are constructively approximated using neural networks. The algorithms presented are based on the two-step prediction-update approach of the Kalman filter. The proposed algorithms make minimal assumptions regarding the underlying nonlinear dynamics and their noise statistics. Non-adaptive and adaptive state filtering algorithms are presented, with both off-line and on-line learning stages. The algorithms are implemented using feedforward and recurrent neural networks, and comparisons are presented. Furthermore, extended Kalman filters (EKFs) are developed and compared to the proposed filter algorithms. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filters. For another, more complex case study with unknown system dynamics and noise statistics, the developed EKFs do not converge. The off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. On-line training further enhances the estimation accuracy of the developed adaptive filters, effectively decoupling the eventual filter accuracy from the accuracy of the process model.
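The two-step structure described above can be sketched end to end: a tiny network is fitted off-line to stand in for the unknown state equations, then used inside a prediction-update loop. Everything concrete here (the scalar dynamics, noise levels, network size, and the fixed gain K) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "unknown" scalar dynamics; the filter never calls f_true directly.
def f_true(x):
    return 0.8 * x + 0.5 * np.sin(x)

# --- Off-line learning stage: fit a tiny one-hidden-layer net g(x) ~ f_true(x)
X = rng.uniform(-3.0, 3.0, 500)
Y = f_true(X)
H = 16
W1 = rng.normal(0.0, 0.5, H); b1 = np.zeros(H)     # hidden layer (scalar input)
W2 = rng.normal(0.0, 0.5, H); b2 = 0.0             # linear output layer
lr = 0.02
for _ in range(5000):
    Z = np.tanh(np.outer(X, W1) + b1)              # (N, H) hidden activations
    err = Z @ W2 + b2 - Y                          # one-step prediction error
    dh = np.outer(err, W2) * (1.0 - Z**2)          # backprop through tanh
    W2 -= lr * (Z.T @ err) / len(X); b2 -= lr * err.mean()
    W1 -= lr * (dh.T @ X) / len(X);  b1 -= lr * dh.mean(axis=0)

def g(x):
    return np.tanh(W1 * x + b1) @ W2 + b2          # learned state model

# --- Filtering stage: two-step prediction-update loop (fixed, hand-tuned gain)
K, xhat, x = 0.5, 0.0, 0.1
meas_err = filt_err = 0.0
for _ in range(200):
    x = f_true(x) + rng.normal(0.0, 0.05)          # true state (process noise)
    y = x + rng.normal(0.0, 0.3)                   # noisy measurement
    x_pred = g(xhat)                               # 1) predict with the net
    xhat = x_pred + K * (y - x_pred)               # 2) measurement update
    meas_err += (y - x)**2
    filt_err += (xhat - x)**2
```

With the learned model carrying state between measurements, the filtered estimate accumulates less squared error than the raw measurements, which is the qualitative behaviour the abstract reports for the neural filters.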

2.
Neural Netw ; 13(7): 765-86, 2000 Sep.
Article in English | MEDLINE | ID: mdl-11152208

ABSTRACT

A method for the development of empirical predictive models for complex processes is presented. The models are capable of performing accurate multi-step-ahead (MS) predictions, while maintaining acceptable single-step-ahead (SS) prediction accuracy. Such predictors find applications in model predictive controllers and in fault diagnosis systems. The proposed method makes use of dynamic recurrent neural networks in the form of a nonlinear infinite impulse response (IIR) filter. A learning algorithm is presented, which is based on a dynamic gradient descent approach. The effectiveness of the method for accurate MS prediction is tested on an artificial problem and on a complex, open-loop unstable process. Comparative results are presented with polynomial Nonlinear AutoRegressive with eXogenous inputs (NARX) predictors, and with recurrent networks trained using teacher forcing. Validation studies indicate that excellent generalization is obtained for the range of operational dynamics studied. The research demonstrates that the proposed network architecture and the associated learning algorithm are quite effective in modeling the dynamics of complex processes and performing accurate MS predictions.
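The closed-loop mechanism behind MS prediction can be illustrated with a deliberately simple stand-in: a least-squares AR(2) model playing the role of the trained one-step predictor, with its own outputs fed back as inputs, which is exactly the IIR-style feedback the recurrent predictor exploits. The process, its coefficients, and the horizon are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2 = 1.5, -0.7                     # assumed stable AR(2) "process"
y = np.zeros(300)
for k in range(2, 300):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + rng.normal(0.0, 0.1)

# One-step (SS) model fitted by least squares: y[k] ~ c1*y[k-1] + c2*y[k-2]
Phi = np.column_stack([y[1:-1], y[:-2]])
c = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

def ms_predict(history, horizon):
    """Multi-step-ahead prediction: the model's own outputs are fed back
    as inputs (closed-loop / IIR mode), instead of the measured values
    used during teacher-forced, single-step operation."""
    buf = [history[-2], history[-1]]
    preds = []
    for _ in range(horizon):
        yhat = c[0] * buf[-1] + c[1] * buf[-2]
        preds.append(yhat)
        buf.append(yhat)               # feedback of the prediction itself
    return np.array(preds)

preds = ms_predict(y, 20)
```

The same feedback idea carries over to the nonlinear recurrent network: only the one-step map changes, not the closed-loop prediction structure.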


Subjects
Algorithms, Neural Networks (Computer), Nonlinear Dynamics
3.
IEEE Trans Neural Netw ; 11(3): 697-709, 2000.
Article in English | MEDLINE | ID: mdl-18249797

ABSTRACT

How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is to develop a new algorithm based on the insights gained from this unified formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is the ability to update the weights in an on-line fashion. We have therefore also developed an on-line version of the proposed algorithm, which is based on updating the error gradient approximation recursively.
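The unification claim can be made concrete on a one-neuron recurrent unit: the same error gradient is obtained by a backward sweep (backpropagation through time) and by a forward recursive sensitivity update (in the spirit of real-time recurrent learning), two of the seemingly different gradient computations the paper shows to coincide. The toy sequence and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
w, v = 0.5, 0.8                        # recurrent and input weights (assumed)
x = rng.normal(size=20)                # toy input sequence
target = 0.3                           # desired final output

# Forward pass, storing hidden states for the backward sweep.
hs, h = [], 0.0
for xk in x:
    h = np.tanh(w * h + v * xk)
    hs.append(h)
loss = 0.5 * (hs[-1] - target) ** 2

# (a) BPTT: backward sweep chaining dh_k/dh_{k-1} = w * (1 - h_k^2)
grad_bptt, delta = 0.0, hs[-1] - target
for k in range(len(x) - 1, -1, -1):
    dpre = delta * (1.0 - hs[k] ** 2)  # through the tanh nonlinearity
    h_prev = hs[k - 1] if k > 0 else 0.0
    grad_bptt += dpre * h_prev         # dL/dw contribution at step k
    delta = dpre * w                   # propagate error to h_{k-1}

# (b) RTRL-style forward recursion: s_k = dh_k/dw carried along on-line
s, h = 0.0, 0.0
for xk in x:
    h_new = np.tanh(w * h + v * xk)
    s = (1.0 - h_new ** 2) * (h + w * s)   # sensitivity update
    h = h_new
grad_rtrl = (h - target) * s
```

Both routes solve the same sensitivity equations, so `grad_bptt` and `grad_rtrl` agree to machine precision; the recursive form is also the natural basis for an on-line variant.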

4.
IEEE Trans Neural Netw ; 5(2): 255-66, 1994.
Article in English | MEDLINE | ID: mdl-18267795

ABSTRACT

A nonlinear dynamic model is developed for a process system, namely a heat exchanger, using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, which appears effective in the input-output modeling of complex process systems. Dynamic gradient descent learning is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over a static learning algorithm used to train the same network. In developing the empirical process model the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving average response of various testing sets. Extensive model validation studies with signals that are encountered in the operation of the process system modeled, that is, steps and ramps, indicate that the empirical model can substantially generalize operational transients, including accurate prediction of instabilities not in the training set. However, the accuracy of the model beyond these operational transients has not been investigated. Furthermore, on-line learning is necessary during some transients and for tracking slowly varying process dynamics. Neural network-based empirical models in some cases appear to provide a serious alternative to first-principles models.
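A minimal sketch of the recurrent multilayer perceptron as an input-output process model: the hidden layer feeds back on itself, so the network carries internal state between samples rather than depending only on the current input. The sizes and the (untrained) random weights here are placeholders, not the heat-exchanger model from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_h = 1, 8
W_in  = rng.normal(0.0, 0.3, (n_h, n_in))   # input -> hidden
W_rec = rng.normal(0.0, 0.3, (n_h, n_h))    # hidden -> hidden (recurrence)
W_out = rng.normal(0.0, 0.3, n_h)           # hidden -> output

def simulate(u_seq):
    """Run the recurrent MLP over an input sequence; returns the outputs."""
    h = np.zeros(n_h)
    ys = []
    for u in u_seq:
        h = np.tanh(W_in @ np.atleast_1d(u) + W_rec @ h)  # state update
        ys.append(W_out @ h)                              # output layer
    return np.array(ys)

# e.g. response to a step input, the kind of signal used in the validation
# studies described above
y_step = simulate(np.ones(50))
```

Training such a model with dynamic gradient descent amounts to propagating gradients through this state update across time, rather than treating each sample independently as a static algorithm would.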

5.
IEEE Trans Neural Netw ; 5(3): 493-7, 1994.
Article in English | MEDLINE | ID: mdl-18267816

ABSTRACT

An accelerated learning algorithm (ABP, adaptive back propagation) is proposed for the supervised training of multilayer perceptron networks. The learning algorithm is inspired by the principle of "forced dynamics" for the total error functional. The algorithm updates the weights in the direction of steepest descent, but with a learning rate that is a specific function of the error and of the error gradient norm. The specific form of this function is chosen so as to accelerate convergence. Furthermore, ABP introduces none of the additional "tuning" parameters found in variants of the backpropagation algorithm. Simulation results indicate a superior convergence speed for analog problems only, as compared to other competing methods, as well as reduced sensitivity to algorithm step size parameter variations.
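One plausible reconstruction of the "forced dynamics" idea (the abstract does not give the exact formula, so this is an assumption): demand that the total error obey dE/dt = -lam*E along the steepest-descent flow dw/dt = -eta*grad(E). Substituting gives dE/dt = -eta*||grad(E)||^2, hence eta = lam*E/||grad(E)||^2, a learning rate that is a specific function of the error and of the error gradient norm, with a single constant lam. Demonstrated on a toy quadratic error surface rather than an actual perceptron, for clarity:

```python
import numpy as np

# Toy least-squares error surface E(w) = 0.5*||A w - b||^2 (an assumption;
# the paper trains multilayer perceptrons, but the update rule is generic).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
w = np.zeros(2)
lam = 0.5                               # forcing constant in dE/dt = -lam*E

E_hist = []
for _ in range(300):
    r = A @ w - b
    E = 0.5 * r @ r                     # total error functional
    if E < 1e-12:
        break
    g = A.T @ r                         # error gradient
    eta = lam * E / (g @ g)             # ABP-style adaptive learning rate
    w -= eta * g                        # steepest descent with adaptive eta
    E_hist.append(E)
```

Because eta is derived rather than tuned, lam is the only constant in the rule, which is consistent with the claim that ABP introduces no extra tuning parameters; the recorded error sequence decreases monotonically, roughly geometrically.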

6.
IEEE Trans Neural Netw ; 5(4): 666-7, 1994.
Article in English | MEDLINE | ID: mdl-18267840

ABSTRACT

This is a review of a video consisting of clips from presentations made at the Second IEEE International Conference on Fuzzy Systems (1993) and the 1993 IEEE International Conference on Neural Networks. Each video clip describes significant achievements and current advances in the fields of fuzzy logic and neural networks contributed by researchers from around the world.
