Results 1 - 11 of 11
1.
IEEE Trans Neural Netw ; 12(4): 929-35, 2001.
Article in English | MEDLINE | ID: mdl-18249923

ABSTRACT

The prediction of corporate bankruptcies is an important and widely studied topic, since it can have a significant impact on bank lending decisions and profitability. This work presents two contributions. First, we review the topic of bankruptcy prediction, with an emphasis on neural-network (NN) models. Second, we develop an NN bankruptcy prediction model. Inspired by one of the traditional credit risk models developed by Merton (1974), we propose novel indicators for the NN system. We show that using these indicators in addition to traditional financial-ratio indicators provides a significant improvement in out-of-sample prediction accuracy (from 81.46% to 85.5% for a three-year-ahead forecast).
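
As a hypothetical illustration of the kind of Merton-inspired indicator described above (the paper's exact indicators are not specified here), the sketch below computes a distance-to-default from assumed firm parameters and concatenates it with conventional financial ratios to form an input vector for a classifier. All names, ratios, and parameter values are illustrative assumptions.

    import numpy as np

    def merton_distance_to_default(asset_value, debt_face_value, asset_vol, drift, horizon=1.0):
        """Distance to default under a Merton-style structural model.
        All arguments are illustrative assumptions: market value of assets, face
        value of debt due at `horizon`, asset volatility, and asset drift."""
        return (np.log(asset_value / debt_face_value)
                + (drift - 0.5 * asset_vol ** 2) * horizon) / (asset_vol * np.sqrt(horizon))

    # Hypothetical firm: conventional financial ratios plus the Merton-style indicator.
    financial_ratios = np.array([0.12, 0.45, 1.8])   # e.g. ROA, debt/assets, current ratio (made up)
    dd = merton_distance_to_default(asset_value=120.0, debt_face_value=80.0,
                                    asset_vol=0.25, drift=0.08, horizon=3.0)
    nn_input = np.concatenate([financial_ratios, [dd]])
    print(nn_input)   # feature vector that would be fed to the NN classifier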

2.
Neural Netw ; 13(4-5): 485-505, 2000.
Article in English | MEDLINE | ID: mdl-10946395

ABSTRACT

Piecewise-linear (PWL) neural networks are widely known for their amenability to digital implementation. This paper presents a new algorithm for learning in PWL networks consisting of a single hidden layer. The approach adopted is based on constructing a continuous PWL error function and developing an efficient algorithm to minimize it. The algorithm consists of two basic stages of searching the weight space. The first stage locates a point in the weight space representing the intersection of N linearly independent hyperplanes, where N is the number of weights in the network. The second stage then uses this point as a starting point and continues the search by moving along the one-dimensional boundaries between the different linear regions of the error function, hopping from one point (the intersection of N hyperplanes) to another. The proposed algorithm exhibits significantly accelerated convergence compared to standard algorithms such as back-propagation and improved versions of it, such as the conjugate gradient algorithm. In addition, it has the distinct advantage that there are no parameters to adjust, and therefore no time-consuming parameter-tuning step. The new algorithm is expected to find applications in function approximation, time series prediction, and binary classification problems.
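
For concreteness, a minimal sketch of the class of models in question: a single-hidden-layer network with absolute-value (piecewise-linear) units, whose response is continuous and piecewise linear with region boundaries at the hidden-unit hyperplanes. The weights are random illustrative values; the paper's construction of the PWL error function and its two-stage boundary-hopping search are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)
    W, b = rng.normal(size=(4, 2)), rng.normal(size=4)   # hidden-layer weights (assumed values)
    v, c = rng.normal(size=4), 0.0                        # output-layer weights

    def pwl_net(x):
        """Forward pass: absolute-value units make the map piecewise linear in x,
        with region boundaries at the hyperplanes W @ x + b = 0."""
        return v @ np.abs(W @ x + b) + c

    print(pwl_net(np.array([0.3, -1.2])))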


Subject(s)
Algorithms; Computer Simulation; Neural Networks, Computer; Linear Models
3.
Neural Comput ; 12(6): 1303-12, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10935714

ABSTRACT

Consider an algorithm whose time to convergence is unknown (because of some random element in the algorithm, such as a random initial weight choice for neural network training). Consider the following strategy. Run the algorithm for a specific time T. If it has not converged by time T, cut the run short and rerun it from the start, repeating the same strategy for every run. This so-called restart mechanism was proposed by Fahlman (1988) in the context of backpropagation training. It is advantageous in problems that are prone to local minima or when there is large variability in convergence time from run to run, and may lead to a speed-up in such cases. In this article, we analyze the restart mechanism theoretically and obtain conditions on the probability density of the convergence time under which restart will improve the expected convergence time. We also derive the optimal restart time. We apply the derived formulas to several cases, including steepest-descent algorithms.
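
A minimal numerical sketch of the restart strategy analyzed above, assuming a log-normal convergence-time density purely for illustration. With restart period tau, the expected total time is [integral_0^tau t f(t) dt + tau (1 - F(tau))] / F(tau), i.e. the mean length of the final successful run plus the wasted time of the failed runs, divided by the per-run success probability; the code evaluates this on a grid to locate a near-optimal restart time. The distribution and its parameters are assumptions, not values from the paper.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    # Assumed convergence-time distribution (illustrative only).
    conv_time = stats.lognorm(s=1.2, scale=50.0)

    def expected_total_time(tau):
        """Expected time to convergence when the run is restarted every `tau` units."""
        F = conv_time.cdf(tau)
        if F == 0.0:
            return np.inf
        # Mean duration of the successful (final) run, plus wasted time of failed runs.
        mean_success, _ = quad(lambda t: t * conv_time.pdf(t), 0.0, tau)
        return (mean_success + tau * (1.0 - F)) / F

    taus = np.linspace(1.0, 500.0, 500)
    costs = np.array([expected_total_time(t) for t in taus])
    best = taus[np.argmin(costs)]
    print(f"near-optimal restart time ~ {best:.1f}, "
          f"expected time {costs.min():.1f} vs no restart {conv_time.mean():.1f}")

For a heavy-tailed density such as this one, restarting reduces the expected convergence time; for light-tailed densities the minimizing tau moves toward infinity, i.e. never restarting is best.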


Subject(s)
Algorithms; Neural Networks, Computer; Linear Models
4.
Neural Netw ; 13(7): 765-86, 2000 Sep.
Article in English | MEDLINE | ID: mdl-11152208

ABSTRACT

A method for the development of empirical predictive models for complex processes is presented. The models are capable of performing accurate multi-step-ahead (MS) predictions, while maintaining acceptable single-step-ahead (SS) prediction accuracy. Such predictors find applications in model predictive controllers and in fault diagnosis systems. The proposed method makes use of dynamic recurrent neural networks in the form of a nonlinear infinite impulse response (IIR) filter. A learning algorithm is presented, which is based on a dynamic gradient descent approach. The effectiveness of the method for accurate MS prediction is tested on an artificial problem and on a complex, open-loop unstable process. Comparative results are presented with polynomial Nonlinear AutoRegressive with eXogenous inputs (NARX) predictors, and with recurrent networks trained using teacher forcing. Validation studies indicate that excellent generalization is obtained for the range of operational dynamics studied. The research demonstrates that the proposed network architecture and the associated learning algorithm are quite effective in modeling the dynamics of complex processes and in performing accurate MS predictions.
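
A minimal sketch of the nonlinear IIR-style recurrent predictor idea used for multi-step-ahead prediction: the model feeds its own past outputs back as inputs, so iterating it yields MS forecasts. The tiny tanh network, its fixed random weights, and the zero input sequence are assumptions for illustration; the paper's architecture and dynamic gradient-descent training are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative one-hidden-layer network acting as a nonlinear IIR filter:
    # y_hat(t) = g( y_hat(t-1), y_hat(t-2), u(t-1) ), with untrained random weights.
    W1 = rng.normal(scale=0.5, size=(8, 3))
    b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=8)

    def step(y_prev, y_prev2, u_prev):
        h = np.tanh(W1 @ np.array([y_prev, y_prev2, u_prev]) + b1)
        return W2 @ h

    def multi_step_forecast(y_hist, u_future, horizon):
        """Recursive MS prediction: feed predictions back in place of measurements."""
        y1, y2 = y_hist[-1], y_hist[-2]
        preds = []
        for k in range(horizon):
            y_next = step(y1, y2, u_future[k])
            preds.append(y_next)
            y1, y2 = y_next, y1
        return np.array(preds)

    print(multi_step_forecast(y_hist=[0.1, 0.2], u_future=np.zeros(5), horizon=5))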


Subject(s)
Algorithms; Neural Networks, Computer; Nonlinear Dynamics
5.
IEEE Trans Neural Netw ; 11(3): 697-709, 2000.
Article in English | MEDLINE | ID: mdl-18249797

ABSTRACT

How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can generally be grouped into five major categories. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is to develop a new algorithm based on the insights gained from this novel formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is the ability to update the weights in an on-line fashion. We have therefore also developed an on-line version of the proposed algorithm, which is based on updating the error gradient approximation in a recursive manner.

6.
IEEE Trans Neural Netw ; 10(2): 402-9, 1999.
Article in English | MEDLINE | ID: mdl-18252536

ABSTRACT

Estimating the flow of rivers can have a significant economic impact, as it can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal is to use this time series as a benchmark for comparing several neural-network forecasting methods. We compare four different methods of preprocessing the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare three different methods for the multistep-ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation-through-time scheme, and we include a theoretical comparison between these three methods. The final comparison is between different methods of performing longer-horizon forecasts, including ways to partition the problem into several subproblems of forecasting K steps ahead.
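
The direct-versus-recursive distinction mentioned above can be sketched as follows, assuming a generic scikit-learn regressor and a synthetic series in place of the Nile flow data and the paper's networks: the recursive method trains a one-step model and iterates it on its own predictions, while the direct method trains a separate model for each lead time up to K. All data and hyperparameters are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    series = np.sin(np.arange(300) * 0.2) + 0.1 * rng.normal(size=300)  # stand-in for river flow
    p, K = 6, 4  # number of lags, forecast horizon

    def make_xy(y, lead):
        X = np.array([y[t - p:t] for t in range(p, len(y) - lead + 1)])
        targets = np.array([y[t + lead - 1] for t in range(p, len(y) - lead + 1)])
        return X, targets

    # Recursive: one-step-ahead model, iterated K times on its own outputs.
    X1, y1 = make_xy(series, lead=1)
    one_step = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X1, y1)
    window = list(series[-p:])
    recursive = []
    for _ in range(K):
        yhat = one_step.predict([window[-p:]])[0]
        recursive.append(yhat)
        window.append(yhat)

    # Direct: a separate model for each lead time 1..K.
    direct = []
    for lead in range(1, K + 1):
        Xk, yk = make_xy(series, lead)
        m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xk, yk)
        direct.append(m.predict([series[-p:]])[0])

    print("recursive:", np.round(recursive, 3), "direct:", np.round(direct, 3))

The usual trade-off: the recursive method trains one model but compounds its own prediction errors over the horizon, whereas the direct method avoids error feedback at the cost of training K separate models.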

7.
IEEE Trans Image Process ; 8(6): 762-73, 1999.
Article in English | MEDLINE | ID: mdl-18267491

ABSTRACT

Three-dimensional (3-D) video compression using wavelet decomposition along the temporal axis dictates that a number of video frames must be buffered to allow for the temporal decomposition. Buffering frames allows the temporal correlation to be exploited, and the larger the buffer, the more effective the decomposition. One problem inherent in such a setup in interactive applications, such as video conferencing, is that buffering translates into a corresponding time delay. We show that 3-D coding of such image sequences can be achieved in the true sense of temporal-direction decomposition, but with much smaller buffering requirements. For a practical coder, this can be achieved by introducing an approximation to the way the transform coefficients are encoded. Applying wavelet decomposition using some types of filters may introduce edge errors, which become more prominent in short signal segments. We also present a solution to this problem for the Daubechies (1988) family of filters.
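
A minimal sketch of temporal-axis wavelet decomposition of a buffer of frames, using PyWavelets and a Daubechies filter as an assumed stand-in for the paper's coder. The buffer size, filter choice, and random frames are illustrative; the low-delay approximation and the edge-error correction described above are not reproduced.

    import numpy as np
    import pywt

    # Assumed buffer of 32 grayscale frames of size 64x64 (random stand-in for real video).
    frames = np.random.default_rng(2).random((32, 64, 64))

    # Two-level wavelet decomposition along the temporal axis (axis 0) with a Daubechies filter.
    coeffs = pywt.wavedec(frames, wavelet='db4', level=2, axis=0)
    print([c.shape for c in coeffs])          # temporal subbands; each stays 64x64 spatially

    # Perfect-reconstruction check (up to floating-point error and possible length padding).
    recon = pywt.waverec(coeffs, wavelet='db4', axis=0)
    print(np.allclose(recon[:frames.shape[0]], frames))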

8.
IEEE Trans Neural Netw ; 5(2): 255-66, 1994.
Article in English | MEDLINE | ID: mdl-18267795

ABSTRACT

A nonlinear dynamic model is developed for a process system, namely a heat exchanger, using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network that appears effective for the input-output modeling of complex process systems. Dynamic gradient descent learning is used to train the recurrent multilayer perceptron, resulting in an order-of-magnitude improvement in convergence speed over a static learning algorithm used to train the same network. In developing the empirical process model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of noise in the training and testing sets, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response of the various testing sets. Extensive model validation studies with signals that are encountered in the operation of the process system modeled, that is, steps and ramps, indicate that the empirical model can substantially generalize to operational transients, including accurate prediction of instabilities not in the training set. However, the accuracy of the model beyond these operational transients has not been investigated. Furthermore, on-line learning is necessary during some transients and for tracking slowly varying process dynamics. Neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models.

9.
IEEE Trans Neural Netw ; 5(3): 493-7, 1994.
Article in English | MEDLINE | ID: mdl-18267816

ABSTRACT

An accelerated learning algorithm (ABP - adaptive back propagation) is proposed for the supervised training of multilayer perceptron networks. The learning algorithm is inspired by the principle of "forced dynamics" for the total error functional. The algorithm updates the weights in the direction of steepest descent, but with a learning rate that is a specific function of the error and of the error-gradient norm. The form of this function is chosen so as to accelerate convergence. Furthermore, ABP introduces none of the additional "tuning" parameters found in variants of the backpropagation algorithm. Simulation results indicate superior convergence speed, compared to other competing methods, for analog problems only, as well as reduced sensitivity to variations in the algorithm's step-size parameter.
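
The "forced dynamics" idea can be illustrated with a hedged sketch: if the error is asked to decay as dE/dt = -lambda * E under the gradient flow dw/dt = -eta * grad E, then eta = lambda * E / ||grad E||^2. This particular learning-rate function is an assumption made here to illustrate the principle, not a claim about ABP's exact formula; the quadratic toy error functional is likewise illustrative.

    import numpy as np

    def gradient_descent_forced(E, gradE, w0, lam=1.0, steps=200, dt=0.1):
        """Steepest descent with learning rate eta = lam * E / ||grad E||^2, an assumed
        form that makes the error decay roughly like exp(-lam * t) in continuous time."""
        w = np.asarray(w0, dtype=float)
        for _ in range(steps):
            g = gradE(w)
            gnorm2 = g @ g
            if gnorm2 < 1e-12:
                break
            eta = lam * E(w) / gnorm2
            w = w - dt * eta * g          # dt plays the role of the integration step
        return w

    # Illustrative quadratic "error functional".
    A = np.diag([1.0, 10.0])
    E = lambda w: 0.5 * w @ A @ w
    gradE = lambda w: A @ w
    w_final = gradient_descent_forced(E, gradE, w0=[3.0, -2.0])
    print(E(w_final))   # should be close to zero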

10.
IEEE Trans Neural Netw ; 5(4): 612-21, 1994.
Article in English | MEDLINE | ID: mdl-18267834

ABSTRACT

We investigate the effects of delays on the dynamics and, in particular, on the oscillatory properties of simple neural network models. We extend previously known results regarding the effects of delays on stability and convergence properties. We treat in detail the case of ring networks for which we derive simple conditions for oscillating behavior and several formulas to predict the regions of bifurcation, the periods of the limit cycles and the phases of the different neurons. These results in turn can readily be applied to more complex and more biologically motivated architectures, such as layered networks. In general, the main result is that delays tend to increase the period of oscillations and broaden the spectrum of possible frequencies, in a quantifiable way. Simulations show that the theoretically predicted values are in excellent agreement with the numerically observed behavior. Adaptable delays are then proposed as one additional mechanism through which neural systems could tailor their own dynamics. Accordingly, we derive recurrent backpropagation learning formulas for the adjustment of delays and other parameters in networks with delayed interactions and discuss some possible applications.
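
The effect of a delay on a simple ring network can be sketched as below, assuming a Euler-discretized three-neuron inhibitory ring of tanh units with a common delay; the model, gains, and delay values are illustrative assumptions rather than the paper's formulation, but increasing the delay lengthens the estimated oscillation period, in line with the result stated above.

    import numpy as np

    def simulate_ring(delay=0.0, gain=3.0, n=3, dt=0.05, t_end=60.0, seed=3):
        """Euler simulation of a delayed inhibitory ring:
        dx_i/dt = -x_i + tanh(-gain * x_{i-1}(t - delay)).  Returns neuron 0's trace."""
        rng = np.random.default_rng(seed)
        d = int(round(delay / dt))
        steps = int(round(t_end / dt))
        x = np.zeros((steps + d + 1, n))
        x[:d + 1] = rng.normal(scale=0.1, size=(d + 1, n))   # initial history
        for t in range(d, steps + d):
            for i in range(n):
                x[t + 1, i] = x[t, i] + dt * (-x[t, i] + np.tanh(-gain * x[t - d, (i - 1) % n]))
        return x[d:, 0]

    def dominant_period(trace, dt=0.05):
        """Rough period estimate from the dominant Fourier component (transient discarded)."""
        s = trace[len(trace) // 2:] - trace[len(trace) // 2:].mean()
        spec = np.abs(np.fft.rfft(s))
        freqs = np.fft.rfftfreq(len(s), d=dt)
        return 1.0 / freqs[1 + np.argmax(spec[1:])]

    for delay in (0.0, 0.5, 1.0):
        print(f"delay {delay}: period ~ {dominant_period(simulate_ring(delay=delay)):.2f} time units")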

11.
IEEE Trans Biomed Eng ; 39(7): 723-9, 1992 Jul.
Article in English | MEDLINE | ID: mdl-1516939

ABSTRACT

An essential step in studying nerve cell interaction during information processing is the extracellular microelectrode recording of the electrical activity of groups of adjacent cells. The recording usually contains the superposition of the spike trains produced by a number of neurons in the vicinity of the electrode. It is therefore necessary to correctly classify the signals generated by these different neurons. This paper addresses this problem by developing a new classification scheme that does not require human supervision. A learning stage is first applied to the beginning portion of the recording to estimate the typical spike shapes of the different neurons. For the classification stage, a method is developed that specifically handles the case in which spikes overlap temporally. The method minimizes the probability of error, taking into account the statistical properties of the neurons' discharges. The method is tested on a real recording as well as on synthetic data.
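
A hedged sketch of the classification idea: given spike templates (assumed here to have been estimated in a prior learning stage), each detected segment is assigned to the single template, or to the sum of a pair of shifted templates to cover temporal overlaps, that minimizes the residual energy, which corresponds to maximum-likelihood classification under white Gaussian noise. The templates, noise model, and candidate set are illustrative assumptions, not the paper's exact statistical formulation (which also weights hypotheses by the neurons' discharge statistics).

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    L = 40  # samples per spike window

    # Assumed templates from a prior learning stage (two illustrative spike shapes).
    t_axis = np.arange(L)
    templates = np.stack([
        1.0 * np.exp(-0.5 * ((t_axis - 12) / 3.0) ** 2) - 0.4 * np.exp(-0.5 * ((t_axis - 20) / 5.0) ** 2),
        0.7 * np.exp(-0.5 * ((t_axis - 15) / 2.0) ** 2),
    ])

    def classify(segment, templates, max_shift=6):
        """Return the label ('i', or 'i+j' for an overlap) minimizing residual energy."""
        best = (np.sum(segment ** 2), "noise")                 # empty hypothesis
        for i, tpl in enumerate(templates):                    # single-spike hypotheses
            r = np.sum((segment - tpl) ** 2)
            if r < best[0]:
                best = (r, str(i))
        for i, j in combinations(range(len(templates)), 2):    # pairwise overlap hypotheses
            for shift in range(-max_shift, max_shift + 1):
                overlap = templates[i] + np.roll(templates[j], shift)
                r = np.sum((segment - overlap) ** 2)
                if r < best[0]:
                    best = (r, f"{i}+{j} (shift {shift})")
        return best[1]

    # Synthetic test: an overlapping pair of spikes plus noise.
    segment = templates[0] + np.roll(templates[1], 4) + 0.05 * rng.normal(size=L)
    print(classify(segment, templates))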


Subject(s)
Action Potentials; Classification; Motor Neurons/physiology; Signal Processing, Computer-Assisted; Algorithms; Bias; Evaluation Studies as Topic; Probability Learning