ABSTRACT
In this letter, an improvement to the recently developed neighborhood-based Levenberg-Marquardt (NBLM) algorithm is proposed and tested for neural network (NN) training. The algorithm is modified by allowing local adaptation of a separate learning coefficient for each neighborhood. This simple addition to the NBLM training method significantly increases the efficiency of training episodes carried out with small neighborhood sizes, thus allowing substantial savings in memory occupation and computational time while achieving better performance than the original Levenberg-Marquardt (LM) and NBLM methods.
Subject(s)
Algorithms; Models, Statistical; Neural Networks, Computer; Computer Simulation

ABSTRACT
Although the Levenberg-Marquardt (LM) algorithm has been extensively applied as a neural-network training method, it suffers from being very expensive, in both memory and the number of operations required, when the network to be trained has a significant number of adaptive weights. In this paper, the behavior of a recently proposed variation of this algorithm is studied. This new method is based on the application of the concept of neural neighborhoods to the LM algorithm. It is shown that, by performing an LM step on a single neighborhood at each training iteration, not only are significant savings in memory occupation and computing effort obtained, but the overall performance of the LM method can also be improved.
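To make the neighborhood-wise idea concrete, the sketch below applies an LM step to one block ("neighborhood") of parameters per iteration, with a separately adapted damping coefficient μ for each neighborhood, as the letter's improvement suggests. Everything specific here is an illustrative assumption, not the papers' actual procedure: the toy tanh least-squares problem, the fixed two-neighborhood partition, the cyclic visiting order, and the ×10 / ×0.1 μ-update factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem (stand-in for NN training): fit y = tanh(X w).
X = rng.normal(size=(50, 6))
w_true = rng.normal(size=6)
y = np.tanh(X @ w_true)

def residuals(w):
    return np.tanh(X @ w) - y

def jacobian(w):
    # d tanh(x.w)/dw = (1 - tanh(x.w)^2) * x, row-wise over the samples
    u = X @ w
    return (1.0 - np.tanh(u) ** 2)[:, None] * X

# Hypothetical partition of the 6 weights into two neighborhoods,
# each with its own damping coefficient mu (the letter's local adaptation).
neighborhoods = [np.array([0, 1, 2]), np.array([3, 4, 5])]
mu = [1e-2, 1e-2]

w = np.zeros(6)
for it in range(200):
    k = it % len(neighborhoods)      # visit neighborhoods cyclically
    idx = neighborhoods[k]
    e = residuals(w)
    J = jacobian(w)[:, idx]          # Jacobian w.r.t. this neighborhood only
    # LM step on the sub-problem: solve (J^T J + mu I) d = -J^T e.
    # The linear system is only |neighborhood| x |neighborhood|, which is
    # where the memory and computation savings come from.
    A = J.T @ J + mu[k] * np.eye(len(idx))
    d = np.linalg.solve(A, -J.T @ e)
    w_trial = w.copy()
    w_trial[idx] += d
    # Per-neighborhood mu adaptation: accept the step and decrease mu if
    # the error dropped, otherwise reject it and increase mu.
    if np.sum(residuals(w_trial) ** 2) < np.sum(e ** 2):
        w = w_trial
        mu[k] = max(mu[k] * 0.1, 1e-12)
    else:
        mu[k] = min(mu[k] * 10.0, 1e12)

print(np.sum(residuals(w) ** 2))  # final sum of squared errors
```

Note that each neighborhood keeps its own μ across visits, so a well-conditioned block can take near-Gauss-Newton steps while a poorly conditioned one stays closer to gradient descent.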