Neural Netw; 19(4): 500-13, 2006 May.
Article in English | MEDLINE | ID: mdl-16352417

ABSTRACT

Neural networks (NNs) are 'black box' models and therefore suffer from interpretation difficulties. This paper compares four recent methods for inferring variable influence in NNs, each assisting the interpretation task during a different phase of the modeling procedure. They are drawn from information theory (ITSS), the Bayesian framework (ARD), the analysis of the network's weights (GIM), and the sequential omission of variables (SZW). The comparison is based on artificial and real data sets of differing size, complexity and noise level, and the influence of the neural network's size is also considered. The results provide useful information about the agreement between the methods under different conditions. Generally, SZW and GIM differ from ARD regarding variable influence, even when applied to NNs with similar modeling accuracy and when larger data set sizes are used. ITSS produces results similar to SZW and GIM, although it suffers more from the 'curse of dimensionality'.
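To make the omission idea behind SZW concrete, the following is a minimal sketch, not the paper's actual algorithm: it trains a small network, then zeroes each input in turn and records the increase in error as a rough influence score. The synthetic data, the use of scikit-learn's MLPRegressor, and zeroing as the form of omission are all assumptions for illustration only.

# Illustrative sketch of omission-based variable influence (assumed setup,
# not the SZW procedure from the article).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, and not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nn.fit(X, y)
baseline_mse = mean_squared_error(y, nn.predict(X))

# Omit (zero out) each variable and measure the rise in error:
# a larger rise suggests a more influential input.
for i in range(X.shape[1]):
    X_omit = X.copy()
    X_omit[:, i] = 0.0
    mse = mean_squared_error(y, nn.predict(X_omit))
    print(f"x{i}: delta MSE = {mse - baseline_mse:.3f}")

Under these assumptions the printed delta-MSE values should rank x0 above x1 and x1 above x2, mirroring how omission-based scores are read, although the article's SZW method and the weight-based (GIM), Bayesian (ARD) and information-theoretic (ITSS) approaches each define influence differently.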


Subject(s)
Algorithms; Artificial Intelligence; Neural Networks, Computer; Animals; Bayes Theorem; Computer Simulation; Humans; Logistic Models; Pattern Recognition, Automated