1.
Neural Netw ; 21(9): 1344-62, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18272334

ABSTRACT

In this paper we propose a boosting approach to the random subspace method (RSM) that improves performance and avoids some of RSM's major drawbacks. RSM is a successful classification method, but the random selection of inputs, the source of its success, can also be a major weakness: for some problems, several of the selected subspaces may lack the discriminative ability to separate the classes, and the poor classifiers trained on them harm the ensemble. Boosting RSM would be a natural way to improve its performance; however, the naive combination of the two methods performs poorly, worse than either method alone. In this work we propose a new way of combining RSM and boosting: instead of drawing random subspaces, we search for subspaces that minimize the weighted classification error given by the boosting algorithm, and the new classifier added to the ensemble is trained on the subspace found. An additional advantage of the proposed methodology is that it can be used with any classifier, including those, such as k-nearest-neighbor classifiers, that cannot easily use boosting methods. The proposed approach is compared with standard AdaBoost and RSM, showing improved performance on a large set of 45 problems from the UCI Machine Learning Repository. A further study of the effect of noise on the labels of the training instances shows that the less aggressive versions of the proposed methodology are more robust than AdaBoost in the presence of noise.


Subject(s)
Algorithms , Artificial Intelligence , Classification , Genetics/statistics & numerical data , Models, Statistical , Mutation/physiology , Neural Networks, Computer
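The abstract's core idea, searching for the subspace that minimizes the boosting-weighted error rather than accepting a single random draw, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the base learner (a weighted nearest-centroid classifier), the candidate-search strategy, and all function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y, w):
    # Deliberately simple base learner: weighted class centroids.
    classes = np.unique(y)
    cents = np.array([np.average(X[y == c], axis=0, weights=w[y == c])
                      for c in classes])
    return classes, cents

def nearest_centroid_predict(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def boosted_subspace_ensemble(X, y, n_rounds=5, k=2, n_candidates=30):
    n, d = X.shape
    w = np.full(n, 1.0 / n)        # boosting weights over instances
    ensemble = []                  # (alpha, feature subset, model) triples
    for _ in range(n_rounds):
        # Search step: keep the candidate subspace with the lowest
        # weighted error instead of a single random draw.
        best = None
        for _ in range(n_candidates):
            feats = rng.choice(d, size=k, replace=False)
            m = nearest_centroid_fit(X[:, feats], y, w)
            pred = nearest_centroid_predict(m, X[:, feats])
            err = float(np.sum(w[pred != y]))
            if best is None or err < best[0]:
                best = (err, feats, m, pred)
        err, feats, m, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # AdaBoost-style vote weight
        w *= np.exp(np.where(pred != y, alpha, -alpha))
        w /= w.sum()
        ensemble.append((alpha, feats, m))
    return ensemble

def ensemble_predict(ensemble, X, classes):
    votes = np.zeros((X.shape[0], len(classes)))
    for alpha, feats, m in ensemble:
        pred = nearest_centroid_predict(m, X[:, feats])
        for j, c in enumerate(classes):
            votes[:, j] += alpha * (pred == c)
    return classes[np.argmax(votes, axis=1)]
```

Because the base learner only sees the instance weights through the subspace search, the scheme also applies to classifiers such as k-NN that are hard to boost by reweighting alone, which matches the abstract's claim.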
2.
IEEE Trans Pattern Anal Mach Intell ; 28(6): 1001-6, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16724593

ABSTRACT

We present a new method of multiclass classification based on combining the one-vs-all method with a modification of the one-vs-one method. The proposed combination draws on the strengths of both methods. A study of their behavior identifies some of the sources of their failure; the performance of a classifier can be improved by combining the two methods in such a way that the main sources of their failure are partially avoided.


Subject(s)
Algorithms , Artificial Intelligence , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Cluster Analysis
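One plausible reading of this combination, offered only as a sketch since the abstract does not spell out the exact modification, is to use the one-vs-all scores to shortlist the two most likely classes and let the corresponding one-vs-one classifier break the tie. The linear least-squares base classifier and all names below are assumptions.

```python
import numpy as np

def fit_linear(X, y_pm):
    # Least-squares fit to +/-1 targets (with a bias column); a simple
    # stand-in for whatever binary base classifier is actually used.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y_pm, rcond=None)
    return w

def score_linear(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

class OvaOvoHybrid:
    """Shortlist the two top classes with one-vs-all scores, then let the
    matching one-vs-one classifier make the final call."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.ova_ = {c: fit_linear(X, np.where(y == c, 1.0, -1.0))
                     for c in self.classes_}
        self.ovo_ = {}
        for i, a in enumerate(self.classes_):
            for b in self.classes_[i + 1:]:
                mask = (y == a) | (y == b)
                self.ovo_[(a, b)] = fit_linear(
                    X[mask], np.where(y[mask] == a, 1.0, -1.0))
        return self

    def predict(self, X):
        scores = np.column_stack(
            [score_linear(self.ova_[c], X) for c in self.classes_])
        order = np.argsort(scores, axis=1)
        out = []
        for i in range(X.shape[0]):
            a, b = self.classes_[order[i, -1]], self.classes_[order[i, -2]]
            key = (a, b) if (a, b) in self.ovo_ else (b, a)
            s = score_linear(self.ovo_[key], X[i:i + 1])[0]
            out.append(key[0] if s >= 0 else key[1])
        return np.array(out)
```

The rationale mirrors the abstract's diagnosis: one-vs-all fails mostly when the top two scores are close, which is exactly the situation a pairwise classifier trained only on those two classes handles best.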
3.
Neural Netw ; 19(4): 514-28, 2006 May.
Article in English | MEDLINE | ID: mdl-16343847

ABSTRACT

In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming, usually preferred because of the problems caused by applying crossover to neural networks. Yet crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems in applying crossover to neural networks is the permutation problem: the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator to take into account the special features of the individuals being mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. Since each hidden node represents a non-linear projection of the input variables, we treat crossover as a combinatorial optimization problem: the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared with a classical crossover on 25 real-world problems, showing excellent performance; moreover, the networks obtained are much smaller than those produced by the classical crossover operator.


Subject(s)
Algorithms , Biological Evolution , Genetics , Nerve Net/physiology , Neural Networks, Computer , Animals , Computer Simulation , Cross-Over Studies , Humans , Nonlinear Dynamics
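The idea of treating crossover as the extraction of a subset of near-optimal hidden-layer projections can be illustrated with a greedy sketch. This is not the authors' algorithm: pooling both parents' hidden nodes, the greedy forward selection, and the least-squares output fit are assumptions made for illustration only.

```python
import numpy as np

def hidden_activations(X, V):
    # Each row of V holds one hidden node's input weights:
    # a non-linear projection of the inputs, as in the abstract.
    return np.tanh(X @ V.T)

def fit_output(H, y):
    # With hidden activations fixed, the output layer is a least-squares fit.
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def crossover_select(X, y, parent_a, parent_b, k):
    # Pool both parents' hidden nodes, then greedily extract the subset of
    # k projections whose combination best fits the targets.
    pool = np.vstack([parent_a, parent_b])
    chosen, remaining = [], list(range(pool.shape[0]))
    for _ in range(k):
        best = None
        for j in remaining:
            H = hidden_activations(X, pool[chosen + [j]])
            w = fit_output(H, y)
            err = float(np.mean((H @ w - y) ** 2))
            if best is None or err < best[0]:
                best = (err, j)
        chosen.append(best[1])
        remaining.remove(best[1])
    return pool[chosen]
```

Because the child keeps whole hidden nodes rather than splicing weight vectors, two different codings of the same network contribute the same pool of projections, which is how this formulation sidesteps the permutation problem.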