Results 1 - 4 of 4
1.
IEEE Trans Neural Netw; 17(4): 1091-7, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16856672

ABSTRACT

A truly distributed (as opposed to parallelized) support vector machine (SVM) algorithm is presented. Training data are assumed to come from the same distribution and to be stored locally at a number of different locations with processing capabilities (nodes). In several examples it has been found that, by exchanging a reasonably small amount of information among nodes, one obtains an SVM solution that is better than the one obtained when classifiers are trained only on local data, and comparable (although slightly worse) to that of the centralized approach (obtained when all the training data are available in one place). We propose and analyze two distributed schemes: a "naïve" distributed chunking approach, in which raw data (support vectors) are communicated, and the more elaborate distributed semiparametric SVM, which aims to further reduce the total amount of information passed between nodes while providing a privacy-preserving mechanism for information sharing. We show the feasibility of our proposal by evaluating the performance of the algorithms on benchmarks with both synthetic and real-world datasets.


Subject(s)
Artificial Intelligence; Algorithms; Database Management Systems/statistics & numerical data; Databases, Factual/statistics & numerical data
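
To make the "naïve" distributed chunking scheme described above concrete, here is a minimal sketch in which only support vectors circulate around a ring of nodes and each node retrains on its local shard plus the vectors it receives. The ring topology, the helper name ring_pass, and the use of scikit-learn's SVC are assumptions made for this illustration, not the authors' implementation.

    # Sketch: only support vectors travel between nodes (hypothetical setup).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    def ring_pass(node_data, n_rounds=3, C=1.0):
        """Circulate support vectors around a ring of nodes."""
        passed_X, passed_y, model = None, None, None
        for _ in range(n_rounds):
            for X_i, y_i in node_data:
                if passed_X is not None:
                    X_i = np.vstack([X_i, passed_X])      # local shard + received SVs
                    y_i = np.concatenate([y_i, passed_y])
                model = SVC(kernel="rbf", C=C).fit(X_i, y_i)
                passed_X = model.support_vectors_         # only SVs move on
                passed_y = y_i[model.support_]
        return model

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    shards = np.array_split(np.arange(600), 4)            # 4 nodes, 150 samples each
    final = ring_pass([(X[i], y[i]) for i in shards])
    print("support vectors in final model:", len(final.support_vectors_))

Since only support vectors are exchanged, the communication cost stays well below shipping the raw shards, which is the point of the chunking scheme.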
2.
IEEE Trans Neural Netw; 14(2): 296-303, 2003.
Article in English | MEDLINE | ID: mdl-18238013

ABSTRACT

In this paper, we propose a general technique for solving support vector classifiers (SVCs) with an arbitrary loss function, relying on the application of an iterative reweighted least squares (IRWLS) procedure. We further show that three properties of the SVC solution can be written as conditions on the loss function. This technique allows the empirical risk minimization (ERM) inductive principle to be implemented on large-margin classifiers while, at the same time, yielding very compact solutions (in terms of the number of support vectors). The improvements obtained by changing the SVC loss function are illustrated with synthetic and real-data examples.
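
As a concrete illustration of the IRWLS idea, the sketch below solves a linear SVC under the hinge loss: at each iteration the margin violations e_i = 1 - y_i f(x_i) yield per-sample weights a_i = C / e_i (zero for satisfied margins), and a regularized weighted least-squares system is solved for the weight vector. The specific weighting rule, the eps safeguard, and the fixed iteration count are assumptions made for this linear sketch; the paper handles arbitrary loss functions.

    # Sketch: linear SVC via iterative reweighted least squares (hinge loss).
    import numpy as np

    def irwls_linear_svc(X, y, C=1.0, n_iter=50, eps=1e-8):
        n, d = X.shape
        Xb = np.hstack([X, np.ones((n, 1))])        # absorb the bias term
        w = np.zeros(d + 1)
        R = np.eye(d + 1)
        R[-1, -1] = eps                             # (almost) no penalty on the bias
        for _ in range(n_iter):
            e = 1.0 - y * (Xb @ w)                  # hinge-loss margin violations
            a = np.where(e > 0, C / np.maximum(e, eps), 0.0)
            # Weighted LS normal equations: (R + Xb^T A Xb) w = Xb^T A y
            w = np.linalg.solve(R + Xb.T @ (a[:, None] * Xb), Xb.T @ (a * y))
        return w

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
    y = np.concatenate([-np.ones(100), np.ones(100)])
    w = irwls_linear_svc(X, y)
    Xb = np.hstack([X, np.ones((200, 1))])
    print("training accuracy:", np.mean(np.sign(Xb @ w) == y))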

3.
IEEE Trans Neural Netw; 12(5): 1047-59, 2001.
Article in English | MEDLINE | ID: mdl-18249932

ABSTRACT

An iterative block training method for support vector classifiers (SVCs) based on weighted least squares (WLS) optimization is presented. The algorithm, which minimizes structural risk in the primal space, is applicable to both linear and nonlinear machines. In some nonlinear cases, it is necessary to first project the data onto an intermediate-dimensional space by means of either principal component analysis or clustering techniques. The proposed approach yields very compact machines; the complexity reduction with respect to the SVC solution is especially notable in problems with highly overlapped classes. Furthermore, the formulation in terms of WLS minimization makes the development of adaptive SVCs straightforward, opening up new fields of application for this type of model: mainly, online processing of large amounts of (static/stationary) data, as well as online updating in nonstationary scenarios (adaptive solutions). The performance of this new type of algorithm is analyzed by means of several simulations.
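
The following sketch shows why the WLS formulation lends itself to the online/adaptive processing mentioned above: the least-squares sufficient statistics are folded in block by block, and a forgetting factor lam < 1 discounts old blocks in nonstationary scenarios. The update rule and names (adaptive_wls_svc, lam) are assumptions for illustration, not the paper's exact algorithm.

    # Sketch: block-wise adaptive SVC training via weighted least squares.
    import numpy as np

    def adaptive_wls_svc(blocks, C=1.0, lam=0.95, eps=1e-8, inner_iter=5):
        d = blocks[0][0].shape[1] + 1               # features + bias
        P = np.eye(d); P[-1, -1] = eps              # accumulated X^T A X (regularized)
        q = np.zeros(d)                             # accumulated X^T A y
        w = np.zeros(d)
        for X, y in blocks:                         # stream of data blocks
            Xb = np.hstack([X, np.ones((len(y), 1))])
            for _ in range(inner_iter):             # IRWLS passes within the block
                e = 1.0 - y * (Xb @ w)
                a = np.where(e > 0, C / np.maximum(e, eps), 0.0)
                P_blk = Xb.T @ (a[:, None] * Xb)
                q_blk = Xb.T @ (a * y)
                w = np.linalg.solve(lam * P + P_blk, lam * q + q_blk)
            P = lam * P + P_blk                     # fold block into the statistics
            q = lam * q + q_blk
        return w

    rng = np.random.default_rng(1)
    blocks = []
    for t in range(10):                             # classes drift slowly over time
        shift = 0.2 * t
        Xp = rng.normal(1.0 + shift, 1.0, (50, 2))
        Xn = rng.normal(-1.0 + shift, 1.0, (50, 2))
        blocks.append((np.vstack([Xp, Xn]),
                       np.concatenate([np.ones(50), -np.ones(50)])))
    w = adaptive_wls_svc(blocks)                    # tracks the drifting boundary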

4.
Neural Comput; 12(6): 1429-47, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10935721

ABSTRACT

The attractive possibility of applying layerwise block training algorithms to multilayer perceptrons (MLPs), which offers initial advantages in computational effort, is refined in this article by introducing a sensitivity correction factor into the formulation. This results in a clear performance advantage, which we verify in several applications. The reasons for this advantage are discussed and related to implicit relations with second-order techniques, natural gradient formulations through Fisher's information matrix, and sample selection. Extensions to recurrent networks and other research lines are suggested at the close of the article.


Subject(s)
Algorithms; Learning; Neural Networks, Computer; Expert Systems; Humans; Neoplasms/pathology
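
A minimal sketch of the layerwise idea for a one-hidden-layer MLP: the output layer is solved exactly by least squares, while each hidden unit solves a weighted least-squares problem whose per-sample weights come from the sensitivity of the output to that unit's pre-activation, standing in for the sensitivity correction factor. This concrete update rule is a reconstruction for illustration, not the authors' formulation.

    # Sketch: layerwise block training with sensitivity-weighted hidden updates.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2        # toy regression target

    n_hidden, ridge, step = 20, 1e-3, 0.5
    W1 = rng.normal(scale=0.5, size=(n_hidden, 4))  # hidden layer
    w2 = np.zeros(n_hidden)                         # linear output layer

    for epoch in range(50):
        A = X @ W1.T                                # hidden pre-activations
        H = np.tanh(A)
        # Output layer: exact regularized least squares given H.
        w2 = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
        err = H @ w2 - y
        dtanh = 1.0 - H ** 2
        for j in range(n_hidden):
            s = w2[j] * dtanh[:, j]                 # output sensitivity to A[:, j]
            target = A[:, j] - step * err * s       # gradient step in pre-activation
            sw = s ** 2 + 1e-8                      # sensitivity correction weights
            G = X * sw[:, None]
            W1[j] = np.linalg.solve(X.T @ G + ridge * np.eye(4), G.T @ target)

    print("final MSE:", np.mean((np.tanh(X @ W1.T) @ w2 - y) ** 2))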