1.
IEEE Trans Neural Netw Learn Syst; 24(11): 1799-813, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24808613

ABSTRACT

This paper considers the classification problem using support vector (SV) machines and investigates how to maximally reduce the size of the training set without losing information. Under the assumption of a separable data set, we derive the exact conditions stating which observations can be discarded without diminishing the overall information content. For this purpose, we introduce the concept of potential SVs, i.e., those data that can become SVs when future data become available. To complement this, we also characterize the set of discardable vectors (DVs), i.e., those data that, given the current data set, can never become SVs. These vectors are therefore useless for future training purposes and can be removed without loss of information. We then provide an efficient algorithm based on linear programming that returns the potential SVs and DVs by constructing a simplex tableau. Finally, we compare it with alternative algorithms available in the literature on synthetic data as well as on data sets from standard repositories.
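
The exact discardability conditions and the simplex-tableau construction are developed in the paper itself. As a rough, hedged illustration of the underlying idea only, the sketch below uses a linear-programming feasibility test (via SciPy's linprog) to flag training points that are convex combinations of other points of their own class; for a hard-margin SVM on separable data, such points cannot affect the solution, now or after more data arrive. All function names are illustrative, not the authors'.

# Illustrative sketch, not the paper's algorithm: a point that is a convex
# combination of other points of its own class never binds the hard-margin
# SVM constraints, so it can be discarded.  Hull membership is decided by an
# LP feasibility problem, echoing the paper's linear-programming flavor.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """True if x is a convex combination of the rows of `points`."""
    n = points.shape[0]
    # Feasibility LP: lambda >= 0, points.T @ lambda = x, sum(lambda) = 1.
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # feasible -> x lies in the hull

def candidate_support_vectors(X, y):
    """Indices of points NOT expressible as convex combinations of same-class peers."""
    keep = []
    for i in range(len(X)):
        peers = np.where(y == y[i])[0]
        peers = peers[peers != i]
        if not in_convex_hull(X[i], X[peers]):
            keep.append(i)
    return keep

# Toy usage: interior points of each separable blob are flagged as discardable.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print("kept as potential SVs:", len(candidate_support_vectors(X, y)), "of", len(X))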

2.
IEEE Trans Neural Netw; 22(10): 1576-87, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21859617

ABSTRACT

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with a convex loss function and quadratic-norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well suited to parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. Both methodologies are very easy to implement. In this paper, we also show how to remove the non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.


Subjects
Algorithms, Artificial Intelligence, Neural Networks (Computer), Humans, Linear Models, Neurological Models, Software Design
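
A minimal sketch, not the paper's algorithms: restricting to the quadratic loss (kernel ridge regression), whose coefficient vector solves (K + λI)c = y, a damped fixed-point (Richardson) iteration stands in for the first algorithm class and a cyclic coordinate-descent (Gauss-Seidel) sweep for the second; the paper's own schemes cover general convex losses and the differentiable reformulation. All names and parameter values are illustrative.

# Hedged sketch for the quadratic-loss special case only.
import numpy as np

def gaussian_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fixed_point(K, y, lam, n_iter=3000):
    """Damped fixed-point (Richardson) iteration for (K + lam*I) c = y."""
    A = K + lam * np.eye(len(y))
    step = 1.0 / np.linalg.norm(A, 2)       # spectral norm -> contraction
    c = np.zeros_like(y)
    for _ in range(n_iter):
        c = c - step * (A @ c - y)          # each update is fully parallelizable
    return c

def coordinate_descent(K, y, lam, n_sweeps=200):
    """Cyclic coordinate descent with closed-form one-dimensional updates."""
    n = len(y)
    A = K + lam * np.eye(n)
    c = np.zeros_like(y)
    for _ in range(n_sweeps):
        for i in range(n):
            c[i] = (y[i] - A[i] @ c + A[i, i] * c[i]) / A[i, i]
    return c

# Toy usage: both iterative schemes approach the direct solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
K, lam = gaussian_kernel(X, X), 0.1
c_exact = np.linalg.solve(K + lam * np.eye(40), y)
print(np.linalg.norm(fixed_point(K, y, lam) - c_exact),
      np.linalg.norm(coordinate_descent(K, y, lam) - c_exact))
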
3.
IEEE Trans Neural Netw; 22(2): 290-303, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21156391

ABSTRACT

A client-server architecture for simultaneously solving multiple learning tasks from distributed datasets is described. In this architecture, each client corresponds to an individual learning task and the associated dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving the privacy of individual data. The role of the server is to collect data in real time from the clients and codify the information in a common database. This information can be used by all the clients to solve their individual learning tasks, so that each client can exploit the information content of all the datasets without actually having access to the private data of the others. The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of "mixed effect" kernels. The methodology is illustrated through a simulated recommendation system, as well as an experiment involving pharmacological data from a multicentric clinical trial.


Subjects
Artificial Intelligence, Computer Simulation/standards, Electronic Data Processing/methods, Neural Networks (Computer), Automated Pattern Recognition/methods, Algorithms, Computers/standards, Software Design, Transfer (Psychology)
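
A minimal sketch of the "mixed effect" kernel idea, assuming a single shared RBF component plus a task-specific one; all names are illustrative, and the toy solve below pools the raw data centrally, so it does not reproduce the paper's privacy-preserving client-server exchange.

# Illustrative mixed-effect multitask kernel: similarity between example x of
# task i and example z of task j is alpha*k(x, z) + (1 - alpha)*[i == j]*k(x, z),
# so a common component shares information across tasks while a task-specific
# component preserves individual structure.
import numpy as np

def rbf(X, Z, gamma=1.0):
    return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

def mixed_effect_gram(X, tasks, alpha=0.5, gamma=1.0):
    """Gram matrix over all examples; tasks[i] is the task id of X[i]."""
    K = rbf(X, X, gamma)
    same_task = (tasks[:, None] == tasks[None, :]).astype(float)
    return alpha * K + (1.0 - alpha) * same_task * K

# Toy usage: two "clients" with few examples each are fitted jointly by
# regularized least squares on the mixed-effect Gram matrix.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
tasks = np.array([0] * 15 + [1] * 15)
y = np.sin(X[:, 0]) + 0.3 * (tasks == 1) + 0.1 * rng.normal(size=30)
G = mixed_effect_gram(X, tasks, alpha=0.7)
c = np.linalg.solve(G + 0.1 * np.eye(30), y)    # joint coefficients for both tasks
print("training residual:", np.linalg.norm(G @ c - y))
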
4.
IEEE Trans Pattern Anal Mach Intell; 32(2): 193-205, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20075452

ABSTRACT

Standard single-task kernel methods have recently been extended to the case of multitask learning in the context of regularization theory. There are experimental results, especially in biomedicine, showing the benefit of the multitask approach compared to the single-task one. However, a possible drawback is computational complexity. For instance, when regularization networks are used, complexity scales as the cube of the overall number of training data, which may be large when several tasks are involved. The aim of this paper is to derive an efficient computational scheme for an important class of multitask kernels. More precisely, a quadratic loss is assumed and each task consists of the sum of a common term and a task-specific one. Within a Bayesian setting, a recursive online algorithm is obtained, which updates both estimates and confidence intervals as new data become available. The algorithm is tested on two simulated problems and a real data set concerning xenobiotic administration in human patients.


Subjects
Artificial Intelligence, Bayes Theorem, Normal Distribution, Algorithms, Blood Glucose/metabolism, Computer Simulation, Humans, Pharmacokinetics, Xenobiotics/pharmacokinetics
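
A hedged sketch under simplifying assumptions: explicit features instead of kernels, Gaussian noise, and a parametric stand-in for the common-plus-task-specific decomposition; class and variable names are illustrative. It shows only the recursive flavor of the method, i.e., a Kalman-style rank-one update of the posterior mean and covariance (hence of the confidence intervals) as observations arrive one at a time.

# Illustrative online Bayesian update for y = common(x) + task_t(x) + noise,
# using a linear model in explicit features rather than the paper's kernels.
import numpy as np

class OnlineMultitask:
    def __init__(self, n_features, n_tasks, prior_var=1.0, noise_var=0.01):
        d = n_features * (1 + n_tasks)           # [common | task_1 | ... | task_T]
        self.mean = np.zeros(d)
        self.cov = prior_var * np.eye(d)
        self.noise_var = noise_var
        self.n_features, self.n_tasks = n_features, n_tasks

    def _design(self, x, task):
        z = np.zeros(self.n_features * (1 + self.n_tasks))
        z[:self.n_features] = x                              # common block
        start = self.n_features * (1 + task)
        z[start:start + self.n_features] = x                 # task-specific block
        return z

    def update(self, x, task, y):
        """Rank-one (Kalman-style) posterior update for one new observation."""
        z = self._design(x, task)
        s = z @ self.cov @ z + self.noise_var                # predictive variance of y
        gain = self.cov @ z / s
        self.mean = self.mean + gain * (y - z @ self.mean)
        self.cov = self.cov - np.outer(gain, z @ self.cov)

    def predict(self, x, task):
        """Posterior mean and 95% half-width of the noise-free prediction."""
        z = self._design(x, task)
        return z @ self.mean, 1.96 * np.sqrt(z @ self.cov @ z)

# Toy usage: two tasks share a common slope; data stream in one at a time.
rng = np.random.default_rng(0)
model = OnlineMultitask(n_features=1, n_tasks=2)
for _ in range(200):
    task = int(rng.integers(2))
    x = rng.uniform(-1, 1, size=1)
    y = 2.0 * x[0] + (0.5 if task == 0 else -0.5) * x[0] + 0.1 * rng.normal()
    model.update(x, task, y)
print(model.predict(np.array([1.0]), task=0))   # close to 2.5, with a small half-width
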