Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38335085

ABSTRACT

Semi-supervised support vector machines (S3VMs) are important because they can use plentiful unlabeled data to improve the generalization accuracy of traditional SVMs. To achieve good performance, an S3VM must select its hyperparameters effectively; however, model selection for semi-supervised models remains a key open problem. Existing methods for searching the optimal parameter values of semi-supervised models are usually computationally demanding, especially those based on grid search. To address this challenging problem, in this article we first propose solution paths of S3VM (SPS3VM), which can track the solutions of the nonconvex S3VM with respect to the hyperparameters. Specifically, we apply incremental and decremental learning methods to update the solution so that it satisfies the Karush-Kuhn-Tucker (KKT) conditions. Based on SPS3VM and the piecewise linearity of the model function, we can find the model with the minimum cross-validation (CV) error over the entire range of candidate hyperparameters by computing the error path of S3VM. SPS3VM is the first solution path algorithm for the nonconvex optimization problem of semi-supervised learning models. We also provide a finite convergence analysis and the computational complexity of SPS3VM. Experimental results on a variety of benchmark datasets not only verify that SPS3VM can globally search the hyperparameters (regularization and ramp loss parameters) but also show a huge reduction in computational time while retaining similar or slightly better generalization performance compared with grid search.
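
SPS3VM itself has no public reference implementation; for contrast, here is a minimal sketch of the grid-search baseline the abstract compares against, written with scikit-learn (the dataset, kernel, and candidate grid are illustrative assumptions, and scikit-learn's SVC uses the hinge loss rather than the ramp loss of S3VM):

    # Grid search cross-validates every candidate value separately; a
    # solution path algorithm instead tracks the optimal solution
    # continuously across the whole hyperparameter range.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # Synthetic stand-in for a benchmark dataset (illustrative only).
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    grid = {"C": np.logspace(-3, 3, 13)}  # candidate regularization values
    search = GridSearchCV(SVC(kernel="linear"), grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

Each grid point above costs a full retraining per CV fold, which is exactly the expense that tracking the KKT-satisfying solution path is designed to avoid.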

2.
IEEE Trans Neural Netw Learn Syst ; 34(1): 490-501, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34310326

ABSTRACT

It is well known that the performance of a kernel method depends heavily on the choice of kernel parameter. However, existing kernel path algorithms are limited to plain support vector machines (SVMs), which have a single equality constraint; providing a kernel path algorithm for ν-support vector classification (ν-SVC), which has more than one equality constraint, remains an open question. Compared with the plain SVM, ν-SVC has the advantage of using a regularization parameter ν to control the number of support vectors and margin errors. To address this problem, in this article we propose a kernel path algorithm (KPνSVC) to trace the solutions of ν-SVC exactly with respect to the kernel parameter. Specifically, we first provide an equivalent formulation of ν-SVC with two equality constraints, which avoids possible conflicts while tracing the solutions of ν-SVC. Based on this equivalent formulation, we propose the KPνSVC algorithm to trace the solutions with respect to the kernel parameter. However, KPνSVC traces the nonlinear solutions of the kernel method rather than the errors of the loss function, so guaranteeing that the globally optimal model is found remains a challenge. To address this, we extend the classical error path algorithm to nonlinear kernel solution paths and propose a new kernel error path (KEP) algorithm that is guaranteed to find the globally optimal kernel parameter by minimizing the cross-validation error. We also provide finite convergence and computational complexity analyses for KPνSVC and KEP. Extensive experimental results on a variety of benchmark datasets not only verify the effectiveness of KPνSVC but also show the advantage of applying KEP to select the optimal kernel parameter.
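
As a concrete illustration of the ν property the abstract builds on, the following sketch fits scikit-learn's NuSVC for several values of ν and reports the fraction of support vectors, which ν lower-bounds (the dataset and kernel settings are illustrative assumptions; KPνSVC itself is not part of scikit-learn):

    # In nu-SVC, nu is an upper bound on the fraction of margin errors
    # and a lower bound on the fraction of support vectors.
    from sklearn.datasets import make_classification
    from sklearn.svm import NuSVC

    # Synthetic stand-in for a benchmark dataset (illustrative only).
    X, y = make_classification(n_samples=300, n_features=5, random_state=0)

    for nu in (0.1, 0.3, 0.5):
        clf = NuSVC(nu=nu, kernel="rbf", gamma=1.0).fit(X, y)
        frac_sv = clf.n_support_.sum() / len(X)  # fraction of SVs >= nu
        print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f}")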

3.
IEEE Trans Neural Netw Learn Syst ; 34(11): 8866-8878, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35275826

ABSTRACT

Tuning the values of kernel parameters plays a vital role in the performance of kernel methods. Kernel path algorithms have been proposed for several important learning algorithms, including the support vector machine and kernelized Lasso; they can fit the piecewise nonlinear solutions of kernel methods with respect to the kernel parameter in a continuous space. Although error path algorithms have been proposed to ensure that the model with the minimum cross-validation (CV) error, which is usually the ultimate goal of model selection, can be found, they are limited to piecewise linear solution paths. To address this problem, in this article we extend the classic error path algorithm to nonlinear kernel solution paths and propose a new kernel error path (KEP) algorithm that can find the globally optimal kernel parameter with the minimum CV error. Specifically, we first prove that the error functions of binary classification and regression problems are piecewise constant or smooth w.r.t. the kernel parameter. Then, we propose KEP for the support vector machine and kernelized Lasso and prove that it is guaranteed to find the model with the minimum CV error within the whole range of kernel parameter values. Experimental results on various datasets show that KEP finds the model with the minimum CV error in less time and achieves better generalization error on the test set than grid search and random search.
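
KEP is not available in standard libraries; the sketch below merely samples, on a coarse grid, the CV-error-versus-kernel-parameter curve that KEP traces exactly along the solution path (the dataset, RBF kernel, and grid of gamma values are illustrative assumptions):

    # Cross-validation error as a function of the kernel parameter: the
    # abstract proves this curve is piecewise constant or smooth, which
    # is what lets KEP locate its global minimum exactly.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic stand-in for a benchmark dataset (illustrative only).
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    gammas = np.logspace(-3, 2, 11)  # candidate kernel parameter values
    cv_err = [1 - cross_val_score(SVC(kernel="rbf", gamma=g), X, y, cv=5).mean()
              for g in gammas]
    best = gammas[int(np.argmin(cv_err))]
    print(f"grid-approximated optimal gamma: {best:.4g}")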
