1.
Neural Netw ; 143: 327-344, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34182234

ABSTRACT

Credit risk evaluation is a crucial yet challenging problem in financial analysis. It not only helps institutions reduce risk and ensure profitability, but also promotes fair practices for consumers. Data-driven algorithms such as artificial intelligence techniques treat the evaluation as a classification problem, aiming to classify transactions as default or non-default. Since non-default samples greatly outnumber default samples, this is a typical imbalanced learning problem, and each class or sample needs special treatment. Numerous data-level, algorithm-level, and hybrid methods have been proposed; cost-sensitive support vector machines (CSSVMs) are representative algorithm-level methods. Built on the minimization of symmetric and unbounded loss functions, CSSVMs impose higher misclassification penalties on minority instances through domain-specific parameters. However, such loss functions, used as error measures, do not admit an obvious cost-sensitive generalization. In this paper, we propose a robust cost-sensitive kernel method with Blinex loss (CSKB), which can be applied to credit risk evaluation. By inheriting the elegant merits of the Blinex loss function, namely asymmetry and boundedness, CSKB not only flexibly controls distinct costs for both classes but also enjoys robustness to noise. As a data-driven decision-making paradigm for credit risk evaluation, CSKB can achieve a "win-win" situation for both financial institutions and consumers. We solve the linear and nonlinear CSKB with the Nesterov accelerated gradient algorithm and the Pegasos algorithm, respectively. Moreover, the generalization capability of CSKB is analyzed theoretically. Comprehensive experiments on synthetic, UCI, and credit risk evaluation datasets demonstrate that CSKB compares favorably with benchmark methods across various measures.
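The abstract does not spell out the Blinex parameterization or the exact training objective, so the following is only a minimal sketch, not the paper's implementation: it assumes a bounded LINEX loss of the form blinex(u) = 1 - 1/(1 + lam * (exp(a*u) - a*u - 1)), applies it to the margin deficit 1 - y * (w . x) with class-dependent costs c_pos and c_neg, and trains a linear model with a generic Nesterov accelerated gradient loop. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def _safe_exp(z):
    # Clip to avoid overflow in exp; acceptable for a sketch.
    return np.exp(np.clip(z, -50.0, 50.0))

def linex(u, a=1.0):
    # LINEX loss: asymmetric and unbounded.
    return _safe_exp(a * u) - a * u - 1.0

def blinex(u, a=1.0, lam=1.0):
    # Assumed bounded-LINEX form: asymmetric, bounded in [0, 1).
    return 1.0 - 1.0 / (1.0 + lam * linex(u, a))

def blinex_grad(u, a=1.0, lam=1.0):
    # d/du of the assumed blinex form (chain rule through linex).
    dlinex = a * (_safe_exp(a * u) - 1.0)
    return lam * dlinex / (1.0 + lam * linex(u, a)) ** 2

def fit_linear_cskb(X, y, c_pos=2.0, c_neg=1.0, reg=1e-2,
                    a=1.0, lam=1.0, lr=0.1, epochs=200, mu=0.9):
    """Nesterov accelerated gradient on a cost-weighted Blinex risk.

    y must be in {-1, +1}; the loss is applied to the margin deficit
    1 - y * (w . x), weighted by the class-specific costs.
    """
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)        # weights and momentum buffer
    cost = np.where(y > 0, c_pos, c_neg)   # per-sample misclassification cost
    for _ in range(epochs):
        w_look = w + mu * v                # Nesterov look-ahead point
        margin = 1.0 - y * (X @ w_look)
        g = blinex_grad(margin, a, lam) * cost
        # Gradient of reg/2 * ||w||^2 + mean(cost * blinex(margin)).
        grad = reg * w_look - (X * (g * y)[:, None]).mean(axis=0)
        v = mu * v - lr * grad
        w = w + v
    return w
```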


Subject(s)
Artificial Intelligence , Support Vector Machine , Algorithms
2.
Neural Netw ; 75: 12-21, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26690682

ABSTRACT

The Nonparallel Support Vector Machine (NPSVM), which is more flexible and generalizes better than the typical SVM, is widely used for classification. Although solvers and toolboxes such as SMO and libsvm can be applied to NPSVM, it is hard to scale to millions of samples. In this paper, we propose a divide-and-combine method for large-scale nonparallel support vector machines (DCNPSVM). In the division step, DCNPSVM partitions the samples into smaller subsets so that the resulting subproblems can be solved independently. We prove, theoretically and experimentally, that the objective function value, solutions, and support vectors obtained by DCNPSVM are close to those of the whole NPSVM problem. In the combination step, the sub-solutions are combined into initial iteration points from which the whole problem is solved by global coordinate descent, which converges quickly. To balance accuracy and efficiency, we adopt a multi-level structure that outperforms state-of-the-art methods. Moreover, DCNPSVM can handle imbalanced problems efficiently by tuning its parameters. Experimental results on many large datasets show the effectiveness of our method in terms of memory usage, classification accuracy, and running time.
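The abstract describes the two-step scheme but not the solver internals, so this sketch only illustrates the divide-then-warm-start pattern. It substitutes scikit-learn's SGDClassifier (hinge loss) both for the NPSVM subproblem solver and for the paper's global coordinate descent, since NPSVM itself is not in standard toolkits; only the partitioning, averaging, and warm start mirror the described method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def divide_and_combine(X, y, n_parts=4, seed=0):
    """Divide-and-combine sketch: solve subproblems independently, then
    warm-start a final pass over the full data from the averaged
    sub-solution (a stand-in for the paper's global coordinate descent)."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(y)), n_parts)

    # Division step: one linear SVM-style subproblem per partition.
    coefs, intercepts = [], []
    for p in parts:
        sub = SGDClassifier(loss="hinge", alpha=1e-4).fit(X[p], y[p])
        coefs.append(sub.coef_)
        intercepts.append(sub.intercept_)

    # Combination step: averaged sub-solutions as the initial iterate.
    w0 = np.mean(coefs, axis=0)
    b0 = np.mean(intercepts, axis=0)
    final = SGDClassifier(loss="hinge", alpha=1e-4)
    final.fit(X, y, coef_init=w0, intercept_init=b0)  # warm start
    return final
```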


Subject(s)
Cluster Analysis , Support Vector Machine , Algorithms , Humans
3.
IEEE Trans Cybern ; 44(7): 1067-79, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24013833

ABSTRACT

We propose a novel nonparallel classifier, called the nonparallel support vector machine (NPSVM), for binary classification. NPSVM differs fundamentally from existing nonparallel classifiers, such as the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM), and has several distinctive advantages: 1) its two primal problems implement the structural risk minimization principle; 2) the duals of these two primal problems have the same advantages as those of standard SVMs, so the kernel trick can be applied directly, whereas existing TWSVMs must construct two additional primal problems for nonlinear cases based on approximate kernel-generated surfaces, and their nonlinear formulations do not degenerate to the linear case even when the linear kernel is used; 3) the dual problems have the same elegant formulation as that of standard SVMs and can be solved efficiently by the sequential minimal optimization (SMO) algorithm, whereas existing GEPSVM and TWSVM formulations are unsuitable for large-scale problems; 4) it has the inherent sparseness of standard SVMs; 5) existing TWSVMs are special cases of NPSVM when its parameters are chosen appropriately. Experimental results on many datasets show the effectiveness of our method in both sparseness and classification accuracy, further confirming these conclusions. In some sense, NPSVM is a new starting point for nonparallel classifiers.
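The abstract does not include NPSVM's primal or dual formulations, so the sketch below illustrates only the shared nonparallel-plane idea, using a least-squares twin-SVM-style closed form as a deliberate stand-in for the paper's SMO-solved duals: each class gets a hyperplane that hugs its own points while being pushed away from the other class, and a new point is assigned to the class whose plane is nearer. The parameter names and the Tikhonov term are illustrative choices.

```python
import numpy as np

def fit_nonparallel_planes(X, y, c=1.0, tik=1e-6):
    """Least-squares twin-SVM-style closed form (not the paper's NPSVM).

    Returns augmented plane parameters z = [w; b] for each class.
    """
    A = X[y == 1]                                # class +1 samples
    B = X[y == -1]                               # class -1 samples
    P = np.hstack([A, np.ones((len(A), 1))])     # augmented [A, e]
    Q = np.hstack([B, np.ones((len(B), 1))])     # augmented [B, e]
    I = tik * np.eye(P.shape[1])                 # Tikhonov term for stability
    # Plane 1: near class +1, pushed to |w.x + b| >= 1 on class -1.
    z1 = np.linalg.solve(P.T @ P + c * Q.T @ Q + I,
                         -c * Q.T @ np.ones(len(B)))
    # Plane 2: near class -1, pushed away from class +1 symmetrically.
    z2 = np.linalg.solve(Q.T @ Q + c * P.T @ P + I,
                         c * P.T @ np.ones(len(A)))
    return z1, z2

def predict(z1, z2, X):
    """Assign each point to the class whose plane is nearer."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xa @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xa @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)
```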
