Results 1 - 3 of 3
1.
Neural Netw ; 171: 200-214, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38096649

ABSTRACT

The loss function is a critical component of machine learning. Robust loss functions have been proposed to mitigate the adverse effects of noise, but they still face several challenges. First, there is no unified framework for building robust loss functions in machine learning. Second, most of them focus only on noisy points and pay little attention to normal ones. Third, the resulting performance gains are limited. To this end, we put forward a general framework of robust loss functions for machine learning (RML), with rigorous theoretical analyses, that can smoothly and adaptively flatten any unbounded loss function and can be applied to various machine learning problems. In RML, an unbounded loss function serves as the target to be flattened. A scale parameter limits the maximum loss value that noisy points can contribute, while a shape parameter controls both the compactness and the growth rate of the flattened loss function. This framework is then used to flatten the Hinge and Square loss functions, yielding two robust kernel classifiers, FHSVM and FLSSVM, which can distinguish different types of data. Both are optimized with the stochastic variance reduced gradient (SVRG) approach. Extensive experiments demonstrate their superiority: the two classifiers consistently occupy the top two positions among all evaluated methods, achieving an average accuracy of 81.07% (F-score 73.25%) for FHSVM and 81.54% (F-score 75.71%) for FLSSVM.
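To illustrate the idea of flattening an unbounded loss with a scale and a shape parameter, the following is a minimal sketch. The abstract does not give RML's actual formula, so the exponential flattening form used here (scale * (1 - exp(-(loss/scale)^shape))) is an assumption chosen only to show the roles the two parameters play: the scale caps how much any noisy point can contribute, and the shape controls how quickly the flattened loss saturates.

```python
import numpy as np

def flatten_loss(loss, scale=1.0, shape=1.0):
    # Hypothetical flattening transform (not the paper's exact RML form):
    # bounded above by `scale`; `shape` controls compactness/growth rate.
    return scale * (1.0 - np.exp(-(loss / scale) ** shape))

def hinge(margin):
    # Standard unbounded Hinge loss on the margin y * f(x).
    return np.maximum(0.0, 1.0 - margin)

# The last point has a large negative margin, i.e., a likely noisy outlier:
margins = np.array([2.0, 0.5, -5.0])
raw = hinge(margins)                      # [0.0, 0.5, 6.0] -- unbounded
flat = flatten_loss(raw, scale=1.0)       # outlier's loss saturates near 1.0
```

Under this illustrative transform the outlier's loss is capped near the scale parameter instead of growing linearly, which is the mechanism the abstract describes for limiting the influence of noise points.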


Subject(s)
Algorithms , Machine Learning
2.
ISA Trans ; 140: 279-292, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37385859

ABSTRACT

Class imbalance is a common and enduring problem. When the data distribution is imbalanced, conventional methods tend to classify minority samples as majority ones, which can have severe consequences in practice. Coping with such problems is crucial yet challenging. In this paper, inspired by our previous work, we borrow the linear-exponential (LINEX) loss function from statistics into deep learning for the first time and extend it to a multi-class form, denoted DLINEX. Compared with existing loss functions for class-imbalanced learning (e.g., the weighted cross-entropy loss and the focal loss), DLINEX has an asymmetric geometric interpretation and can adaptively focus more on minority and hard-to-classify samples by adjusting a single parameter. It also achieves both between-class and within-class diversity by attending to the inherent properties of each instance. As a result, DLINEX achieves 42.08% G-means on CIFAR-10 at an imbalance ratio of 200, 79.06% G-means on HAM10000, 82.74% F1 on DRIVE, 83.93% F1 on CHASEDB1, and 79.55% F1 on STARE. Quantitative and qualitative experiments convincingly demonstrate that DLINEX works favorably for imbalanced classification, at both the image level and the pixel level.
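The asymmetry the abstract highlights can be seen in the classical scalar LINEX loss from statistics, b * (exp(a*e) - a*e - 1): one side of zero is penalized exponentially, the other roughly linearly, with a single parameter `a` controlling the asymmetry. The sketch below shows only this classical form; the paper's multi-class DLINEX extension is not specified in the abstract and is not reproduced here.

```python
import numpy as np

def linex(error, a=1.0, b=1.0):
    # Classical LINEX loss: b * (exp(a*e) - a*e - 1), with b > 0.
    # For a > 0, positive errors are penalized exponentially while
    # negative errors grow only roughly linearly (asymmetric geometry).
    return b * (np.exp(a * error) - a * error - 1.0)

errors = np.array([-1.0, 0.0, 1.0])
vals = linex(errors, a=1.0)
# Loss is zero at e = 0 and larger at e = +1 than at e = -1,
# reflecting the one-parameter asymmetry.
```

Flipping the sign of `a` flips which side is penalized exponentially, which is the lever the abstract describes for focusing the loss on minority or hard-to-classify samples.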

3.
Neural Netw ; 161: 708-734, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36848826

ABSTRACT

Partial label learning (PLL) is an emerging framework in weakly supervised machine learning with broad application prospects. It handles the case in which each training example is associated with a candidate label set, only one label of which is the ground truth. In this paper, we propose a novel taxonomy for PLL comprising four categories: disambiguation strategies, transformation strategies, theory-oriented strategies, and extensions. We analyze and evaluate the methods in each category and catalogue synthetic and real-world PLL datasets, all hyperlinked to the source data. Based on the proposed taxonomy, future directions for PLL are discussed in depth.


Subject(s)
Supervised Machine Learning