Noisy Label Learning With Provable Consistency for a Wider Family of Losses.
IEEE Trans Pattern Anal Mach Intell
; 45(11): 13536-13552, 2023 Nov.
Article in En | MEDLINE | ID: mdl-37459268
Deep models have achieved state-of-the-art performance on a broad range of visual recognition tasks. Nevertheless, the generalization ability of deep models is seriously affected by noisy labels. Although deep learning packages offer many loss functions, it is not transparent to users which of these losses are consistent under label noise. This paper addresses the problem of how to use the abundant loss functions designed for the traditional classification problem in the presence of label noise. We present a dynamic label learning (DLL) algorithm for noisy label learning and then prove that any surrogate loss function can be used for classification with noisy labels by using our proposed algorithm, with a consistency guarantee that label noise does not ultimately hinder the search for the optimal classifier on the noise-free data. In addition, we provide an in-depth theoretical analysis of our algorithm to verify the correctness of our justification and to explain its strong robustness. Finally, experimental results on synthetic and real datasets confirm the effectiveness of our algorithm and the correctness of our theoretical analysis, and show that our proposed algorithm significantly outperforms or is comparable to current state-of-the-art counterparts.
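The abstract does not spell out the DLL algorithm itself, but the consistency idea it relies on can be illustrated with a standard, generic construction from the noisy-label literature: forward loss correction, where the model's clean-label distribution is pushed through an assumed class-conditional noise transition matrix before the surrogate loss is evaluated on the observed (noisy) labels. The sketch below is an assumption-laden illustration of that general idea, not the paper's method; the matrix `T`, the flip rate, and the toy logits are all invented for demonstration.

```python
import numpy as np

# Illustrative sketch of forward loss correction (NOT the paper's DLL
# algorithm): T[i, j] = P(observed label = j | true label = i), assumed known.

def forward_corrected_nll(logits, noisy_labels, T):
    """Negative log-likelihood on the noisy labels after pushing the
    model's clean-label softmax distribution through the noise matrix T."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable softmax
    p_clean = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_noisy = p_clean @ T                                # predicted P(observed = j | x)
    picked = p_noisy[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12).mean()

# Symmetric noise with flip rate 0.2 over 3 classes (toy assumption).
eps, k = 0.2, 3
T = (1 - eps) * np.eye(k) + eps / (k - 1) * (np.ones((k, k)) - np.eye(k))

# Toy logits for two confidently classified examples with matching labels.
logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
labels = np.array([0, 1])
loss = forward_corrected_nll(logits, labels, T)
```

Minimizing such a corrected loss over the noisy distribution recovers the minimizer over the clean distribution, which is the kind of consistency guarantee the abstract claims DLL extends to a wider family of surrogate losses.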
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Language: En
Journal: IEEE Trans Pattern Anal Mach Intell
Journal subject: Medical Informatics
Year: 2023
Document type: Article
Country of publication: United States