Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38032778

ABSTRACT

Multilabel image recognition (MLR) aims to annotate an image with a comprehensive set of labels and suffers from object occlusion and small object sizes within images. Although existing works attempt to capture and exploit label correlations to tackle these issues, they predominantly rely on global statistical label correlations as prior knowledge for guiding label prediction, neglecting the unique label correlations present within each image. To overcome this limitation, we propose a semantic and correlation disentangled graph convolution (SCD-GC) method, which builds an image-specific graph and employs graph propagation to reason over labels effectively. Specifically, we introduce a semantic disentangling module to extract category-wise semantic features as graph nodes and develop a correlation disentangling module to extract image-specific label correlations as graph edges. Performing graph convolutions on this image-specific graph allows for better mining of difficult labels with weak visual representations. Visualization experiments reveal that our approach successfully disentangles the dominant label correlations existing within the input image. Through extensive experimentation, we demonstrate that our method achieves superior results on the challenging Microsoft COCO (MS-COCO), PASCAL visual object classes (PASCAL-VOC), NUS web image dataset (NUS-WIDE), and Visual Genome 500 (VG-500) datasets. Code is available at GitHub: https://github.com/caigitrepo/SCDGC.
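
The abstract describes three pieces: category-wise semantic features as graph nodes, image-specific label correlations as graph edges, and graph propagation over that per-image graph. The sketch below illustrates that flow in PyTorch; it is a minimal illustration, not the authors' SCD-GC implementation (see the linked repository), and the module names, dimensions, and attention-style correlation estimator are assumptions.

```python
# Illustrative sketch of image-specific graph convolution for multi-label recognition.
# NOT the authors' SCD-GC code; names and design details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageSpecificGraphHead(nn.Module):
    def __init__(self, feat_dim=2048, node_dim=512, num_classes=80):
        super().__init__()
        # Semantic disentangling: per-category queries pool the feature map
        # into one embedding per label (category-wise semantic features = graph nodes).
        self.class_queries = nn.Parameter(torch.randn(num_classes, node_dim))
        self.proj = nn.Conv2d(feat_dim, node_dim, kernel_size=1)
        # Correlation disentangling: estimate image-specific label correlations
        # (graph edges) from the node embeddings of this particular image.
        self.edge_q = nn.Linear(node_dim, node_dim)
        self.edge_k = nn.Linear(node_dim, node_dim)
        # One graph-convolution step plus a per-node binary classifier.
        self.gcn = nn.Linear(node_dim, node_dim)
        self.cls = nn.Linear(node_dim, 1)

    def forward(self, feat_map):                          # feat_map: (B, C, H, W)
        x = self.proj(feat_map).flatten(2)                # (B, D, HW)
        attn = torch.einsum('kd,bdn->bkn', self.class_queries, x).softmax(dim=-1)
        nodes = torch.einsum('bkn,bdn->bkd', attn, x)     # (B, K, D) graph nodes
        # Image-specific adjacency: one K x K correlation matrix per image.
        adj = torch.einsum('bkd,bld->bkl', self.edge_q(nodes), self.edge_k(nodes))
        adj = adj.softmax(dim=-1)
        # Graph propagation: labels with weak visual evidence (occluded or small
        # objects) borrow evidence from correlated labels in the same image.
        nodes = F.relu(self.gcn(torch.einsum('bkl,bld->bkd', adj, nodes)) + nodes)
        return self.cls(nodes).squeeze(-1)                # (B, K) multi-label logits
```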

2.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 9789-9805, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022219

ABSTRACT

Neural networks often make predictions by relying on spurious correlations in the datasets rather than the intrinsic properties of the task of interest, and therefore suffer sharp degradation on out-of-distribution (OOD) test data. Existing de-bias learning frameworks try to capture specific dataset biases through annotations, but they fail to handle complicated OOD scenarios. Others implicitly identify dataset bias with specially designed low-capability biased models or losses, but they degrade when the training and testing data come from the same distribution. In this paper, we propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model. The base model is encouraged to focus on examples that are hard to solve with the biased models, thus remaining robust against spurious correlations at test time. GGD largely improves models' OOD generalization ability on various tasks, but sometimes over-estimates the bias level and degrades on in-distribution tests. We further re-analyze the ensemble process of GGD and introduce Curriculum Regularization, inspired by curriculum learning, which achieves a good trade-off between in-distribution (ID) and out-of-distribution performance. Extensive experiments on image classification, adversarial question answering, and visual question answering demonstrate the effectiveness of our method. GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased models without prior knowledge. Code is available at https://github.com/GeraldHan/GGD.
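
The core idea in the abstract is that the base model should concentrate on examples the biased models fail to solve, with a curriculum-style regularizer controlling how strongly this reweighting is applied to preserve in-distribution accuracy. The snippet below is a minimal sketch of one such training step; it is not the authors' GGD code (see the linked repository), and the weighting scheme, the curriculum coefficient, and all names are assumptions.

```python
# Illustrative greedy de-bias training step in the spirit of GGD.
# NOT the authors' implementation; the weighting and curriculum schedule are assumptions.
import torch
import torch.nn.functional as F

def debias_step(base_model, biased_model, x, y, optimizer, alpha=1.0):
    """One step: push the base model toward examples the (frozen,
    already-trained) biased model gets wrong."""
    with torch.no_grad():
        p_bias = F.softmax(biased_model(x), dim=-1)
        # Weight each example by how poorly the biased model handles it;
        # alpha in [0, 1] is a curriculum-style coefficient that gradually
        # turns the reweighting on, trading off ID vs. OOD performance.
        w = 1.0 - alpha * p_bias.gather(1, y.unsqueeze(1)).squeeze(1)
    logits = base_model(x)
    loss = (w * F.cross_entropy(logits, y, reduction='none')).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```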


Subject(s)
Algorithms, Learning, Head, Neural Networks, Computer