Bridging the gap with grad: Integrating active learning into semi-supervised domain generalization.
Li, Jingwei; Li, Yuan; Tan, Jie; Liu, Chengbao.
Affiliation
  • Li J; Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China. Electronic address: lijingwei2019@ia.ac.cn.
  • Li Y; Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China. Electronic address: liyuan2020@ia.ac.cn.
  • Tan J; Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China. Electronic address: tan.jie@tom.com.
  • Liu C; Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China. Electronic address: liuchengbao2016@163.com.
Neural Netw ; 171: 186-199, 2024 Mar.
Article in En | MEDLINE | ID: mdl-38096648
ABSTRACT
Domain generalization (DG) aims to generalize from a large amount of fully annotated source data. In practice, however, collecting labels for all source data is laborious. Some research draws inspiration from semi-supervised learning (SSL) and has developed a new task called semi-supervised domain generalization (SSDG), in which unlabeled source data are trained jointly with labeled data to significantly improve performance. Nevertheless, different studies adopt different settings, leading to unfair comparisons. Moreover, the initial annotation of unlabeled source data is random, causing unstable and unreliable training. To this end, we first specify the training paradigm and then leverage active learning (AL) to handle these issues. We further develop a new task called Active Semi-supervised Domain Generalization (ASSDG), which consists of two parts, i.e., SSDG and AL. We delve into the commonalities of SSL and AL and propose a unified framework called Gradient-Similarity-based Sample Filtering and Sorting (GSSFS) to iteratively train the SSDG and AL parts. Gradient similarity is used to select reliable and informative unlabeled source samples for the two parts, respectively. Our methods are simple yet efficient, and extensive experiments demonstrate that they achieve the best results on DG datasets in the low-data regime without bells and whistles.
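The abstract does not give implementation details, but the core idea of gradient-similarity selection can be sketched as follows. This is an illustrative assumption, not the authors' actual GSSFS implementation: the logistic-loss model, the helper names (`per_sample_grad`, `rank_by_gradient_similarity`), and the use of cosine similarity are all hypothetical stand-ins. The intuition being sketched is that unlabeled samples whose pseudo-label gradients align with a reference gradient computed from labeled data can be treated as reliable (candidates for pseudo-labeling in the SSL part), while dissimilar ones can be treated as informative (candidates for annotation in the AL part).

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two gradient vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def per_sample_grad(w, x, y):
    # Gradient of the logistic loss for a single (x, y) pair
    # under a linear model with weights w (hypothetical stand-in
    # for the per-sample gradients of a deep network).
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * x

def rank_by_gradient_similarity(w, X_unlab, pseudo_labels, ref_grad):
    """Rank unlabeled samples by cosine similarity between their
    pseudo-label gradient and a reference gradient from labeled data.
    Most-similar-first: the head of the ranking is 'reliable', the
    tail is 'informative'."""
    sims = [cosine(per_sample_grad(w, x, y), ref_grad)
            for x, y in zip(X_unlab, pseudo_labels)]
    return np.argsort(sims)[::-1]  # indices, most similar first
```

In a selection loop of this kind, the top of the ranking would feed the semi-supervised branch and the bottom would be queried for human annotation, which matches the filtering/sorting split the abstract describes.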
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Problem-Based Learning / Psychological Generalization Language: En Journal: Neural Netw Journal subject: NEUROLOGY Publication year: 2024 Document type: Article Country of publication: United States