Results 1 - 2 of 2
1.
IEEE Trans Pattern Anal Mach Intell ; 44(7): 3909-3924, 2022 07.
Article in English | MEDLINE | ID: mdl-33621167

ABSTRACT

Domain adaptation aims at adapting the knowledge learned from one domain (the source domain) to another (the target domain). Existing approaches typically require a portion of task-relevant target-domain data a priori. We propose an approach, zero-shot deep domain adaptation (ZDDA), which uses paired dual-domain task-irrelevant data to eliminate the need for task-relevant target-domain training data. ZDDA learns to generate common representations for source- and target-domain data. Either domain's representation can then be used to train a system that works on both domains, or, in sensor-fusion settings, to eliminate the need for one of the domains. Two variants of ZDDA have been developed: ZDDA for the classification task (ZDDA-C) and ZDDA for the metric-learning task (ZDDA-ML). Another limitation of existing approaches is that most are designed for the closed-set classification task, i.e., the sets of classes in both the source and target domains are "known." ZDDA-C, however, is also applicable to the open-set classification task, where not all classes are "known" during training. Moreover, the effectiveness of ZDDA-ML shows that ZDDA's applicability is not limited to classification tasks. ZDDA-C and ZDDA-ML are tested on classification and metric-learning tasks, respectively. Under most experimental conditions, ZDDA outperforms the baseline without using task-relevant target-domain training data.
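
The core mechanism is easiest to see in code. Below is a minimal PyTorch sketch of the representation-alignment idea the abstract describes, assuming paired RGB (source) and depth (target) task-irrelevant data; all module names, architectures, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the ZDDA idea: align a target-domain encoder to a frozen
# source-domain encoder using paired task-IRRELEVANT dual-domain data,
# so no task-relevant target-domain training data is needed.
# The RGB/depth pairing and all sizes here are assumptions.
import torch
import torch.nn as nn

def make_encoder(in_channels: int, feat_dim: int = 128) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, feat_dim),
    )

source_enc = make_encoder(in_channels=3)   # e.g., RGB
target_enc = make_encoder(in_channels=1)   # e.g., depth

# Step 1: freeze the source encoder; train the target encoder so that
# paired task-irrelevant samples map to the same representation.
for p in source_enc.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(target_enc.parameters(), lr=1e-3)

def alignment_step(rgb: torch.Tensor, depth: torch.Tensor) -> float:
    loss = nn.functional.mse_loss(target_enc(depth), source_enc(rgb))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Step 2 (not shown): train a task classifier on top of source_enc using
# task-relevant SOURCE data only. At test time the same classifier is
# reused unchanged on top of target_enc, yielding target-domain
# predictions without any task-relevant target-domain training data.
```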


Subject(s)
Algorithms, Machine Learning, Learning
2.
IEEE Trans Pattern Anal Mach Intell ; 42(12): 2996-3010, 2020 12.
Article in English | MEDLINE | ID: mdl-31180839

ABSTRACT

With only coarse labels, weakly supervised learning typically uses top-down attention maps generated by back-propagating gradients as priors for tasks such as object localization and semantic segmentation. While these attention maps are intuitive and informative explanations of deep neural networks, there is no effective mechanism to manipulate the network's attention during the learning process. In this paper, we address three shortcomings of previous approaches to modeling such attention maps in one common framework. First, we make attention maps a natural and explicit component of the training pipeline so that they are end-to-end trainable. Second, we provide self-guidance directly on these maps by exploiting supervision from the network itself to improve them toward specific target tasks. Lastly, we propose a design that seamlessly bridges the gap between using weak supervision and extra supervision when the latter is available. Despite its simplicity, experiments on the semantic segmentation task demonstrate the effectiveness of our method. Moreover, the proposed framework provides a way not only to explain the focus of the learner but also to feed back direct guidance toward specific tasks. Under mild assumptions, our method can also be understood as a plug-in to existing convolutional neural networks that improves their generalization performance.
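
To make the two key ideas concrete, here is a minimal PyTorch sketch of an attention map as an explicit, end-to-end trainable component, plus a self-guidance term derived from the network's own predictions. The architecture sizes and the exact erasing-based self-guidance loss are illustrative assumptions for this sketch, not the paper's precise formulation.

```python
# Sketch: (1) an explicit attention head whose map gates the features,
# so the map itself is end-to-end trainable; (2) a self-guidance loss
# that uses the network's own predictions (illustrative assumption:
# erasing attended regions should reduce true-class confidence,
# pushing the map to cover the whole object).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNet(nn.Module):
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.att_head = nn.Conv2d(64, 1, 1)       # explicit attention map
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, 64, H, W)
        att = torch.sigmoid(self.att_head(feats))  # (B, 1, H, W) in [0, 1]
        pooled = (feats * att).flatten(2).mean(-1) # attention-weighted pooling
        return self.classifier(pooled), att

def self_guided_loss(model, x, labels, lam: float = 0.1):
    logits, att = model(x)
    cls_loss = F.cross_entropy(logits, labels)     # weak (image-level) labels
    # Self-guidance: erase the attended regions and penalize any remaining
    # confidence in the true class on the erased input.
    mask = F.interpolate(att, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)
    erased_logits, _ = model(x * (1 - mask))
    guide = F.softmax(erased_logits, dim=1).gather(1, labels[:, None]).mean()
    return cls_loss + lam * guide
```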


Subject(s)
Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Semantics, Supervised Machine Learning, Pattern Recognition, Automated/methods